title | content | commands | url
---|---|---|---|
Chapter 6. Configuring metrics for the monitoring stack | Chapter 6. Configuring metrics for the monitoring stack As a cluster administrator, you can configure the OpenTelemetry Collector custom resource (CR) to perform the following tasks: Create a Prometheus ServiceMonitor CR for scraping the Collector's pipeline metrics and the enabled Prometheus exporters. Configure the Prometheus receiver to scrape metrics from the in-cluster monitoring stack. 6.1. Configuration for sending metrics to the monitoring stack You can configure the OpenTelemetryCollector custom resource (CR) to create a Prometheus ServiceMonitor CR or a PodMonitor CR for a sidecar deployment. A ServiceMonitor can scrape Collector's internal metrics endpoint and Prometheus exporter metrics endpoints. Example of the OpenTelemetry Collector CR with the Prometheus exporter apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true 1 config: exporters: prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: telemetry: metrics: address: ":8888" pipelines: metrics: exporters: [prometheus] 1 Configures the Red Hat build of OpenTelemetry Operator to create the Prometheus ServiceMonitor CR or PodMonitor CR to scrape the Collector's internal metrics endpoint and the Prometheus exporter metrics endpoints. Note Setting enableMetrics to true creates the following two ServiceMonitor instances: One ServiceMonitor instance for the <instance_name>-collector-monitoring service. This ServiceMonitor instance scrapes the Collector's internal metrics. One ServiceMonitor instance for the <instance_name>-collector service. This ServiceMonitor instance scrapes the metrics exposed by the Prometheus exporter instances. Alternatively, a manually created Prometheus PodMonitor CR can provide fine control, for example removing duplicated labels added during Prometheus scraping. Example of the PodMonitor CR that configures the monitoring stack to scrape the Collector metrics apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: otel-collector spec: selector: matchLabels: app.kubernetes.io/name: <cr_name>-collector 1 podMetricsEndpoints: - port: metrics 2 - port: promexporter 3 relabelings: - action: labeldrop regex: pod - action: labeldrop regex: container - action: labeldrop regex: endpoint metricRelabelings: - action: labeldrop regex: instance - action: labeldrop regex: job 1 The name of the OpenTelemetry Collector CR. 2 The name of the internal metrics port for the OpenTelemetry Collector. This port name is always metrics . 3 The name of the Prometheus exporter port for the OpenTelemetry Collector. 6.2. Configuration for receiving metrics from the monitoring stack A configured OpenTelemetry Collector custom resource (CR) can set up the Prometheus receiver to scrape metrics from the in-cluster monitoring stack. 
Example of the OpenTelemetry Collector CR for scraping metrics from the in-cluster monitoring stack apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view 1 subjects: - kind: ServiceAccount name: otel-collector namespace: observability --- kind: ConfigMap apiVersion: v1 metadata: name: cabundle namespace: observability annotations: service.beta.openshift.io/inject-cabundle: "true" 2 --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: volumeMounts: - name: cabundle-volume mountPath: /etc/pki/ca-trust/source/service-ca readOnly: true volumes: - name: cabundle-volume configMap: name: cabundle mode: deployment config: receivers: prometheus: 3 config: scrape_configs: - job_name: 'federate' scrape_interval: 15s scheme: https tls_config: ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token honor_labels: false params: 'match[]': - '{__name__="<metric_name>"}' 4 metrics_path: '/federate' static_configs: - targets: - "prometheus-k8s.openshift-monitoring.svc.cluster.local:9091" exporters: debug: 5 verbosity: detailed service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug] 1 Assigns the cluster-monitoring-view cluster role to the service account of the OpenTelemetry Collector so that it can access the metrics data. 2 Injects the OpenShift service CA for configuring TLS in the Prometheus receiver. 3 Configures the Prometheus receiver to scrape the federate endpoint from the in-cluster monitoring stack. 4 Uses the Prometheus query language to select the metrics to be scraped. See the in-cluster monitoring documentation for more details and limitations of the federate endpoint. 5 Configures the debug exporter to print the metrics to the standard output. 6.3. Additional resources Querying metrics by using the federation endpoint for Prometheus | [
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true 1 config: exporters: prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: telemetry: metrics: address: \":8888\" pipelines: metrics: exporters: [prometheus]",
"apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: otel-collector spec: selector: matchLabels: app.kubernetes.io/name: <cr_name>-collector 1 podMetricsEndpoints: - port: metrics 2 - port: promexporter 3 relabelings: - action: labeldrop regex: pod - action: labeldrop regex: container - action: labeldrop regex: endpoint metricRelabelings: - action: labeldrop regex: instance - action: labeldrop regex: job",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view 1 subjects: - kind: ServiceAccount name: otel-collector namespace: observability --- kind: ConfigMap apiVersion: v1 metadata: name: cabundle namespce: observability annotations: service.beta.openshift.io/inject-cabundle: \"true\" 2 --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: volumeMounts: - name: cabundle-volume mountPath: /etc/pki/ca-trust/source/service-ca readOnly: true volumes: - name: cabundle-volume configMap: name: cabundle mode: deployment config: receivers: prometheus: 3 config: scrape_configs: - job_name: 'federate' scrape_interval: 15s scheme: https tls_config: ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token honor_labels: false params: 'match[]': - '{__name__=\"<metric_name>\"}' 4 metrics_path: '/federate' static_configs: - targets: - \"prometheus-k8s.openshift-monitoring.svc.cluster.local:9091\" exporters: debug: 5 verbosity: detailed service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/red_hat_build_of_opentelemetry/otel-configuring-metrics-for-monitoring-stack |
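A quick way to confirm the effect of enableMetrics: true described in Chapter 6 is to list the monitor objects that the Operator generated. This is a hedged verification sketch: <namespace> and <cr_name> are placeholders for your own deployment, and the exact object names depend on the name of your OpenTelemetryCollector CR.

```
# Hypothetical check: list the ServiceMonitor and PodMonitor objects created by the Operator.
# Expect <cr_name>-collector and <cr_name>-collector-monitoring among the ServiceMonitors.
$ oc get servicemonitors,podmonitors -n <namespace>
```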
Part I. Creating and managing manifests | Part I. Creating and managing manifests A manifest is a set of encrypted files containing information about your subscriptions. You can use a manifest to import your subscriptions into Red Hat Satellite. After the manifest is imported, you can use it to manage RHEL systems and synchronize content. An authorized Satellite user on a connected network can create, export, delete, and modify manifests from the Red Hat Hybrid Cloud Console. Note Users on a connected network create and manage their subscription manifests on the Red Hat Hybrid Cloud Console; however, users on a disconnected network must use the Customer Portal. For more information, see Using manifests for a disconnected Satellite Server . | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/creating_and_managing_manifests_for_a_connected_satellite_server/assembly-creating-managing-manifests-connected-satellite |
Chapter 1. Migrating your data before an upgrade | Chapter 1. Migrating your data before an upgrade With the release of Red Hat Trusted Profile Analyzer (RHTPA) version 1.2, we implemented a new schema for ingested software bill of materials (SBOM) and vulnerability exploitability exchange (VEX) data. Before upgrading, you must configure the RHTPA 1.2 values file to do a data migration to this new schema for your SBOM and VEX data. This data migration happens during the upgrade process to RHTPA version 1.2. Prerequisites Installation of RHTPA 1.1.2 on Red Hat OpenShift. A new PostgreSQL database. A workstation with the oc and helm binaries installed. Procedure On your workstation, open a terminal, and log in to OpenShift by using the command-line interface: Syntax oc login --token=TOKEN --server=SERVER_URL_AND_PORT Example Note You can find your login token and URL from the OpenShift web console to use on the command line. Log in to the OpenShift web console. Click your user name, and click Copy login command. Enter your user name and password again, and click Display Token to view the command. Export the RHTPA project namespace: Syntax export NAMESPACE=RHTPA_NAMESPACE Example $ export NAMESPACE=trusted_profile_analyzer Verify that the RHTPA 1.1.2 installation is in the project namespace: Example $ helm list -n $NAMESPACE Uninstall RHTPA 1.1.2: Example $ helm uninstall redhat-trusted-profile-analyzer -n $NAMESPACE Open the RHTPA 1.2 values file for editing, and make the following changes: Reference the new PostgreSQL database instance. Reference the same simple storage service (S3) storage used for version 1.1.2. Reference the same messaging queues used for version 1.1.2. Set the modules.vexinationCollector.recollectVEX and modules.bombasticCollector.recollectSBOM options to a value of true . Note See the Deployment Guide appendixes for values file templates used with RHTPA deployments on OpenShift. Start the upgrade by using the updated RHTPA 1.2 Helm chart for OpenShift: Syntax helm install redhat-trusted-profile-analyzer openshift-helm-charts/redhat-trusted-profile-analyzer -n $NAMESPACE --values PATH_TO_VALUES_FILE --set-string appDomain=$APP_DOMAIN_URL Example Note You can run this Helm chart many times to apply the currently configured state from the values file. Verify that the data migration was successful. View the SBOM and VEX indexer logs, looking for the Reindexing all documents and Reindexing finished messages: Example $ oc logs bombastic-indexer -n $NAMESPACE $ oc logs vexination-indexer -n $NAMESPACE You will also see the following error messages: Error syncing index: Open("Schema error: 'An index exists but the schema does not match.'"), keeping old Error loading initial index: Open("Schema error: 'An index exists but the schema does not match.'") Because of this schema mismatch, the bombastic-collector and vexination-collector pods start the recollect containers to gather all the existing SBOM and VEX data. Both recollect-sbom and recollect-vex init-containers should complete and stop successfully. After the migration finishes, you can see all your existing SBOM and VEX data in the RHTPA console. Additional resources Values file template for Amazon Web Services (AWS) . Values file template for other service providers . | [
"login --token=TOKEN --server= SERVER_URL_AND_PORT",
"oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443",
"export NAMESPACE= RHTPA_NAMESPACE",
"export NAMESPACE=trusted_profile_analyzer",
"helm list -n USDNAMESPACE",
"helm uninstall redhat-trusted-profile-analyzer -n USDNAMESPACE",
"helm install redhat-trusted-profile-analyzer openshift-helm-charts/redhat-trusted-profile-analyzer -n USDNAMESPACE --values PATH_TO_VALUES_FILE --set-string appDomain=USDAPP_DOMAIN_URL",
"helm install redhat-trusted-profile-analyzer openshift-helm-charts/redhat-trusted-profile-analyzer -n USDNAMESPACE --values values-rhtpa.yaml --set-string appDomain=USDAPP_DOMAIN_URL",
"oc logs bombastic-indexer -n USDNAMESPACE oc logs vexination-indexer -n USDNAMESPACE",
"Error syncing index: Open(\"Schema error: 'An index exists but the schema does not match.'\"), keeping old Error loading initial index: Open(\"Schema error: 'An index exists but the schema does not match.'\")"
]
| https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/deployment_guide/migrating-your-data-before-an-upgrade_deploy |
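For reference, the recollect switches named in the migration procedure above live under the modules section of the values file. The fragment below is a minimal sketch under that assumption, not a complete values file; merge it into the RHTPA 1.2 values-file template for your provider.

```yaml
# Sketch of the values-file fragment described in the procedure above.
modules:
  vexinationCollector:
    recollectVEX: true      # re-collect existing VEX data during the upgrade
  bombasticCollector:
    recollectSBOM: true     # re-collect existing SBOM data during the upgrade
```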
Chapter 1. Overview of jlink | Chapter 1. Overview of jlink jlink is a Java command-line tool that is used to generate a custom Java runtime environment (JRE). You can use your customized JRE to run Java applications. Using jlink, you can create a custom runtime environment that includes only the relevant class files. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/using_jlink_to_customize_java_runtime_environment/jlink-overview |
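As an illustration of the jlink overview above, a minimal invocation looks like the following sketch; the module list and output directory are only examples, and you would include whichever modules your application actually requires.

```
# Build a trimmed runtime image containing only the listed modules,
# then run the java launcher from the resulting image.
$ jlink --add-modules java.base,java.logging --output /tmp/custom-jre
$ /tmp/custom-jre/bin/java -version
```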
8.2. Query/Session Details | 8.2. Query/Session Details Name Description Current Sessions List current connected sessions Current Request List current executing requests Current Transactions List current executing transactions Query Plan Retrieves the query plan for a specific request There are administrative options for terminating sessions, queries, and transactions. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/querysession_details |
Chapter 3. The gofmt formatting tool | Chapter 3. The gofmt formatting tool Instead of a style guide, the Go programming language uses the gofmt code formatting tool. gofmt automatically formats your code according to the Go layout rules. 3.1. Prerequisites Go Toolset is installed. For more information, see Installing Go Toolset . 3.2. Formatting code You can use the gofmt formatting tool to format code in a given path. When the path leads to a single file, the changes apply only to the file. When the path leads to a directory, all .go files in the directory are processed. Procedure To format your code in a given path, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to format. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to format. Note To print the formatted code to standard output instead of writing it to the original file, omit the -w option. 3.3. Previewing changes to code You can use the gofmt formatting tool to preview changes done by formatting code in a given path. The output in unified diff format is printed to standard output. Procedure To show differences in your code in a given path, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to compare. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to compare. 3.4. Simplifying code You can use the gofmt formatting tool to simplify your code. Procedure To simplify code in a given path, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to simplify. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to simplify. To apply the changes, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to format. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to format. 3.5. Refactoring code You can use the gofmt formatting tool to refactor your code by applying arbitrary substitutions. Procedure To refactor your code in a given path, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to refactor and < rewrite_rule > with the rule you want it to be rewritten by. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to refactor and < rewrite_rule > with the rule you want it to be rewritten by. To apply the changes, run: On Red Hat Enterprise Linux 8: Replace < code_path > with the path to the code you want to format. On Red Hat Enterprise Linux 9: Replace < code_path > with the path to the code you want to format. 3.6. Additional resources The official gofmt documentation . | [
"gofmt -w < code_path >",
"gofmt -w < code_path >",
"gofmt -d < code_path >",
"gofmt -d < code_path >",
"gofmt -s -w < code_path >",
"gofmt -s -w < code_path >",
"gofmt -w < code_path >",
"gofmt -w < code_path >",
"gofmt -r -w < rewrite_rule > < code_path >",
"gofmt -r -w < rewrite_rule > < code_path >",
"gofmt -w < code_path >",
"gofmt -w < code_path >"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_go_1.21.0_toolset/assembly_the-gofmt-formatting-tool_using-go-toolset |
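To make the -r rewrite option in Section 3.5 concrete, the following sketch uses the rewrite rule cited in the upstream gofmt documentation, which replaces a redundant slice bound; <code_path> remains a placeholder for your package or file path.

```
# Rewrite a[b:len(a)] to the equivalent a[b:] across the given path.
$ gofmt -r 'a[b:len(a)] -> a[b:]' -w <code_path>
```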
Chapter 1. Overview | Chapter 1. Overview AMQ Ruby is a library for developing messaging applications. It enables you to write Ruby applications that send and receive AMQP messages. Important The AMQ Ruby client is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . AMQ Ruby is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.8 Release Notes . AMQ Ruby is based on the Proton API from Apache Qpid . For detailed API documentation, see the AMQ Ruby API reference . 1.1. Key features An event-driven API that simplifies integration with existing applications SSL/TLS for secure communication Flexible SASL authentication Automatic reconnect and failover Seamless conversion between AMQP and language-native data types Access to all the features and capabilities of AMQP 1.0 1.2. Supported standards and protocols AMQ Ruby supports the following industry-recognized standards and network protocols: Version 1.0 of the Advanced Message Queueing Protocol (AMQP) Versions 1.0, 1.1, 1.2, and 1.3 of the Transport Layer Security (TLS) protocol, the successor to SSL Simple Authentication and Security Layer (SASL) mechanisms supported by Cyrus SASL , including ANONYMOUS, PLAIN, SCRAM, EXTERNAL, and GSSAPI (Kerberos) Modern TCP with IPv6 1.3. Supported configurations AMQ Ruby supports the OS and language versions listed below. For more information, see Red Hat AMQ 7 Supported Configurations . Red Hat Enterprise Linux 7 with Ruby 2.0 Red Hat Enterprise Linux 8 with Ruby 2.5 AMQ Ruby is supported in combination with the following AMQ components and versions: All versions of AMQ Broker All versions of AMQ Interconnect A-MQ 6 versions 6.2.1 and newer 1.4. Terms and concepts This section introduces the core API entities and describes how they operate together. Table 1.1. API terms Entity Description Container A top-level container of connections. Connection A channel for communication between two peers on a network. It contains sessions. Session A context for sending and receiving messages. It contains senders and receivers. Sender A channel for sending messages to a target. It has a target. Receiver A channel for receiving messages from a source. It has a source. Source A named point of origin for messages. Target A named destination for messages. Message An application-specific piece of information. Delivery A message transfer. AMQ Ruby sends and receives messages . Messages are transferred between connected peers over senders and receivers . Senders and receivers are established over sessions . Sessions are established over connections . Connections are established between two uniquely identified containers . Though a connection can have multiple sessions, often this is not needed. The API allows you to ignore sessions unless you require them. A sending peer creates a sender to send messages. The sender has a target that identifies a queue or topic at the remote peer. 
A receiving peer creates a receiver to receive messages. The receiver has a source that identifies a queue or topic at the remote peer. The sending of a message is called a delivery . The message is the content sent, including all metadata such as headers and annotations. The delivery is the protocol exchange associated with the transfer of that content. To indicate that a delivery is complete, either the sender or the receiver settles it. When the other side learns that it has been settled, it will no longer communicate about that delivery. The receiver can also indicate whether it accepts or rejects the message. 1.5. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: USD cd <project-dir> | [
"cd <project-dir>"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_ruby_client/overview |
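To tie the terms in Section 1.4 together, here is a minimal, hedged sketch of a sender written against the Proton-based API that this chapter describes. The broker address and target queue name are assumptions, and real deployments typically add the TLS and SASL options omitted here.

```ruby
require 'qpid_proton'

# A container drives the connection; the connection opens a sender toward a
# target address; each message transfer over that sender is a delivery.
class HelloSender < Qpid::Proton::MessagingHandler
  def on_container_start(container)
    conn = container.connect("amqp://broker.example.com:5672") # assumed broker address
    conn.open_sender("examples")                               # assumed target queue
  end

  def on_sendable(sender)
    sender.send(Qpid::Proton::Message.new("Hello world!"))
    sender.close
    sender.connection.close
  end
end

Qpid::Proton::Container.new(HelloSender.new).run
```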
7.3. Run Red Hat JBoss Data Grid in Clustered Mode | 7.3. Run Red Hat JBoss Data Grid in Clustered Mode Clustered mode refers to a cluster made up of two or more Red Hat JBoss Data Grid instances. Run the following script to start JBoss Data Grid in clustered mode: This command starts JBoss Data Grid using the default configuration information provided in the $JDG_HOME/standalone/configuration/clustered.xml file. | [
"USDJDG_HOME/bin/clustered.sh"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/run_red_hat_jboss_data_grid_in_clustered_mode |
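If you want to try clustered mode on a single machine, a common approach is to start a second instance with a unique node name and a port offset so the two instances do not collide. The property names below are standard JBoss-style system properties; the node names and offset value are only examples, and the second instance may also need its own server directory.

```
# First instance on the default ports:
$JDG_HOME/bin/clustered.sh -Djboss.node.name=node1

# Second instance on the same host, with all ports shifted by 100:
$JDG_HOME/bin/clustered.sh -Djboss.node.name=node2 -Djboss.socket.binding.port-offset=100
```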
Chapter 4. Configuring your Logging deployment | Chapter 4. Configuring your Logging deployment 4.1. About the Cluster Logging custom resource To configure OpenShift Logging, you customize the ClusterLogging custom resource (CR). 4.1.1. About the ClusterLogging custom resource To make changes to your OpenShift Logging environment, create and modify the ClusterLogging custom resource (CR). Instructions for creating or modifying a CR are provided in this documentation as appropriate. The following example shows a typical custom resource for OpenShift Logging. Sample ClusterLogging custom resource (CR) apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" 1 namespace: "openshift-logging" 2 spec: managementState: "Managed" 3 logStore: type: "elasticsearch" 4 retentionPolicy: application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 resources: limits: memory: 16Gi requests: cpu: 500m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: 5 type: "kibana" kibana: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi replicas: 1 collection: 6 logs: type: "fluentd" fluentd: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi 1 The CR name must be instance . 2 The CR must be installed to the openshift-logging namespace. 3 The Red Hat OpenShift Logging Operator management state. When set to unmanaged, the operator is in an unsupported state and will not get updates. 4 Settings for the log store, including retention policy, the number of nodes, the resource requests and limits, and the storage class. 5 Settings for the visualizer, including the resource requests and limits, and the number of pod replicas. 6 Settings for the log collector, including the resource requests and limits. 4.2. Configuring the logging collector OpenShift Container Platform uses Fluentd to collect operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. You can configure the CPU and memory limits for the log collector and move the log collector pods to specific nodes . All supported modifications to the log collector can be performed through the spec.collection.log.fluentd stanza in the ClusterLogging custom resource (CR). 4.2.1. About unsupported configurations The supported way of configuring OpenShift Logging is by using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator reconcile any differences. The Operators reverse everything to the defined state by default and by design. Note If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator or OpenShift Elasticsearch Operator to Unmanaged . An unmanaged OpenShift Logging environment is not supported and does not receive updates until you return OpenShift Logging to Managed . 4.2.2. Viewing logging collector pods You can view the Fluentd logging collector pods and the corresponding nodes that they are running on. 
The Fluentd logging collector pods run only in the openshift-logging project. Procedure Run the following command in the openshift-logging project to view the Fluentd logging collector pods and their details: USD oc get pods --selector component=fluentd -o wide -n openshift-logging Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES fluentd-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> fluentd-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> fluentd-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> fluentd-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> fluentd-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none> 4.2.3. Configure log collector CPU and memory limits The log collector allows for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: collection: logs: fluentd: resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi 1 Specify the CPU and memory limits and requests as needed. The values shown are the default values. 4.2.4. Advanced configuration for the log forwarder OpenShift Logging includes multiple Fluentd parameters that you can use for tuning the performance of the Fluentd log forwarder. With these parameters, you can change the following Fluentd behaviors: the size of Fluentd chunks and chunk buffer the Fluentd chunk flushing behavior the Fluentd chunk forwarding retry behavior Fluentd collects log data in a single blob called a chunk . When Fluentd creates a chunk, the chunk is considered to be in the stage , where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the queue , where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured. By default in OpenShift Container Platform, Fluentd uses the exponential backoff method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the periodic retry method instead, which retries flushing the chunks at a specified interval. By default, Fluentd retries chunk flushing indefinitely. In OpenShift Container Platform, you cannot change the indefinite retry behavior. These parameters can help you determine the trade-offs between latency and throughput. To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries. Be aware that larger buffers require more space on the node file system. To optimize for low latency, you could use the parameters to send data as soon as possible, avoid the build-up of batches, have shorter queues and buffers, and use more frequent flush and retries. You can configure the chunking and flushing behavior using the following parameters in the ClusterLogging custom resource (CR). The parameters are then automatically added to the Fluentd config map for use by Fluentd. 
Note These parameters are: Not relevant to most users. The default settings should give good general performance. Only for advanced users with detailed knowledge of Fluentd configuration and performance. Only for performance tuning. They have no effect on functional aspects of logging. Table 4.1. Advanced Fluentd Configuration Parameters Parmeter Description Default chunkLimitSize The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size. Then, Fluentd sends the chunk to the queue and opens a new chunk. 8m totalLimitSize The maximum size of the buffer, which is the total size of the stage and the queue. If the buffer size exceeds this value, Fluentd stops adding data to chunks and fails with an error. All data not in chunks is lost. 8G flushInterval The interval between chunk flushes. You can use s (seconds), m (minutes), h (hours), or d (days). 1s flushMode The method to perform flushes: lazy : Flush chunks based on the timekey parameter. You cannot modify the timekey parameter. interval : Flush chunks based on the flushInterval parameter. immediate : Flush chunks immediately after data is added to a chunk. interval flushThreadCount The number of threads that perform chunk flushing. Increasing the number of threads improves the flush throughput, which hides network latency. 2 overflowAction The chunking behavior when the queue is full: throw_exception : Raise an exception to show in the log. block : Stop data chunking until the full buffer issue is resolved. drop_oldest_chunk : Drop the oldest chunk to accept new incoming chunks. Older chunks have less value than newer chunks. block retryMaxInterval The maximum time in seconds for the exponential_backoff retry method. 300s retryType The retry method when flushing fails: exponential_backoff : Increase the time between flush retries. Fluentd doubles the time it waits until the retry until the retry_max_interval parameter is reached. periodic : Retries flushes periodically, based on the retryWait parameter. exponential_backoff retryWait The time in seconds before the chunk flush. 1s For more information on the Fluentd chunk lifecycle, see Buffer Plugins in the Fluentd documentation. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance Add or modify any of the following parameters: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: forwarder: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: "300s" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9 ... 1 Specify the maximum size of each chunk before it is queued for flushing. 2 Specify the interval between chunk flushes. 3 Specify the method to perform chunk flushes: lazy , interval , or immediate . 4 Specify the number of threads to use for chunk flushes. 5 Specify the chunking behavior when the queue is full: throw_exception , block , or drop_oldest_chunk . 6 Specify the maximum interval in seconds for the exponential_backoff chunk flushing method. 7 Specify the retry type when chunk flushing fails: exponential_backoff or periodic . 8 Specify the time in seconds before the chunk flush. 9 Specify the maximum size of the chunk buffer. 
Verify that the Fluentd pods are redeployed: USD oc get pods -n openshift-logging Check that the new values are in the fluentd config map: USD oc extract configmap/fluentd --confirm Example fluentd.conf <buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}" total_limit_size 32m chunk_limit_size 8m overflow_action throw_exception </buffer> 4.2.5. Removing unused components if you do not use the default Elasticsearch log store As an administrator, in the rare case that you forward logs to a third-party log store and do not use the default Elasticsearch log store, you can remove several unused components from your logging cluster. In other words, if you do not use the default Elasticsearch log store, you can remove the internal Elasticsearch logStore and Kibana visualization components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources. Prerequisites Verify that your log forwarder does not send log data to the default internal Elasticsearch cluster. Inspect the ClusterLogForwarder CR YAML file that you used to configure log forwarding. Verify that it does not have an outputRefs element that specifies default . For example: outputRefs: - default Warning Suppose the ClusterLogForwarder CR forwards log data to the internal Elasticsearch cluster, and you remove the logStore component from the ClusterLogging CR. In that case, the internal Elasticsearch cluster will not be present to store the log data. This absence can cause data loss. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance If they are present, remove the logStore and visualization stanzas from the ClusterLogging CR. Preserve the collection stanza of the ClusterLogging CR. The result should look similar to the following example: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: "openshift-logging" spec: managementState: "Managed" collection: logs: type: "fluentd" fluentd: {} Verify that the Fluentd pods are redeployed: USD oc get pods -n openshift-logging Additional resources Forwarding logs to third-party systems 4.3. Configuring the log store OpenShift Container Platform uses Elasticsearch 6 (ES) to store and organize the log data. You can make modifications to your log store, including: storage for your Elasticsearch cluster shard replication across data nodes in the cluster, from full replication to no replication external access to Elasticsearch data Elasticsearch is a memory-intensive application. Each Elasticsearch node needs at least 16G of memory for both memory requests and limits, unless you specify otherwise in the ClusterLogging custom resource. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory, up to a maximum of 64G for each Elasticsearch node. Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments. 4.3.1. 
Forward audit logs to the log store Because the internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs, by default audit logs are not stored in the internal Elasticsearch instance. If you want to send the audit logs to the internal log store, for example to view the audit logs in Kibana, you must use the Log Forward API. Important The internal OpenShift Container Platform Elasticsearch log store does not provide secure storage for audit logs. We recommend you ensure that the system to which you forward audit logs is compliant with your organizational and governmental regulations and is properly secured. OpenShift Logging does not comply with those regulations. Procedure To use the Log Forward API to forward audit logs to the internal Elasticsearch instance: Create a ClusterLogForwarder CR YAML file or edit your existing CR: Create a CR to send all log types to the internal Elasticsearch instance. You can use the following example without making any changes: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default 1 A pipeline defines the type of logs to forward using the specified output. The default output forwards logs to the internal Elasticsearch instance. Note You must specify all three types of logs in the pipeline: application, infrastructure, and audit. If you do not specify a log type, those logs are not stored and will be lost. If you have an existing ClusterLogForwarder CR, add a pipeline to the default output for the audit logs. You do not need to define the default output. For example: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: "elasticsearch" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: "elasticsearch" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: "fluentdForward" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1 1 This pipeline sends the audit logs to the internal Elasticsearch instance in addition to an external instance. Additional resources For more information on the Log Forwarding API, see Forwarding logs using the Log Forwarding API . 4.3.2. Configuring log retention time You can configure a retention policy that specifies how long the default Elasticsearch log store keeps indices for each of the three log sources: infrastructure logs, application logs, and audit logs. To configure the retention policy, you set a maxAge parameter for each log source in the ClusterLogging custom resource (CR). The CR applies these values to the Elasticsearch rollover schedule, which determines when Elasticsearch deletes the rolled-over indices. Elasticsearch rolls over an index, moving the current index and creating a new index, when an index matches any of the following conditions: The index is older than the rollover.maxAge value in the Elasticsearch CR. 
The index size is greater than 40 GB x the number of primary shards. The index doc count is greater than 40960 KB x the number of primary shards. Elasticsearch deletes the rolled-over indices based on the retention policy you configure. If you do not create a retention policy for any log sources, logs are deleted after seven days by default. Prerequisites OpenShift Logging and the OpenShift Elasticsearch Operator must be installed. Procedure To configure the log retention time: Edit the ClusterLogging CR to add or modify the retentionPolicy parameter: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" ... spec: managementState: "Managed" logStore: type: "elasticsearch" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 ... 1 Specify the time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, 1d for one day. Logs older than the maxAge are deleted. By default, logs are retained for seven days. You can verify the settings in the Elasticsearch custom resource (CR). For example, the Red Hat OpenShift Logging Operator updated the following Elasticsearch CR to configure a retention policy that includes settings to roll over active indices for the infrastructure logs every eight hours and the rolled-over indices are deleted seven days after rollover. OpenShift Container Platform checks every 15 minutes to determine if the indices need to be rolled over. apiVersion: "logging.openshift.io/v1" kind: "Elasticsearch" metadata: name: "elasticsearch" spec: ... indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4 ... 1 For each log source, the retention policy indicates when to delete and roll over logs for that source. 2 When OpenShift Container Platform deletes the rolled-over indices. This setting is the maxAge you set in the ClusterLogging CR. 3 The index age for OpenShift Container Platform to consider when rolling over the indices. This value is determined from the maxAge you set in the ClusterLogging CR. 4 When OpenShift Container Platform checks if the indices should be rolled over. This setting is the default and cannot be changed. Note Modifying the Elasticsearch CR is not supported. All changes to the retention policies must be made in the ClusterLogging CR. The OpenShift Elasticsearch Operator deploys a cron job to roll over indices for each mapping using the defined policy, scheduled using the pollInterval . USD oc get cronjob Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s 4.3.3. Configuring CPU and memory requests for the log store Each component specification allows for adjustments to both the CPU and memory requests. You should not have to manually adjust these values as the OpenShift Elasticsearch Operator sets values sufficient for your environment. Note In large-scale clusters, the default memory limit for the Elasticsearch proxy container might not be sufficient, causing the proxy container to be OOMKilled. If you experience this issue, increase the memory requests and limits for the Elasticsearch proxy. Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. 
For production use, you should have no less than the default 16Gi allocated to each pod. Preferably you should allocate as much as possible, up to 64Gi per pod. Prerequisites OpenShift Logging and Elasticsearch must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: 1 resources: limits: 2 memory: "32Gi" requests: 3 cpu: "1" memory: "16Gi" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi 1 Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request. 2 The maximum amount of resources a pod can use. 3 The minimum resources required to schedule a pod. 4 Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that are sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request. When adjusting the amount of Elasticsearch memory, the same value should be used for both requests and limits . For example: resources: limits: 1 memory: "32Gi" requests: 2 cpu: "8" memory: "32Gi" 1 The maximum amount of the resource. 2 The minimum amount required. Kubernetes generally adheres the node configuration and does not allow Elasticsearch to use the specified limits. Setting the same value for the requests and limits ensures that Elasticsearch can use the memory you want, assuming the node has the memory available. 4.3.4. Configuring replication policy for the log store You can define how Elasticsearch shards are replicated across data nodes in the cluster. Prerequisites OpenShift Logging and Elasticsearch must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit clusterlogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: redundancyPolicy: "SingleRedundancy" 1 1 Specify a redundancy policy for the shards. The change is applied upon saving the changes. FullRedundancy . Elasticsearch fully replicates the primary shards for each index to every data node. This provides the highest safety, but at the cost of the highest amount of disk required and the poorest performance. MultipleRedundancy . Elasticsearch fully replicates the primary shards for each index to half of the data nodes. This provides a good tradeoff between safety and performance. SingleRedundancy . Elasticsearch makes one copy of the primary shards for each index. Logs are always available and recoverable as long as at least two data nodes exist. Better performance than MultipleRedundancy, when using 5 or more nodes. You cannot apply this policy on deployments of single Elasticsearch node. ZeroRedundancy . Elasticsearch does not make copies of the primary shards. Logs might be unavailable or lost in the event a node is down or fails. Use this mode when you are more concerned with performance than safety, or have implemented your own disk/PVC backup/restore strategy. 
Note The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. 4.3.5. Scaling down Elasticsearch pods Reducing the number of Elasticsearch pods in your cluster can result in data loss or Elasticsearch performance degradation. If you scale down, you should scale down by one pod at a time and allow the cluster to re-balance the shards and replicas. After the Elasticsearch health status returns to green , you can scale down by another pod. Note If your Elasticsearch cluster is set to ZeroRedundancy , you should not scale down your Elasticsearch pods. 4.3.6. Configuring persistent storage for the log store Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance. Warning Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur. Prerequisites OpenShift Logging and Elasticsearch must be installed. Procedure Edit the ClusterLogging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim. apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" # ... spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: storageClassName: "gp2" size: "200G" This example specifies each data node in the cluster is bound to a Persistent Volume Claim that requests "200G" of AWS General Purpose SSD (gp2) storage. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. 4.3.7. Configuring the log store for emptyDir storage You can use emptyDir with your log store, which creates an ephemeral deployment in which all of a pod's data is lost upon restart. Note When using emptyDir, if log storage is restarted or redeployed, you will lose data. Prerequisites OpenShift Logging and Elasticsearch must be installed. Procedure Edit the ClusterLogging CR to specify emptyDir: spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: {} 4.3.8. Performing an Elasticsearch rolling cluster restart Perform a rolling restart when you change the elasticsearch config map or any of the elasticsearch-* deployment configurations. Also, a rolling restart is recommended if the nodes on which an Elasticsearch pod runs requires a reboot. Prerequisites OpenShift Logging and Elasticsearch must be installed. 
Procedure To perform a rolling cluster restart: Change to the openshift-logging project: Get the names of the Elasticsearch pods: Scale down the Fluentd pods so they stop sending new logs to Elasticsearch: USD oc -n openshift-logging patch daemonset/logging-fluentd -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-fluentd": "false"}}}}}' Perform a shard synced flush using the OpenShift Container Platform es_util tool to ensure there are no pending operations waiting to be written to disk prior to shutting down: USD oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_flush/synced" -XPOST For example: Example output Prevent shard balancing when purposely bringing down nodes using the OpenShift Container Platform es_util tool: For example: Example output {"acknowledged":true,"persistent":{"cluster":{"routing":{"allocation":{"enable":"primaries"}}}},"transient": After the command is complete, for each deployment you have for an ES cluster: By default, the OpenShift Container Platform Elasticsearch cluster blocks rollouts to their nodes. Use the following command to allow rollouts and allow the pod to pick up the changes: For example: Example output A new pod is deployed. After the pod has a ready container, you can move on to the deployment. Example output NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h After the deployments are complete, reset the pod to disallow rollouts: For example: Example output Check that the Elasticsearch cluster is in a green or yellow state: Note If you performed a rollout on the Elasticsearch pod you used in the commands, the pod no longer exists and you need a new pod name here. For example: 1 Make sure this parameter value is green or yellow before proceeding. If you changed the Elasticsearch configuration map, repeat these steps for each Elasticsearch pod. After all the deployments for the cluster have been rolled out, re-enable shard balancing: For example: Example output { "acknowledged" : true, "persistent" : { }, "transient" : { "cluster" : { "routing" : { "allocation" : { "enable" : "all" } } } } } Scale up the Fluentd pods so they send new logs to Elasticsearch. USD oc -n openshift-logging patch daemonset/logging-fluentd -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-fluentd": "true"}}}}}' 4.3.9. Exposing the log store service as a route By default, the log store that is deployed with OpenShift Logging is not accessible from outside the logging cluster. You can enable a route with re-encryption termination for external access to the log store service for those tools that access its data. Externally, you can access the log store by creating a reencrypt route, your OpenShift Container Platform token and the installed log store CA certificate. Then, access a node that hosts the log store service with a cURL request that contains: The Authorization: Bearer USD{token} The Elasticsearch reencrypt route and an Elasticsearch API request . 
Internally, you can access the log store service using the log store cluster IP, which you can get by using either of the following commands: $ oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging Example output 172.30.183.229 $ oc get service elasticsearch -n openshift-logging Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h You can check the cluster IP address with a command similar to the following: $ oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://172.30.183.229:9200/_cat/health" Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108 Prerequisites OpenShift Logging and Elasticsearch must be installed. You must have access to the project to be able to access the logs. Procedure To expose the log store externally: Change to the openshift-logging project: $ oc project openshift-logging Extract the CA certificate from the log store and write to the admin-ca file: $ oc extract secret/elasticsearch --to=. --keys=admin-ca Example output admin-ca Create the route for the log store service as a YAML file: Create a YAML file with the following: apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1 1 Add the log store CA certificate or use the command in the next step. You do not have to set the spec.tls.key , spec.tls.certificate , and spec.tls.caCertificate parameters required by some reencrypt routes. Run the following command to add the log store CA certificate to the route YAML you created in the previous step: $ cat ./admin-ca | sed -e "s/^/ /" >> <file-name>.yaml Create the route: $ oc create -f <file-name>.yaml Example output route.route.openshift.io/elasticsearch created Check that the Elasticsearch service is exposed: Get the token of this service account to be used in the request: $ token=$(oc whoami -t) Set the elasticsearch route you created as an environment variable. $ routeES=`oc get route elasticsearch -o jsonpath={.spec.host}` To verify the route was successfully created, run the following command that accesses Elasticsearch through the exposed route: curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}" The response appears similar to the following: Example output { "name" : "elasticsearch-cdm-i40ktba0-1", "cluster_name" : "elasticsearch", "cluster_uuid" : "0eY-tJzcR3KOdpgeMJo-MQ", "version" : { "number" : "6.8.1", "build_flavor" : "oss", "build_type" : "zip", "build_hash" : "Unknown", "build_date" : "Unknown", "build_snapshot" : true, "lucene_version" : "7.7.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "<tagline>" : "<for search>" } 4.4. Configuring the log visualizer OpenShift Container Platform uses Kibana to display the log data collected by OpenShift Logging. You can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes. 4.4.1. Configuring CPU and memory limits The OpenShift Logging components allow for adjustments to both the CPU and memory limits. 
Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: "fluentd" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi 1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value. 2 3 Specify the CPU and memory limits and requests for the log visualizer as needed. 4 Specify the CPU and memory limits and requests for the log collector as needed. 4.4.2. Scaling redundancy for the log visualizer nodes You can scale the pod that hosts the log visualizer for redundancy. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance USD oc edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: visualization: type: "kibana" kibana: replicas: 1 1 1 Specify the number of Kibana nodes. 4.5. Configuring OpenShift Logging storage Elasticsearch is a memory-intensive application. The default OpenShift Logging installation deploys 16G of memory for both memory requests and memory limits. The initial set of OpenShift Container Platform nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Container Platform cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments. 4.5.1. Storage considerations for OpenShift Logging and OpenShift Container Platform A persistent volume is required for each Elasticsearch deployment configuration. On OpenShift Container Platform this is achieved using persistent volume claims. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name. Fluentd ships any logs from systemd journal and /var/log/containers/ to Elasticsearch. Elasticsearch requires sufficient memory to perform large merge operations. If it does not have enough memory, it becomes unresponsive. To avoid this problem, evaluate how much application log data you need, and allocate approximately double that amount of free storage capacity. By default, when storage capacity is 85% full, Elasticsearch stops allocating new data to the node. At 90%, Elasticsearch attempts to relocate existing shards from that node to other nodes if possible. But if no nodes have a free capacity below 85%, Elasticsearch effectively rejects creating new indices and becomes RED. Note These low and high watermark values are Elasticsearch defaults in the current release. You can modify these default values. 
Although the alerts use the same default values, you cannot change these values in the alerts. 4.5.2. Additional resources Configuring persistent storage for the log store 4.6. Configuring CPU and memory limits for OpenShift Logging components You can configure both the CPU and memory limits for each of the OpenShift Logging components as needed. 4.6.1. Configuring CPU and memory limits The OpenShift Logging components allow for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: "fluentd" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi 1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value. 2 3 Specify the CPU and memory limits and requests for the log visualizer as needed. 4 Specify the CPU and memory limits and requests for the log collector as needed. 4.7. Using tolerations to control OpenShift Logging pod placement You can use taints and tolerations to ensure that OpenShift Logging pods run on specific nodes and that no other workload can run on those nodes. Taints and tolerations are simple key:value pairs. A taint on a node instructs the node to repel all pods that do not tolerate the taint. The key is any string, up to 253 characters, and the value is any string, up to 63 characters. The string must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. Sample OpenShift Logging CR with tolerations apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 tolerations: 1 - key: "logging" operator: "Exists" effect: "NoExecute" tolerationSeconds: 6000 resources: limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: {} redundancyPolicy: "ZeroRedundancy" visualization: type: "kibana" kibana: tolerations: 2 - key: "logging" operator: "Exists" effect: "NoExecute" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi replicas: 1 collection: logs: type: "fluentd" fluentd: tolerations: 3 - key: "logging" operator: "Exists" effect: "NoExecute" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi 1 This toleration is added to the Elasticsearch pods. 2 This toleration is added to the Kibana pod. 3 This toleration is added to the logging collector pods. 4.7.1. Using tolerations to control the log store pod placement You can control which nodes the log store pods run on and prevent other workloads from using those nodes by using tolerations on the pods.
You apply tolerations to the log store pods through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value pair that is not on other pods ensures only the log store pods can run on that node. By default, the log store pods have the following toleration: tolerations: - effect: "NoExecute" key: "node.kubernetes.io/disk-pressure" operator: "Exists" Prerequisites OpenShift Logging and Elasticsearch must be installed. Procedure Use the following command to add a taint to a node where you want to schedule the OpenShift Logging pods: USD oc adm taint nodes <node-name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 elasticsearch=node:NoExecute This example places a taint on node1 that has key elasticsearch , value node , and taint effect NoExecute . Nodes with the NoExecute effect schedule only pods that match the taint and remove existing pods that do not match. Edit the logstore section of the ClusterLogging CR to configure a toleration for the Elasticsearch pods: logStore: type: "elasticsearch" elasticsearch: nodeCount: 1 tolerations: - key: "elasticsearch" 1 operator: "Exists" 2 effect: "NoExecute" 3 tolerationSeconds: 6000 4 1 Specify the key that you added to the node. 2 Specify the Exists operator to require a taint with the key elasticsearch to be present on the Node. 3 Specify the NoExecute effect. 4 Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted. This toleration matches the taint created by the oc adm taint command. A pod with this toleration could be scheduled onto node1 . 4.7.2. Using tolerations to control the log visualizer pod placement You can control the node where the log visualizer pod runs and prevent other workloads from using that node by using tolerations on the pods. You apply tolerations to the log visualizer pod through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not tolerate the taint. Using a specific key:value pair that is not on other pods ensures only the Kibana pod can run on that node. Prerequisites OpenShift Logging and Elasticsearch must be installed. Procedure Use the following command to add a taint to a node where you want to schedule the log visualizer pod: USD oc adm taint nodes <node-name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 kibana=node:NoExecute This example places a taint on node1 that has key kibana , value node , and taint effect NoExecute . You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match. Edit the visualization section of the ClusterLogging CR to configure a toleration for the Kibana pod: visualization: type: "kibana" kibana: tolerations: - key: "kibana" 1 operator: "Exists" 2 effect: "NoExecute" 3 tolerationSeconds: 6000 4 1 Specify the key that you added to the node. 2 Specify the Exists operator to require the key / value / effect parameters to match. 3 Specify the NoExecute effect. 4 Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted. This toleration matches the taint created by the oc adm taint command.
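If you want to confirm that a taint is in place before relying on the matching toleration, you can optionally inspect the node; for example, oc describe node node1 | grep Taints prints the Taints field of node1, which should list the kibana=node:NoExecute taint (or elasticsearch=node:NoExecute on the log store node). This is an illustrative check only; substitute your own node name as needed.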
A pod with this toleration would be able to schedule onto node1. 4.7.3. Using tolerations to control the log collector pod placement You can ensure that the logging collector pods run on specific nodes and prevent other workloads from using those nodes by using tolerations on the pods. You apply tolerations to logging collector pods through the ClusterLogging custom resource (CR) and apply taints to a node through the node specification. You can use taints and tolerations to ensure the pod does not get evicted for things like memory and CPU issues. By default, the logging collector pods have the following toleration: tolerations: - key: "node-role.kubernetes.io/master" operator: "Exists" effect: "NoExecute" Prerequisites OpenShift Logging and Elasticsearch must be installed. Procedure Use the following command to add a taint to a node where you want to schedule the logging collector pods: USD oc adm taint nodes <node-name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 collector=node:NoExecute This example places a taint on node1 that has key collector , value node , and taint effect NoExecute . You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match. Edit the collection stanza of the ClusterLogging custom resource (CR) to configure a toleration for the logging collector pods: collection: logs: type: "fluentd" fluentd: tolerations: - key: "collector" 1 operator: "Exists" 2 effect: "NoExecute" 3 tolerationSeconds: 6000 4 1 Specify the key that you added to the node. 2 Specify the Exists operator to require the key / value / effect parameters to match. 3 Specify the NoExecute effect. 4 Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted. This toleration matches the taint created by the oc adm taint command. A pod with this toleration would be able to schedule onto node1. 4.7.4. Additional resources Controlling pod placement using node taints . 4.8. Moving OpenShift Logging resources with node selectors You can use node selectors to deploy the Elasticsearch and Kibana pods to different nodes. 4.8.1. Moving OpenShift Logging resources You can configure the Cluster Logging Operator to deploy the pods for OpenShift Logging components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location. For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements. Prerequisites OpenShift Logging and Elasticsearch must be installed. These features are not installed by default. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana ... 1 2 Add a nodeSelector parameter with the appropriate value to the component you want to move.
You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. Verification To verify that a component has moved, you can use the oc get pod -o wide command. For example: You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node: USD oc get pod kibana-5b8bdf44f9-ccpq9 -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none> You want to move the Kibana Pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.20.0 Note that the node has a node-role.kubernetes.io/infra: '' label: USD oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml Example output kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: '' ... To move the Kibana pod, edit the ClusterLogging CR to add a node selector: apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... spec: ... visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana 1 Add a node selector to match the label in the node specification. After you save the CR, the current Kibana pod is terminated and new pod is deployed: USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node: USD oc get pod kibana-7d85dcffc8-bfpfp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none> After a few moments, the original Kibana pod is removed. 
USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s 4.9. Configuring systemd-journald and Fluentd Because Fluentd reads from the journal, and the journal default settings are very low, journal entries can be lost because the journal cannot keep up with the logging rate from system services. We recommend setting RateLimitIntervalSec=30s and RateLimitBurst=10000 (or even higher if necessary) to prevent the journal from losing entries. 4.9.1. Configuring systemd-journald for OpenShift Logging As you scale up your project, the default logging environment might need some adjustments. For example, if you are missing logs, you might have to increase the rate limits for journald. You can adjust the number of messages to retain for a specified period of time to ensure that OpenShift Logging does not use excessive resources without dropping logs. You can also determine if you want the logs compressed, how long to retain logs, how or if the logs are stored, and other settings. Procedure Create a journald.conf file with the required settings: Compress=yes 1 ForwardToConsole=no 2 ForwardToSyslog=no MaxRetentionSec=1month 3 RateLimitBurst=10000 4 RateLimitIntervalSec=30s Storage=persistent 5 SyncIntervalSec=1s 6 SystemMaxUse=8G 7 SystemKeepFree=20% 8 SystemMaxFileSize=10M 9 1 Specify whether you want logs compressed before they are written to the file system. Specify yes to compress the message or no to not compress. The default is yes . 2 Configure whether to forward log messages. Defaults to no for each. Specify: ForwardToConsole to forward logs to the system console. ForwardToKMsg to forward logs to the kernel log buffer. ForwardToSyslog to forward to a syslog daemon. ForwardToWall to forward messages as wall messages to all logged-in users. 3 Specify the maximum time to store journal entries. Enter a number to specify seconds. Or include a unit: "year", "month", "week", "day", "h" or "m". Enter 0 to disable. The default is 1month . 4 Configure rate limiting. If, during the time interval defined by RateLimitIntervalSec , more logs than specified in RateLimitBurst are received, all further messages within the interval are dropped until the interval is over. It is recommended to set RateLimitIntervalSec=30s and RateLimitBurst=10000 , which are the defaults. 5 Specify how logs are stored. The default is persistent : volatile to store logs in memory, below /run/log/journal/ . persistent to store logs to disk in /var/log/journal/ . systemd creates the directory if it does not exist. auto to store logs in /var/log/journal/ if the directory exists. If it does not exist, systemd temporarily stores logs in /run/log/journal/ . none to not store logs. systemd drops all logs. 6 Specify the timeout before synchronizing journal files to disk for ERR , WARNING , NOTICE , INFO , and DEBUG logs. systemd immediately syncs after receiving a CRIT , ALERT , or EMERG log. The default is 1s . 7 Specify the maximum size the journal can use. The default is 8G . 8 Specify how much disk space systemd must leave free. The default is 20% .
9 Specify the maximum size for individual journal files stored persistently in /var/log/journal . The default is 10M . Note If you are removing the rate limit, you might see increased CPU utilization on the system logging daemons as they process any messages that would have previously been throttled. For more information on systemd settings, see https://www.freedesktop.org/software/systemd/man/journald.conf.html . The default settings listed on that page might not apply to OpenShift Container Platform. Convert the journald.conf file to base64 and store it in a variable that is named jrnl_cnf by running the following command: USD export jrnl_cnf=USD( cat journald.conf | base64 -w0 ) Create a MachineConfig object that includes the jrnl_cnf variable, which was created in the previous step. The following sample command creates a MachineConfig object for the worker: USD cat << EOF > ./40-worker-custom-journald.yaml 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 2 name: 40-worker-custom-journald 3 spec: config: ignition: config: {} security: tls: {} timeouts: {} version: 3.2.0 networkd: {} passwd: {} storage: files: - contents: source: data:text/plain;charset=utf-8;base64,USD{jrnl_cnf} 4 verification: {} filesystem: root mode: 0644 5 path: /etc/systemd/journald.conf.d/custom.conf osImageURL: "" EOF 1 Optional: For control plane (also known as master) node, you can provide the file name as 40-master-custom-journald.yaml . 2 Optional: For control plane (also known as master) node, provide the role as master . 3 Optional: For control plane (also known as master) node, you can provide the name as 40-master-custom-journald . 4 Optional: To include a static copy of the parameters in the journald.conf file, replace USD{jrnl_cnf} with the output of the echo USDjrnl_cnf command. 5 Set the permissions for the journald.conf file. It is recommended to set 0644 permissions. Create the machine config by running the following command: USD oc apply -f <file_name>.yaml The controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Monitor the status of the rollout of the new rendered configuration to each node by running the following command: USD oc describe machineconfigpool/<node> 1 1 Specify the node as master or worker . Example output for worker Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool ... Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e 4.10. Maintenance and support 4.10.1. About unsupported configurations The supported way of configuring OpenShift Logging is by using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator reconcile any differences. The Operators reverse everything to the defined state by default and by design.
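For example, if you manually change a resource that the Operators own, such as scaling the Kibana deployment with oc -n openshift-logging scale deployment/kibana --replicas=2 while the ClusterLogging custom resource still specifies replicas: 1, you can expect the Red Hat OpenShift Logging Operator to eventually revert the deployment so that it matches the custom resource again. The command is an illustrative sketch only and assumes the default Kibana deployment name created by the Operator.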
Note If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator or OpenShift Elasticsearch Operator to Unmanaged . An unmanaged OpenShift Logging environment is not supported and does not receive updates until you return OpenShift Logging to Managed . 4.10.2. Unsupported configurations You must set the Red Hat OpenShift Logging Operator to the unmanaged state to modify the following components: The Elasticsearch CR The Kibana deployment The fluent.conf file The Fluentd daemon set You must set the OpenShift Elasticsearch Operator to the unmanaged state to modify the following component: the Elasticsearch deployment files. Explicitly unsupported cases include: Configuring default log rotation . You cannot modify the default log rotation configuration. Configuring the collected log location . You cannot change the location of the log collector output file, which by default is /var/log/fluentd/fluentd.log . Throttling log collection . You cannot throttle down the rate at which the logs are read in by the log collector. Configuring the logging collector using environment variables . You cannot use environment variables to modify the log collector. Configuring how the log collector normalizes logs . You cannot modify default log normalization. 4.10.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed. | [
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" 2 spec: managementState: \"Managed\" 3 logStore: type: \"elasticsearch\" 4 retentionPolicy: application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 resources: limits: memory: 16Gi requests: cpu: 500m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: 5 type: \"kibana\" kibana: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi replicas: 1 collection: 6 logs: type: \"fluentd\" fluentd: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi",
"oc get pods --selector component=fluentd -o wide -n openshift-logging",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES fluentd-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> fluentd-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> fluentd-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> fluentd-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> fluentd-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: collection: logs: fluentd: resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: forwarder: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: \"300s\" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9",
"oc get pods -n openshift-logging",
"oc extract configmap/fluentd --confirm",
"<buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}\" total_limit_size 32m chunk_limit_size 8m overflow_action throw_exception </buffer>",
"outputRefs: - default",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: logs: type: \"fluentd\" fluentd: {}",
"oc get pods -n openshift-logging",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3",
"apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi",
"resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"",
"oc edit clusterlogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"oc project openshift-logging",
"oc get pods | grep elasticsearch-",
"oc -n openshift-logging patch daemonset/logging-fluentd -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-fluentd\": \"false\"}}}}}'",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":",
"oc rollout resume deployment/<deployment-name>",
"oc rollout resume deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 resumed",
"oc get pods | grep elasticsearch-",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h",
"oc rollout pause deployment/<deployment-name>",
"oc rollout pause deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 paused",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }",
"oc -n openshift-logging patch daemonset/logging-fluentd -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-fluentd\": \"true\"}}}}}'",
"oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging",
"172.30.183.229",
"oc get service elasticsearch -n openshift-logging",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h",
"oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108",
"oc project openshift-logging",
"oc extract secret/elasticsearch --to=. --keys=admin-ca",
"admin-ca",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1",
"cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml",
"oc create -f <file-name>.yaml",
"route.route.openshift.io/elasticsearch created",
"token=USD(oc whoami -t)",
"routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`",
"curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"",
"{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi",
"oc edit ClusterLogging instance",
"oc edit ClusterLogging instance apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: visualization: type: \"kibana\" kibana: replicas: 1 1",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 tolerations: 1 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: {} redundancyPolicy: \"ZeroRedundancy\" visualization: type: \"kibana\" kibana: tolerations: 2 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi replicas: 1 collection: logs: type: \"fluentd\" fluentd: tolerations: 3 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi",
"tolerations: - effect: \"NoExecute\" key: \"node.kubernetes.io/disk-pressure\" operator: \"Exists\"",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 elasticsearch=node:NoExecute",
"logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 1 tolerations: - key: \"elasticsearch\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 kibana=node:NoExecute",
"visualization: type: \"kibana\" kibana: tolerations: - key: \"kibana\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"tolerations: - key: \"node-role.kubernetes.io/master\" operator: \"Exists\" effect: \"NoExecute\"",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 collector=node:NoExecute",
"collection: logs: type: \"fluentd\" fluentd: tolerations: - key: \"collector\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pod kibana-5b8bdf44f9-ccpq9 -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.20.0",
"oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s",
"oc get pod kibana-7d85dcffc8-bfpfp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s",
"Compress=yes 1 ForwardToConsole=no 2 ForwardToSyslog=no MaxRetentionSec=1month 3 RateLimitBurst=10000 4 RateLimitIntervalSec=30s Storage=persistent 5 SyncIntervalSec=1s 6 SystemMaxUse=8G 7 SystemKeepFree=20% 8 SystemMaxFileSize=10M 9",
"export jrnl_cnf=USD( cat journald.conf | base64 -w0 )",
"cat << EOF > ./40-worker-custom-journald.yaml 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 2 name: 40-worker-custom-journald 3 spec: config: ignition: config: {} security: tls: {} timeouts: {} version: 3.2.0 networkd: {} passwd: {} storage: files: - contents: source: data:text/plain;charset=utf-8;base64,USD{jrnl_cnf} 4 verification: {} filesystem: root mode: 0644 5 path: /etc/systemd/journald.conf.d/custom.conf osImageURL: \"\" EOF",
"oc apply -f <file_name>.yaml",
"oc describe machineconfigpool/<node> 1",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e",
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing."
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/configuring-your-logging-deployment |
Chapter 9. Federal Information Processing Standard on Red Hat OpenStack Platform | Chapter 9. Federal Information Processing Standard on Red Hat OpenStack Platform Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . The Federal Information Processing Standards (FIPS) is a set of security requirements developed by the National Institute of Standards and Technology (NIST). In Red Hat Enterprise Linux 9, the supported standard is FIPS publication 140-3: Security Requirements for Cryptographic Modules . For details about the supported standard, see the Federal Information Processing Standards Publication 140-3 . These security requirements define acceptable cryptographic algorithms and the use of those cryptographic algorithms, including security modules. FIPS 140-3 validation is achieved by using only those cryptographic algorithms approved through FIPS, in the manner prescribed, and through validated modules. FIPS 140-3 compatibility is achieved by using only those cryptographic algorithms approved through FIPS. Red Hat OpenStack Platform 17 is FIPS 140-3 compatible . You can take advantage of FIPS compatibility by using images provided by Red Hat to deploy your overcloud. Note OpenStack 17.1 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 has not yet been submitted for FIPS validation. Red Hat expects, though cannot commit to a specific timeframe, to obtain FIPS validation for RHEL 9.0 and RHEL 9.2 modules, and later even minor releases of RHEL 9.x. Updates will be available in Compliance Activities and Government Standards . 9.1. Enabling FIPS When you enable FIPS, you must complete a series of steps during the installation of the undercloud and overcloud. Prerequisites You have installed Red Hat Enterprise Linux and are prepared to begin the installation of Red Hat OpenStack Platform director. Procedure Enable FIPS on the undercloud: Enable FIPS on the system on which you plan to install the undercloud: Note This step will add the fips=1 kernel parameter to your GRUB configuration file. As a result, only cryptographic algorithms modules used by Red Hat Enterprise Linux are in FIPS mode and only cryptographic algorithms approved by the standard are used. Reboot the system. Verify that FIPS is enabled: Install and configure Red Hat OpenStack Platform director. For more information see: Installing director on the undercloud . Prepare FIPS-enabled images for the overcloud. Install images for the overcloud: Create the images directory in the home directory of the stack user: Extract the images to your home directory: You must create symlinks before uploading the images: Upload the FIPS-enabled overcloud images to the Image service: Note You must use the --update-existing flag even if there are no images currently in the OpenStack Image service. Enable FIPS on the overcloud. Configure templates for an overcloud deployment specific to your environment. Include all configuration templates in the deployment command, including fips.yaml: | [
"fips-mode-setup --enable",
"fips-mode-setup --check",
"sudo dnf -y install rhosp-director-images-uefi-fips-x86_64",
"mkdir /home/stack/images cd /home/stack/images",
"for i in /usr/share/rhosp-director-images/*fips*.tar; do tar -xvf USDi; done",
"ln -s ironic-python-agent-fips.initramfs ironic-python-agent.initramfs ln -s ironic-python-agent-fips.kernel ironic-python-agent.kernel ln -s overcloud-hardened-uefi-full-fips.qcow2 overcloud-hardened-uefi-full.qcow2",
"openstack overcloud image upload --update-existing --whole-disk",
"openstack overcloud deploy -e /usr/share/openstack-tripleo-heat-templates/environents/fips.yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/security_and_hardening_guide/assembly-fips_security_and_hardening |
Chapter 15. Infrastructure services | Chapter 15. Infrastructure services The following chapter contains the most notable changes to infrastructure services between RHEL 8 and RHEL 9. 15.1. Notable changes to infrastructure services Support for Berkeley DB dynamic back end has been removed With this release, the Berkeley DB ( libdb ) dynamic back end is no longer supported. The named-sdb build is no longer provided. You can use the DLZ loadable plugins for each back end, for example, sqlite3 or mysql . Those plugins are not built or shipped and have to be built from the source. The mod_php module provided with PHP for use with the Apache HTTP Server has been removed The mod_php module provided with PHP for use with the Apache HTTP Server is no longer available in RHEL 9. Since RHEL 8, PHP scripts are run using the FastCGI Process Manager ( php-fpm ) by default. For more information, see Using PHP with the Apache HTTP Server . The BIND 9.18 is now supported in RHEL BIND 9.18 has been added in RHEL 9.5 in the new bind9.18 package. The notable feature enhancements include the following: Added support for DNS over TLS (DoT) and DNS over HTTPS (DoH) in the `named`daemon Added support for both incoming and outgoing zone transfers over TLS Improved support for OpenSSL 3.0 interfaces New configuration options for tuning TCP and UDP send and receive buffers Various improvements to the dig utility | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/assembly_infrastructure-services_considerations-in-adopting-rhel-9 |
Chapter 22. Samba | Chapter 22. Samba Samba uses the SMB protocol to share files and printers across a network connection. Operating systems that support this protocol include Microsoft Windows, OS/2, and Linux. The Red Hat Enterprise Linux 4 kernel contains Access Control List (ACL) support for ext3 file systems. If the Samba server shares an ext3 file system with ACLs enabled for it, and the kernel on the client system contains support for reading ACLs from ext3 file systems, the client automatically recognizes and uses the ACLs. Refer to Chapter 14, Access Control Lists for more information on ACLs. 22.1. Why Use Samba? Samba is useful if you have a network of both Windows and Linux machines. Samba allows files and printers to be shared by all the systems in a network. To share files between Linux machines only, use NFS as discussed in Chapter 21, Network File System (NFS) . To share printers between Linux machines only, you do not need to use Samba; refer to Chapter 33, Printer Configuration . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Samba |
Chapter 18. The web console | Chapter 18. The web console 18.1. The web console is now available by default Packages for the RHEL 8 web console, also known as Cockpit, are now part of Red Hat Enterprise Linux default repositories, and can therefore be immediately installed on a registered RHEL 8 system. In addition, on a non-minimal installation of RHEL 8, the web console is automatically installed and firewall ports required by the console are automatically open. A system message has also been added prior to login that provides information about how to enable or access the web console. 18.2. New firewall interface The Networking tab in the RHEL 8 web console now includes the Firewall settings. In this section, users can: Enable/disable firewall Add/remove services For details, see Managing firewall using the web console . 18.3. Subscription management The RHEL 8 web console provides an interface for using Red Hat Subscription Manager installed on your local system. The Subscription Manager connects to the Red Hat Customer Portal and verifies all available: Active subscriptions Expired subscriptions Renewed subscriptions If you want to renew the subscription or get a different one in Red Hat Customer Portal, you do not have to update the Subscription Manager data manually. The Subscription Manager synchronizes data with Red Hat Customer Portal automatically. Note The web console's Subscriptions page is now provided by the new subscription-manager-cockpit package. For details, see Managing subscriptions in the web console . 18.4. Better IdM integration for the web console If your system is enrolled in an Identity Management (IdM) domain, the RHEL 8 web console now uses the domain's centrally managed IdM resources by default. This includes the following benefits: The IdM domain's administrators can use the web console to manage the local machine. The console's web server automatically switches to a certificate issued by the IdM certificate authority (CA) and accepted by browsers. Users with a Kerberos ticket in the IdM domain do not need to provide login credentials to access the web console. SSH hosts known to the IdM domain are accessible to the web console without manually adding an SSH connection. Note that for IdM integration with the web console to work properly, the user first needs to run the ipa-advise utility with the enable-admins-sudo option in the IdM server. 18.5. The web console is now compatible with mobile browsers With this update, the web console menus and pages can be navigated on mobile browser variants. This makes it possible to manage systems using the RHEL 8 web console from a mobile device. 18.6. The web console front page now displays missing updates and subscriptions If a system managed by the RHEL 8 web console has outdated packages or a lapsed subscription, a warning is now displayed on the web console front page of the system. 18.7. The web console now supports PBD enrollment With this update, you can use the RHEL 8 web console interface to apply Policy-Based Decryption (PBD) rules to disks on managed systems. This uses the Clevis decryption client to facilitate a variety of security management functions in the web console, such as automatic unlocking of LUKS-encrypted disk partitions. 18.8. Support LUKS v2 In the web console's Storage tab, you can now create, lock, unlock, resize, and otherwise configure encrypted devices using the LUKS (Linux Unified Key Setup) version 2 format. 
This new version of LUKS offers: More flexible unlocking policies Stronger cryptography Better compatibility with future changes 18.9. Virtual machines can now be managed using the web console The Virtual Machines page can now be added to the RHEL 8 web console interface, which enables the user to create and manage libvirt-based virtual machines. For information about the differences in virtual management features between the web console and the Virtual Machine Manager, see Differences in virtualization features in Virtual Machine Manager and the web console . 18.10. Internet Explorer unsupported by the web console Support for the Internet Explorer browser has been removed from the RHEL 8 web console. Attempting to open the web console in Internet Explorer now displays an error screen with a list of recommended browsers that can be used instead. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/the-web-console_considerations-in-adopting-rhel-8 |
Chapter 2. Introduction to Red Hat Certificate System | Chapter 2. Introduction to Red Hat Certificate System Every common PKI operation, such as issuing, renewing, and revoking certificates; archiving and recovering keys; publishing CRLs and verifying certificate status, is carried out by interoperating subsystems within Red Hat Certificate System. The functions of each individual subsystem and the way that they work together to establish a robust and local PKI is described in this chapter. 2.1. A review of Certificate System subsystems Red Hat Certificate System provides five different subsystems, each focusing on different aspects of a PKI deployment: A certificate authority called Certificate Manager . The CA is the core of the PKI; it issues and revokes all certificates. The Certificate Manager is also the core of the Certificate System. By establishing a security domain of trusted subsystems, it establishes and manages relationships between the other subsystems. A key recovery authority (KRA). Certificates are created based on a specific and unique key pair. If a private key is ever lost, then the data which that key was used to access (such as encrypted emails) is also lost because it is inaccessible. The KRA stores key pairs, so that a new, identical certificate can be generated based on recovered keys, and all of the encrypted data can be accessed even after a private key is lost or damaged. Note In versions of Certificate System, KRA was also referred to as the data recovery manager (DRM). Some code, configuration file entries, web panels, and other resources might still use the term DRM instead of KRA. An online certificate status protocol (OCSP) responder. The OCSP verifies whether a certificate is valid and not expired. This function can also be done by the CA, which has an internal OCSP service, but using an external OCSP responder lowers the load of the issuing CA. A token key service (TKS). The TKS derives keys based on the token CCID, private information, and a defined algorithm. These derived keys are used by the TPS to format tokens and enroll certificates on the token. A token processing system (TPS). The TPS interacts directly with external tokens, like smart cards, and manages the keys and certificates on those tokens through a local client, the Enterprise Security Client (ESC). The ESC contacts the TPS when there is a token operation, and the TPS interacts with the CA, KRA, or TKS, as required, then send the information back to the token by way of the Enterprise Security Client. Even with all possible subsystems installed, the core of the Certificate System is still the CA (or CAs), since they ultimately process all certificate-related requests. The other subsystems connect to the CA or CAs likes spokes in a wheel. These subsystems work together, in tandem, to create a public key infrastructure (PKI). Depending on what subsystems are installed, a PKI can function in one (or both) of two ways: A token management system or TMS environment, which manages smart cards. This requires a CA, TKS, and TPS, with an optional KRA for server-side key generation. A traditional non token management system or non-TMS environment, which manages certificates used in an environment other than smart cards, usually in software databases. At a minimum, a non-TMS requires only a CA, but a non-TMS environment can use OCSP responders and KRA instances as well. Note Red Hat Certificate System includes Technology Preview code (e.g., EST). 
This early access to upcoming product functionality is not evaluated and not to be used in the evaluated configuration. 2.2. Overview of Certificate System subsystems 2.2.1. Separate versus shared instances Red Hat Certificate System supports deployment of separate PKI instances for all subsystems: Separate PKI instances run as a single Java-based Apache Tomcat instance. Separate PKI instances contain a single PKI subsystem (CA, KRA, OCSP, TKS, or TPS). Separate PKI instances must utilize unique ports if co-located on the same physical machine or virtual machine (VM). Alternatively, Certificate System supports deployment of a shared PKI instance: Shared PKI instances also run as a single Java-based Apache Tomcat instance. Shared PKI instances that contain a single PKI subsystem are identical to a separate PKI instance. Shared PKI instances may contain any combination of up to one of each type of PKI subsystem: CA only TKS only CA and KRA CA and OCSP TKS and TPS CA, KRA, TKS, and TPS CA, KRA, OCSP, TKS, and TPS etc. Shared PKI instances allow all of their subsystems contained within that instance to share the same ports. Shared PKI instances must utilize unique ports if more than one is co-located on the same physical machine or VM. 2.2.2. Instance installation prerequisites 2.2.2.1. Directory Server instance availability Prior to installation of a Certificate System instance, a local or remote Red Hat Directory Server LDAP instance must be available. For instructions on installing Red Hat Directory Server, see the Red Hat Directory Server Installation Guide . 2.2.2.2. PKI packages Red Hat Certificate System is composed of packages listed below: The following base packages form the core of Certificate System and are available in base Red Hat Enterprise Linux repositories: pki-core pki-base pki-base-java pki-ca pki-javadoc pki-kra pki-server pki-symkey pki-tools The packages listed below are not available in the base Red Hat Enterprise Linux subscription channel. To install these packages, you must attach a Red Hat Certificate System subscription pool and enable the RHCS repository. For more information, see Section 6.2, "Enabling the repositories" . pki-core pki-console pki-ocsp pki-tks pki-tps redhat-pki redhat-pki : contains all the packages of the pki-core module. If you wish to pick redhat-pki packages individually, it is advised to disable the pki-core module. redhat-pki-console-theme redhat-pki-server-theme Use a Red Hat Enterprise Linux 8 system (optionally, use one that has been configured with a supported Hardware Security Module listed in Chapter 4, Supported platforms ), and make sure that all packages are up to date before installing Red Hat Certificate System. To install all Certificate System packages (with the exception of pki-javadoc ), use dnf to install the redhat-pki metapackage: Alternatively, you can install one or more of the top level PKI subsystem packages as required; see the list above for exact package names. If you use this approach, make sure to also install the redhat-pki-server-theme package, and optionally redhat-pki-console-theme and pki-console to use the PKI Console. Finally, developers and administrators may also want to install the JSS and PKI javadocs (the jss-javadoc and pki-javadoc ). Note The jss-javadoc package requires you to enable the Server-Optional repository in Subscription Manager . 2.2.2.3. Instance installation and configuration The pkispawn command line tool is used to install and configure a new PKI instance. 
It eliminates the need for separate installation and configuration steps, and may be run either interactively, as a batch process, or a combination of both (batch process with prompts for passwords). The utility does not provide a way to install or configure the browser-based graphical interface. For usage information, use the pkispawn --help command. The pkispawn command: Reads in its default name=value pairs from a plain text configuration file ( /etc/pki/default.cfg ). Interactively or automatically overrides any pairs as specified and stores the final result as a Python dictionary. Executes an ordered series of scriptlets to perform subsystem and instance installation. The configuration scriptlet packages the Python dictionary as a JavaScript Object Notation (JSON) data object, which is then passed to the Java-based configuration servlet. The configuration servlet utilizes this data to configure a new PKI subsystem, and then passes control back to the pkispawn executable, which finalizes the PKI setup. A copy of the final deployment file is stored in /var/lib/pki/instance_name/<subsystem>/registry/<subsystem>/deployment.cfg See the pkispawn man page for additional information. The default configuration file, /etc/pki/default.cfg , is a plain text file containing the default installation and configuration values which are read at the beginning of the process described above. It consists of name=value pairs divided into [DEFAULT] , [Tomcat] , [CA] , [KRA] , [OCSP] , [TKS] , and [TPS] sections. If you use the -s option with pkispawn and specify a subsystem name, then only the section for that subsystem will be read. The sections have a hierarchy: a name=value pair specified in a subsystem section will override the pair in the [Tomcat] section, which in turn override the pair in the [DEFAULT] section. Default pairs can further be overriden by interactive input, or by pairs in a specified PKI instance configuration file. Note Whenever non-interactive files are used to override default name=value pairs, they may be stored in any location and specified at any time. These files are referred to as myconfig.txt in the pkispawn man pages, but they are also often referred to as .ini files, or more generally as PKI instance configuration override files. See the pki_default.cfg man page for more information. The Configuration Servlet consists of Java bytecode stored in /usr/share/java/pki/pki-certsrv.jar as com/netscape/certsrv/system/ConfigurationRequest.class . The servlet processes data passed in as a JSON object from the configuration scriptlet using pkispawn , and then returns to pkispawn using Java bytecode served in the same file as com/netscape/certsrv/system/ConfigurationResponse.class . An example of an interactive installation only involves running the pkispawn command on a command line as root : Important Interactive installation currently only exists for very basic deployments. For example, deployments intent upon using advanced features such as cloning, Elliptic Curve Cryptography (ECC), external CA, Hardware Security Module (HSM), subordinate CA, and others, must provide the necessary override parameters in a separate configuration file. A non-interactive installation requires a PKI instance configuration override file, and the process may look similar to the following example: Create the pki directory: Use a text editor such as vim to create a configuration file named /root/pki/ca.cfg with the following contents: See the pkispawn man page for various configuration examples. 2.2.2.4. 
Instance removal To remove an existing PKI instance, use the pkidestroy command. It can be run interactively or as a batch process. Use pkidestroy -h to display detailed usage inforamtion on the command line. The pkidestroy command reads in a PKI subsystem deployment configuration file which was stored when the subsystem was created ( /var/lib/pki/ instance_name / <subsystem> /registry/ <subsystem> /deployment.cfg ), uses the read-in file in order to remove the PKI subsystem, and then removes the PKI instance if it contains no additional subsystems. See the pkidestroy man page for more information. An interactive removal procedure using pkidestroy may look similar to the following: A non-interactive removal procedure may look similar to the following example: 2.2.3. Execution management (systemctl) 2.2.3.1. Starting, stopping, restarting, and obtaining status Red Hat Certificate System subsystem instances can be stopped and started using the systemctl execution management system tool on Red Hat Enterprise Linux 8: <unit-file> has one of the following values: For more details on the watchdog service, refer to Section 2.3.10, "Passwords and watchdog (nuxwdog)" and Section 9.3.2, "Using the Certificate System watchdog service" . Note In RHCS 10, these systemctl actions support the pki-server alias: pki-server <command> subsystem_instance_name is the alias for systemctl <command> pki-tomcatd@<instance>.service . 2.2.3.2. Starting the instance automatically The systemctl utility in Red Hat Enterprise Linux manages the automatic startup and shutdown settings for each process on the server. This means that when a system reboots, some services can be automatically restarted. System unit files control service startup to ensure that services are started in the correct order. The systemd service and systemctl utility are described in the Configuring basic system settings guide for Red Hat Enterprise Linux 8. Certificate System instances can be managed by systemctl , so this utility can set whether to restart instances automatically. After a Certificate System instance is created, it is enabled on boot. This can be changed by using systemctl : To re-enable the instance: Note The systemctl enable and systemctl disable commands do not immediately start or stop Certificate System. 2.2.4. Process management (pki-server and pkidaemon) 2.2.4.1. The pki-server command line tool The primary process management tool for Red Hat Certificate System is pki-server . Use the pki-server --help command and see the pki-server man page for usage information. The pki-server command-line interface (CLI) manages local server instances (for example server configuration or system certificates). Invoke the CLI as follows: The CLI uses the configuration files and NSS database of the server instance, therefore the CLI does not require any prior initialization. Since the CLI accesses the files directly, it can only be executed by the root user, and it does not require client certificate. Also, the CLI can run regardless of the status of the server; it does not require a running server. The CLI supports a number of commands organized in a hierarchical structure. To list the top-level commands, execute the CLI without any additional commands or parameters: Some commands have subcommands. To list them, execute the CLI with the command name and no additional options. For example: To view command usage information, use the --help option: 2.2.4.2. 
Enabling and disabling an installed subsystem using pki-server To enable or disable an installed subsystem, use the pki-server utility. Replace subsystem_id with a valid subsystem identifier: ca , kra , tks , ocsp , or tps . Note One instance can have only one of each type of subsystem. For example, to disable the OCSP subsystem on an instance named pki-tomcat : To list the installed subsystems for an instance: To show the status of a particular subsystem: 2.2.4.3. The pkidaemon command line tool Another process management tool for Red Hat Certificate System is the pkidaemon tool: pkidaemon status tomcat - Provides status information such as on/off, ports, URLs of each PKI subsystem of all PKI instances on the system. pkidaemon status tomcat instance_name - Provides status information such as on/off, ports, URLs of each PKI subsystem of a specific instance. pkidaemon start tomcat instance_name .service - Used internally using systemctl . See the pkidaemon man page for additional information. 2.2.4.4. Finding the subsystem web services URLs The CA, KRA, OCSP, TKS, and TPS subsystems have web services pages for agents, as well as regular users and administrators, when appropriate. These web services can be accessed by opening the URL to the subsystem host over the subsystem's secure end user's port. For example, for the CA: Note To get a complete list of all of the interfaces, URLs, and ports for an instance, check the status of the service. For example: The main web services page for each subsystem has a list of available services pages; these are summarized in the below table. To access any service specifically, access the appropriate port and append the appropriate directory to the URL. For example, to access the CA's end entities (regular users) web services: If DNS is not configured, then an IPv4 or IPv6 address can be used to connect to the services pages. For example: Note Anyone can access the end user pages for a subsystem. However, accessing agent or admin web services pages requires that an agent or administrator certificate be issued and installed in the web browser. Otherwise, authentication to the web services fails. Table 2.1. Default web services pages Port Used for SSL/TLS Used for Client Authentication [a] Web Services Web Service Location Certificate Manager 8080 No End Entities ca/ee/ca 8443 Yes No End Entities ca/ee/ca 8443 Yes Yes Agents ca/agent/ca 8443 Yes No Services ca/services 8443 Yes No Console pkiconsole https:// host:port /ca Key Recovery Authority 8080 No End Entities kra/ee/kra 8443 Yes No End Entities kra/ee/kra 8443 Yes Yes Agents kra/agent/kra 8443 Yes No Services kra/services 8443 Yes No Console pkiconsole https:// host:port /kra Online Certificate Status Manager 8080 No End Entities ocsp/ee/ocsp 8443 Yes No End Entities ocsp/ee/ocsp 8443 Yes Yes Agents ocsp/agent/ocsp 8443 Yes No Services ocsp/services 8443 Yes No Console pkiconsole https:// host:port /ocsp Token Key Service 8080 No End Entities tks/ee/tks 8443 Yes No End Entities tks/ee/tks 8443 Yes Yes Agents tks/agent/tks 8443 Yes No Services tks/services 8443 Yes No Console pkiconsole https:// host:port /tks Token Processing System 8080 No Unsecure Services tps/tps 8443 Yes Secure Services tps/tps 8080 No Enterprise Security Client Phone Home tps/phoneHome 8443 Yes Enterprise Security Client Phone Home tps/phoneHome 8443 Yes Yes Admin, Agent, and Operator Services [b] tps/ui [a] Services with a client authentication value of No can be reconfigured to require client authentication. 
Services which do not have either a Yes or No value cannot be configured to use client authentication. [b] The agent, admin, and operator services are all accessed through the same web services page. Each role can only access specific sections which are only visible to the members of that role. 2.2.4.5. Starting the Certificate System console Important This console is being deprecated. The CA, KRA, OCSP, and TKS subsystems have a Java interface which can be accessed to perform administrative functions. For the KRA, OCSP, and TKS, this includes very basic tasks like configuring logging and managing users and groups. For the CA, this includes other configuration settings such as creating certificate profiles and configuring publishing. The Console is opened by connecting to the subsystem instance over its SSL/TLS port using the pkiconsole utility. This utility uses the format: The subsystem_type can be ca , kra , ocsp , or tks . For example, this opens the KRA console: If DNS is not configured, then an IPv4 or IPv6 address can be used to connect to the console. For example: 2.3. Certificate System architecture overview Although each provides a different service, all Red Hat Certificate System subsystems (CA, KRA, OCSP, TKS, TPS) share a common architecture. The following architectural diagram shows the common architecture shared by all of these subsystems. 2.3.1. Java application server Java application server is a Java framework to run server applications. The Certificate System is designed to run within a Java application server. Currently the only Java application server supported is Tomcat 8. Support for other application servers may be added in the future. More information can be found at http://tomcat.apache.org/ . Each Certificate System instance is a Tomcat server instance. The Tomcat configuration is stored in server.xml . The following link provides more information about Tomcat configuration: https://tomcat.apache.org/tomcat-8.0-doc/config/ . Each Certificate System subsystem (such as CA or KRA) is deployed as a web application in Tomcat. The web application configuration is stored in a web.xml file, which is defined in Java Servlet 3.1 specification. See https://www.jcp.org/en/jsr/detail?id=340 for details. The Certificate System configuration itself is stored in CS.cfg . See Section 2.3.16, "Instance layout" for the actual locations of these files. 2.3.2. Java security manager Java services have the option of having a Security Manager which defines unsafe and safe operations for applications to perform. When the subsystems are installed, they have the Security Manager enabled automatically, meaning each Tomcat instance starts with the Security Manager running. The Security Manager is disabled if the instance is created by running pkispawn and using an override configuration file which specifies the pki_security_manager=false option under its own Tomcat section. The Security Manager can be disabled from an installed instance using the following procedure: First stop the instance: OR if using the Nuxwdog watchdog: Open the /etc/sysconfig/instance_name file, and set SECURITY_MANAGER="false" After saving, restart the instance: OR if using the Nuxwdog watchdog: When you start or restart an instance, a Java security policy is constructed or reconstructed by pkidaemon from the following files: Then, it is saved into /var/lib/pki/instance_name/conf/catalina.policy . 2.3.3. Interfaces This section describes the various interfaces of Red Hat Certificate System. 2.3.3.1. 
Servlet interface Each subsystem contains interfaces allowing interaction with various portions of the subsystem. All subsystems share a common administrative interface and have an agent interface that allows for agents to perform the tasks assigned to them. A CA Subsystem has an end-entity interface that allows end-entities to enroll in the PKI. An OCSP Responder subsystem has an end-entity interface allowing end-entities and applications to check for current certificate revocation status. Finally, a TPS has an operator interface. While the application server provides the connection entry points, Certificate System completes the interfaces by providing the servlets specific to each interface. The servlets for each subsystem are defined in the corresponding web.xml file. The same file also defines the URL of each servlet and the security requirements to access the servlets. See Section 2.3.1, "Java application server" for more information. 2.3.3.2. Administrative interface The agent interface provides Java servlets to process HTML form submissions coming from the agent entry point. Based on the information given in each form submission, the agent servlets allow agents to perform agent tasks, such as editing and approving requests for certificate approval, certificate renewal, and certificate revocation, approving certificate profiles. The agent interfaces for a KRA subsystem, or a TKS subsystem, or an OCSP Responder are specific to the subsystems. In the non-TMS setup, the agent interface is also used for inter-CIMC boundary communication for the CA-to-KRA trusted connection. This connection is protected by SSL client-authentication and differentiated by separate trusted roles called Trusted Managers . Like the agent role, the Trusted Managers (pseudo-users created for inter-CIMC boundary connection only) are required to be SSL client-authenticated. However, unlike the agent role, they are not offered any agent capability. In the TMS setup, inter-CIMC boundary communication goes from TPS-to-CA, TPS-to-KRA, and TPS-to-TKS. 2.3.3.3. End-entity interface For the CA subsystem, the end-entity interface provides Java servlets to process HTML form submissions coming from the end-entity entry point. Based on the information received from the form submissions, the end-entity servlets allow end-entities to enroll, renew certificates, revoke their own certificates, and pick up issued certificates. The OCSP Responder subsystem's End-Entity interface provides Java servlets to accept and process OCSP requests. The KRA, TKS, and TPS subsystems do not offer any End-Entity services. 2.3.3.4. Operator interface The operator interface is only found in the TPS subsystem. 2.3.4. REST interface Representational state transfer (REST) is a way to use HTTP to define and organize web services which will simplify interoperability with other applications. Red Hat Certificate System provides a REST interface to access various services on the server. The REST services in Red Hat Certificate System are implemented using the RESTEasy framework. RESTEasy is actually running as a servlet in the web application, so the RESTEasy configuration can also be found in the web.xml of the corresponding subsystem. More information about RESTEasy can be found at http://resteasy.jboss.org/ . Each REST service is defined as a separate URL. 
For example: CA certificate service: http:// <host_name> :_<port>_/ca/rest/certs/ KRA key service: http:// <host_name> :_<port>_/kra/rest/agent/keys/ TKS user service: http:// <host_name> :_<port>_/tks/rest/admin/users/ TPS group service: http:// <host_name> :_<port>_/tps/rest/admin/groups/ Some services can be accessed using plain HTTP connection, but some others may require HTTPS connection for security. The REST operation is specified as HTTP method (for example, GET, PUT, POST, DELETE). For example, to get the CA users the client will send a GET /ca/rest/users request. The REST request and response messages can be sent in XML or JSON format. For example: The REST interface can be accessed using tools such as CLI, Web UI, or generic REST client. Certificate System also provides Java, Python, and JavaScript libraries to access the services programmatically. The REST interface supports two types of authentication methods: user name and password client certificate The authentication method required by each service is defined in /usr/share/pki/ca/conf/auth-method.properties . The REST interface may require certain permissions to access the service. The permissions are defined in the ACL resources in LDAP. The REST interface are mapped to the ACL resources in the /usr/share/pki/<subsystem>/conf/acl.properties . For more information about the REST interface, see http://www.dogtagpki.org/wiki/REST . 2.3.5. JSS Java Security Services (JSS) provides a Java interface for cryptographic operations performed by NSS. JSS and higher levels of the Certificate System architecture are built with Java Native Interface (JNI), which provides access to native system libraries from within the Java Virtual Machine (JVM). This design allows us to use FIPS approved cryptographic providers such as NSS which are distributed as part of the system. JSS supports most of the security standards and encryption technologies supported by NSS. JSS also provides a pure Java interface for ASN.1 types and BER-DER encoding. 2.3.6. Tomcatjss Java-based subsystems in Red Hat Certificate System use a single JAR file called tomcatjss as a bridge between the Tomcat Server HTTP engine and JSS, the Java interface for security operations performed by NSS. Tomcatjss is a Java Secure Socket Extension (JSSE) implementation using Java Security Services (JSS) for Tomcat. Tomcatjss implements the interfaces needed to use TLS and to create TLS sockets. The socket factory, which tomcatjss implements, makes use of the various properties listed below to create a TLS server listening socket and return it to tomcat. Tomcatjss itself, makes use of our java JSS system to ultimately communicate with the native NSS cryptographic services on the machine. Tomcatjss is loaded when the Tomcat server and the Certificate System classes are loaded. The load process is described below: The server is started. Tomcat gets to the point where it needs to create the listening sockets for the Certificate System installation. The server.xml file is processed. Configuration in this file tells the system to use a socket factory implemented by Tomcatjss. For each requested socket, Tomcajss reads and processes the included attributes when it creates the socket. The resulting socket will behave as it has been asked to by those parameters. Once the server is running, we have the required set of listening sockets waiting for incoming connections to the Tomcat-based Certificate System. 
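As an illustration of the startup processing described above, the secure connector entry in server.xml is what points Tomcat at the socket factory implemented by Tomcatjss. The following sketch is only an approximation: attribute names and values differ between tomcatjss versions, and the port, paths, and certificate nickname are placeholders, so the instance's own server.xml remains the authoritative reference.

<!-- Illustrative only: exact attributes depend on the tomcatjss version in use -->
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
    scheme="https" secure="true" clientAuth="want"
    sslImplementationName="org.apache.tomcat.util.net.jss.JSSImplementation"
    certdbDir="/var/lib/pki/pki-tomcat/alias"
    passwordClass="org.apache.tomcat.util.net.jss.PlainPasswordFile"
    passwordFile="/var/lib/pki/pki-tomcat/conf/password.conf"
    serverCert="Server-Cert cert-pki-tomcat"/>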
Note that when the sockets are created at startup, Tomcatjss is the first entity in Certificate System that actually deals with the underlying JSS security services. Once the first listening socket is processed, an instance of JSS is created for use going forward. For further details about the server.xml file, see Section 9.4, "Configuration files for the tomcat engine and web services" . 2.3.7. PKCS #11 Public-Key Cryptography Standard (PKCS) #11 specifies an API used to communicate with devices that hold cryptographic information and perform cryptographic operations. Because it supports PKCS #11, Certificate System is compatible with a wide range of hardware and software devices. At least one PKCS #11 module must be available to any Certificate System subsystem instance. A PKCS #11 module (also called a cryptographic module or cryptographic service provider) manages cryptographic services such as encryption and decryption. PKCS #11 modules are analogous to drivers for cryptographic devices that can be implemented in either hardware or software. Certificate System contains a built-in PKCS #11 module and can support third-party modules. A PKCS #11 module always has one or more slots which can be implemented as physical hardware slots in a physical reader such as smart cards or as conceptual slots in software. Each slot for a PKCS #11 module can in turn contain a token, which is a hardware or software device that actually provides cryptographic services and optionally stores certificates and keys. The Certificate System defines two types of tokens, the internal and the external . The internal token is used for storing certificate trust anchors. The external token is used for storing key pairs and certificates that belong to the Certificate System subsystems. 2.3.7.1. NSS soft token (internal token) Note Certificate System uses an NSS soft token for storing certificate trust anchors. NSS Soft Token is also called an internal token or a software token. The software token consists of two files, which are usually called the certificate database (cert9.db) and key database (key4.db). The files are created during the Certificate System subsystem configuration. These security databases are located in the /var/lib/pki/instance_name/alias directory. Two cryptographic modules provided by the NSS soft token are included in the Certificate System: The default internal PKCS #11 module, which comes with two tokens: The internal crypto services token, which performs all cryptographic operations such as encryption, decryption, and hashing. The internal key storage token ("Certificate DB token"), which handles all communication with the certificate and key database files that store certificates and keys. The FIPS 140 module. This module complies with the FIPS 140 government standard for cryptographic module implementations. The FIPS 140 module includes a single, built-in FIPS 140 certificate database token, which handles both cryptographic operations and communication with the certificate and key database files. Specific instructions on how to import certificates onto the NSS soft token are in Chapter 10, Managing certificate/key crypto token . For more information on the Network Security Services (NSS), see Mozilla Developer web pages of the same name. 2.3.7.2. Hardware security module (HSM, external token) Note Certificate System uses an HSM for storing key pairs and certificates that belong to the Certificate System subsystems. Any PKCS #11 module can be used with the Certificate System. 
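For illustration, the PKCS #11 modules registered for an instance can be listed, and a vendor-supplied HSM module added, with the NSS modutil utility discussed in the next paragraph. The module name and library path below are placeholders for whatever the HSM vendor ships.

# List the PKCS #11 modules registered in the instance's NSS database
modutil -dbdir /var/lib/pki/pki-tomcat/alias -list

# Register a vendor-supplied HSM module (example name and library path)
modutil -dbdir /var/lib/pki/pki-tomcat/alias -add "example-hsm" -libfile /usr/lib64/example-hsm/libexample-pkcs11.so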
To use an external hardware token with a subsystem, load its PKCS #11 module before the subsystem is configured, and the new token is available to the subsystem. Available PKCS #11 modules are tracked in the pkcs11.txt database for the subsystem. The modutil utility is used to modify this file when there are changes to the system, such as installing a hardware accelerator to use for signing operations. For more information on modutil , see Network Security Services (NSS) at Mozilla Developer webpage. PKCS #11 hardware devices also provide key backup and recovery features for the information stored on hardware tokens. Refer to the PKCS #11 vendor documentation for information on retrieving keys from the tokens. Specific instructions on how to import certificates and to manage the HSM are in Chapter 10, Managing certificate/key crypto token . Supported Hardware Security Modules are located in Section 4.4, "Supported Hardware Security Modules" . 2.3.8. Certificate System serial number management 2.3.8.1. Serial number ranges Certificate request and serial numbers are represented by Java's big integers By default, due to their efficiency, certificate request numbers, certificate serial numbers, and replica IDs are assigned sequentially for CA subsystems. Serial number ranges are specifiable for requests, certificates, and replica IDs: Current serial number management is based on assigning ranges of sequential serial numbers. Instances request new ranges when crossing below a defined threshold. Instances store information about a newly acquired range once it is assigned to the instance. Instances continue using old ranges until all numbers are exhausted from it, and then it moves to the new range. Cloned subsystems synchronize their range assignment through replication conflicts. For new clones: Part of the current range of the master is transferred to a new clone in the process of cloning. New clones may request a new range if the transferred range is below the defined threshold. All ranges are configurable at CA instance installation time by adding a [CA] section to the PKI instance override configuration file, and adding the following name=value pairs under that section as needed. Default values which already exist in /etc/pki/default.cfg are shown in the following example: 2.3.8.2. Random serial number management In addition to sequential serial number management, Red Hat Certificate System provides optional random serial number management. Using random serial numbers is selectable at CA instance installation time by adding a [CA] section to the PKI instance override file and adding the following name=value pair under that section: If selected, certificate request numbers and certificate serial numbers will be selected randomly within the specified ranges. 2.3.9. Security domain A security domain is a registry of PKI services. Services such as CAs register information about themselves in these domains so users of PKI services can find other services by inspecting the registry. The security domain service in RHCS manages both the registration of PKI services for Certificate System subsystems and a set of shared trust policies. See Section 5.3, "Planning security domains" for further details. 2.3.10. 
Passwords and watchdog (nuxwdog) In the default setup, an RHCS subsystem instance needs to act as a client and authenticate to some other services, such as an LDAP internal database (unless TLS mutual authentication is set up, where a certificate will be used for authentication instead), the NSS token database, or sometimes an HSM with a password. The administrator is prompted to set up this password at the time of installation configuration. This password is then written to the file <instance_dir>/conf/password.conf . At the same time, an identifying string is stored in the main configuration file CS.cfg as part of the parameter cms.passwordlist . The configuration file, CS.cfg , is protected by Red Hat Enterprise Linux, and only accessible by the PKI administrators. No passwords are stored in CS.cfg . During installation, the installer will select and log into either the internal software token or a hardware cryptographic token. The login passphrase to these tokens is also written to password.conf . Configuration at a later time can also place passwords into password.conf . LDAP publishing is one example where the newly configured Directory Manager password for the publishing directory is entered into password.conf . Nuxwdog (watchdog) is a lightweight auxiliary daemon process that is used to start, stop, monitor the status of, and reconfigure server programs. It is most useful when users need to be prompted for passwords to start a server, because it caches these passwords securely in the kernel keyring, so that restarts can be done automatically in the case of a server crash. Note Nuxwdog is the only allowed watchdog service. Once installation is complete, it is possible to remove the password.conf file altogether. On restart, the nuxwdog watchdog program will prompt the administrator for the required passwords, using the parameter cms.passwordlist (and cms.tokenList if an HSM is used) as a list of passwords for which to prompt. The passwords are then cached by nuxwdog in the kernel keyring to allow automated recovery from a server crash. This automated recovery (automatic subsystem restart) happens in case of uncontrolled shutdown (crash). In case of a controlled shutdown by the administrator, administrators are prompted for passwords again. When using the watchdog service, starting and stopping an RHCS instance are done differently. For details, see Section 9.3.2, "Using the Certificate System watchdog service" . For further information, see Section 9.3, "Managing system passwords" . 2.3.11. Internal LDAP database Red Hat Certificate System employs Red Hat Directory Server (RHDS) as its internal database for storing information such as certificates, requests, users, roles, ACLs, as well as other miscellaneous internal information. Certificate System communicates with the internal LDAP database either with a password, or securely by means of SSL authentication. If certificate-based authentication is required between a Certificate System instance and Directory Server, it is important to follow instruction to set up trust between these two entities. Proper pkispawn options will also be needed for installing such Certificate System instance. For details, see Section 7.1.2, "Installing and configuring the DS instances" . 2.3.12. Security-enhanced Linux (SELinux) SELinux is a collection of mandatory access control rules which are enforced across a system to restrict unauthorized access and tampering. 
SELinux is described in more detail in Using SELinux guide for Red Hat Enterprise Linux 8 . Basically, SELinux identifies objects on a system, which can be files, directories, users, processes, sockets, or any other resource on a Linux host. These objects correspond to the Linux API objects. Each object is then mapped to a security context , which defines the type of object and how it is allowed to function on the Linux server. Objects can be grouped into domains, and then each domain is assigned the proper rules. Each security context has rules which set restrictions on what operations it can perform, what resources it can access, and what permissions it has. SELinux policies for the Certificate System are incorporated into the standard system SELinux policies. These SELinux policies apply to every subsystem and service used by Certificate System. By running Certificate System with SELinux in enforcing mode, the security of the information created and maintained by Certificate System is enhanced. Figure 2.1. CA SELinux port policy The Certificate System SELinux policies define the SELinux configuration for every subsystem instance: Files and directories for each subsystem instance are labeled with a specific SELinux context. The ports for each subsystem instance are labeled with a specific SELinux context. All Certificate System processes are constrained within a subsystem-specific domain. Each domain has specific rules that define what actions that are authorized for the domain. Any access not specified in the SELinux policy is denied to the Certificate System instance. For Certificate System, each subsystem is treated as an SELinux object, and each subsystem has unique rules assigned to it. The defined SELinux policies allow Certificate System objects run with SELinux set in enforcing mode. Every time pkispawn is run to configure a Certificate System subsystem, files and ports associated with that subsystem are labeled with the required SELinux contexts. These contexts are removed when the particular subsystems are removed using pkidestroy . The central definition in an SELinux policy is the pki_tomcat_t domain. Certificate System instances are Tomcat servers, and the pki_tomcat_t domain extends the policies for a standard tomcat_t Tomcat domain. All Certificate System instances on a server share the same domain. When each Certificate System process is started, it initially runs in an unconfined domain ( unconfined_t ) and then transitions into the pki_tomcat_t domain. This process then has certain access permissions, such as write access to log files labeled pki_tomcat_log_t , read and write access to configuration files labeled pki_tomcat_etc_rw_t , or the ability to open and write to http_port_t ports. The SELinux mode can be changed from enforcing to permissive, or even off, though this is not recommended. 2.3.13. Self-tests Red Hat Certificate System provides a Self-Test framework which allows the PKI system integrity to be checked during startup or on demand or both. In the event of a non-critical self test failure, the message will be stored in the log file, while in the event of a critical self test failure, the Certificate System subsystem will record the reasons in the logs and shut down gracefully. The administrator is expected to watch the self-test log during the startup of the subsystem if they wish to see the self-test report during startup. They can also view the log after startup. 
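For example, the self-test report can be followed while the subsystem starts by tailing its self-test log; the path below assumes the default log layout for a CA subsystem in an instance named pki-tomcat, so adjust it for your own instance and subsystem.

# Follow the CA self-test log during startup
tail -f /var/log/pki/pki-tomcat/ca/selftests.log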
When a subsystem is shut down due to a self-test failure, it will also be automatically disabled. This is done to ensure that the subsystem does not partially run and produce misleading responses. Once the issue is resolved, the subsystem can be re-enabled by running the following command on the server: For details on how to configure self-tests, see Section 13.3.2, "Configuring self-tests" . 2.3.14. Logs The Certificate System subsystems create log files that record events related to activities, such as administration, communications using any of the protocols the server supports, and various other processes employed by the subsystems. While a subsystem instance is running, it keeps a log of information and error messages on all the components it manages. Additionally, the Apache and Tomcat web servers generate error and access logs. Each subsystem instance maintains its own log files for installation, audit, and other logged functions. Log plugin modules are listeners which are implemented as #JavaTM classes and are registered in the configuration framework. All the log files and rotated log files, except for audit logs, are located in whatever directory was specified in pki_subsystem_log_path when the instance was created with pkispawn . Regular audit logs are located in the log directory with other types of logs, while signed audit logs are written to /var/log/pki/instance_name/subsystem_name/signedAudit . The default location for logs can be changed by modifying the configuration. For details about log configuration during the installation and additional information, see Chapter 13, Configuring logs . For details about log administration after the installation, see Chapter 12 Configuring Subsystem Logs in the Administration Guide (Common Criteria Edition) . 2.3.14.1. Audit log The audit log contains records for selectable events that have been set up as recordable events. You can configure audit logs also to be signed for integrity-checking purposes. Note Audit records should be kept according to the audit retention rules specified in - Section 13.4, "Audit retention" . 2.3.15. Signed Audit Logs The Certificate System maintains audit logs for all events, such as requesting, issuing and revoking certificates and publishing CRLs. These logs are then signed. This allows authorized access or activity to be detected. An outside auditor can then audit the system if required. The assigned auditor user account is the only account which can view the signed audit logs. This user's certificate is used to sign and encrypt the logs. Audit logging is configured to specify the events that are logged. Signed audit logs are written to /var/log/pki/instance_name/subsystem_name/signedAudit. However, you can change the default location for logs by modifying the configuration. For more information, see the "Displaying and verifying signed audit logs" in the Planning, Installation and Deployment Guide (Common Criteria Edition) . 2.3.15.1. Debug logs Debug logs, which are enabled by default, are maintained for all subsystems, with varying degrees and types of information. Debug logs for each subsystem record much more detailed information than system, transaction, and access logs. Debug logs contain very specific information for every operation performed by the subsystem, including plugins and servlets which are run, connection information, and server request and response messages. 
Services which are recorded to the debug log include authorization requests, processing certificate requests, certificate status checks, and archiving and recovering keys, and access to web services. The debug logs record detailed information about the processes for the subsystem. Each log entry has the following format: The message can be a return message from the subsystem or contain values submitted to the subsystem. For example, the TKS records this message for connecting to an LDAP server: The processor is main , and the message is the message from the server about TKS engine status, and there is no servlet. The INFO is LogLevel , its value could be INFO , WARNING , SEVERE , depending on the log level. The CA, on the other hand, records information about certificate operations as well as subsystem connections: In this case, the processor is the HTTP protocol over the CA's agent port, while it specifies the servlet for handling profiles and contains a message giving a profile parameter (the subsystem owner of a request) and its value (that the KRA initiated the request). Example 2.1. CA Certificate request log messages Likewise, the OCSP shows OCSP request information: 2.3.15.2. Installation logs All subsystems keep an install log. Every time a subsystem is created either through the initial installation or creating additional instances with pkispawn , an installation file with the complete debug output from the installation, including any errors and, if the installation is successful, the URL and PIN to the configuration interface for the instance. The file is created in the /var/log/pki/ directory for the instance with a name in the form pki-subsystem_name-spawn.timestamp.log . Each line in the install log follows a step in the installation process. Example 2.2. CA install log 2.3.15.3. Tomcat error and access logs The CA, KRA, OCSP, TKS, and TPS subsystems use a Tomcat web server instance for their agent and end-entities' interfaces. Error and access logs are created by the Tomcat web server, which are installed with the Certificate System and provide HTTP services. The error log contains the HTTP error messages the server has encountered. The access log lists access activity through the HTTP interface. Logs created by Tomcat: admin. timestamp catalina. timestamp catalina.out host-manager. timestamp localhost. timestamp localhost_access_log. timestamp manager. timestamp These logs are not available or configurable within the Certificate System; they are only configurable within Apache or Tomcat. See the Apache documentation for information about configuring these logs. 2.3.15.4. Self-tests log The self-tests log records information obtained during the self-tests run when the server starts or when the self-tests are manually run. The tests can be viewed by opening this log. This log is not configurable through the Console. This log can only be configured by changing settings in the CS.cfg file. The information about logs in this section does not pertain to this log. See Section 2.3.13, "Self-tests" for more information about self-tests. 2.3.15.5. journalctl logs When starting a Certificate System instance, there is a short period of time before the logging subsystem is set up and enabled. During this time, log contents are written to standard out, which is captured by systemd and exposed via the journalctl utility. 
To view these logs, run the following command: OR if using the Nuxwdog watchdog: Often it is helpful to watch these logs as the instance is starting (for example, in the event of a self-test failure on startup). To do this, run these commands in a separate console prior to starting the instance: OR if using the Nuxwdog watchdog: 2.3.16. Instance layout Each Certificate System instance depends on a number of files. Some of them are located in instance-specific folders, while some others are located in a common folder which is shared with other server instances. For example, the server configuration files are stored in /etc/pki/instance_name/server.xml , which is instance-specific, but the CA servlets are defined in /usr/share/pki/ca/webapps/ca/WEB-INF/web.xml , which is shared by all server instances on the system. 2.3.16.1. File and directory locations for Certificate System Certificate System servers are Tomcat instances which consist of one or more Certificate System subsystems. Certificate System subsystems are web applications that provide specific type of PKI functions. General, shared subsystem information is contained in non-relocatable, RPM-defined shared libraries, Java archive files, binaries, and templates. These are stored in a fixed location. The directories are instance specific, tied to the instance name. In these examples, the instance name is pki-tomcat ; the true value is whatever is specified at the time the subsystem is created with pkispawn . The directories contain customized configuration files and templates, profiles, certificate databases, and other files for the subsystem. Table 2.2. Tomcat instance information Setting Value Main Directory /var/lib/pki/pki-tomcat Configuration Directory /etc/pki/pki-tomcat Configuration File /etc/pki/pki-tomcat/server.xml /etc/pki/pki-tomcat/password.conf Security Databases /var/lib/pki/pki-tomcat/alias Subsystem Certificates SSL server certificate Subsystem certificate [a] Log Files /var/log/pki/pki-tomcat Web Services Files /usr/share/pki/server/webapps/ROOT - Main page /usr/share/pki/server/webapps/pki/admin - Admin templates /usr/share/pki/server/webapps/pki/js - JavaScript libraries [a] The subsystem certificate is always issued by the security domain so that domain-level operations that require client authentication are based on this subsystem certificate. Note The /var/lib/pki/instance_name/conf/ directory is a symbolic link to the /etc/pki/instance_name/ directory. 2.3.16.2. CA subsystem information The directories are instance specific, tied to the instance name. In these examples, the instance name is pki-tomcat ; the true value is whatever is specified at the time the subsystem is created with pkispawn . Table 2.3. CA subsystem information Setting Value Main Directory /var/lib/pki/pki-tomcat/ca Configuration Directory /etc/pki/pki-tomcat/ca Configuration File /etc/pki/pki-tomcat/ca/CS.cfg Subsystem Certificates CA signing certificate OCSP signing certificate (for the CA's internal OCSP service) Audit log signing certificate Log Files /var/log/pki/pki-tomcat/ca Install Logs /var/log/pki/pki-ca-spawn.YYYYMMDDhhmmss.log Profile Files /var/lib/pki/pki-tomcat/ca/profiles/ca Email Notification Templates /var/lib/pki/pki-tomcat/ca/emails Web Services Files /usr/share/pki/ca/webapps/ca/agent - Agent services /usr/share/pki/ca/webapps/ca/admin - Admin services /usr/share/pki/ca/webapps/ca/ee - End user services 2.3.16.3. KRA subsystem information The directories are instance specific, tied to the instance name. 
In these examples, the instance name is pki-tomcat ; the true value is whatever is specified at the time the subsystem is created with pkispawn . Table 2.4. KRA subsystem information Setting Value Main Directory /var/lib/pki/pki-tomcat/kra Configuration Directory /etc/pki/pki-tomcat/kra Configuration File /etc/pki/pki-tomcat/kra/CS.cfg Subsystem Certificates Transport certificate Storage certificate Audit log signing certificate Log Files /var/log/pki/pki-tomcat/kra Install Logs /var/log/pki/pki-kra-spawn.YYYYMMDDhhmmss.log Web Services Files /usr/share/pki/kra/webapps/kra/agent - Agent services /usr/share/pki/kra/webapps/kra/admin - Admin services 2.3.16.4. OCSP subsystem information The directories are instance specific, tied to the instance name. In these examples, the instance name is pki-tomcat ; the true value is whatever is specified at the time the subsystem is created with pkispawn . Table 2.5. OCSP subsystem information Setting Value Main Directory /var/lib/pki/pki-tomcat/ocsp Configuration Directory /etc/pki/pki-tomcat/ocsp Configuration File /etc/pki/pki-tomcat/ocsp/CS.cfg Subsystem Certificates OCSP signing certificate Audit log signing certificate Log Files /var/log/pki/pki-tomcat/ocsp Install Logs /var/log/pki/pki-ocsp-spawn.YYYYMMDDhhmmss.log Web Services Files /usr/share/pki/ocsp/webapps/ocsp/agent - Agent services /usr/share/pki/ocsp/webapps/ocsp/admin - Admin services 2.3.16.5. TKS subsystem information The directories are instance specific, tied to the instance name. In these examples, the instance name is pki-tomcat ; the true value is whatever is specified at the time the subsystem is created with pkispawn . Table 2.6. TKS subsystem information Setting Value Main Directory /var/lib/pki/pki-tomcat/tks Configuration Directory /etc/pki/pki-tomcat/tks Configuration File /etc/pki/pki-tomcat/tks/CS.cfg Subsystem Certificates Audit log signing certificate Log Files /var/log/pki/pki-tomcat/tks Install Logs /var/log/pki/pki-tomcat/pki-tks-spawn.YYYYMMDDhhmmss.log 2.3.16.6. TPS subsystem information The directories are instance specific, tied to the instance name. In these examples, the instance name is pki-tomcat ; the true value is whatever is specified at the time the subsystem is created with pkispawn . Table 2.7. TPS subsystem information Setting Value Main Directory /var/lib/pki/pki-tomcat/tps Configuration Directory /etc/pki/pki-tomcat/tps Configuration File /etc/pki/pki-tomcat/tps/CS.cfg Subsystem Certificates Audit log signing certificate Log Files /var/log/pki/pki-tomcat/tps Install Logs /var/log/pki/pki-tps-spawn.YYYYMMDDhhhmmss.log Web Services Files /usr/share/pki/tps/webapps/tps - TPS services 2.3.16.7. Shared Certificate System subsystem file locations There are some directories used by or common to all Certificate System subsystem instances for general server operations, listed in Table 2.8, "Subsystem file locations" . Table 2.8. Subsystem file locations Directory Location Contents /usr/share/pki Contains common files and templates used to create Certificate System instances. Along with shared files for all subsystems, there are subsystem-specific files in subfolders: pki/ca (CA) pki/kra (KRA) pki/ocsp (OCSP) pki/tks (TKS) pki/tps (TPS) /usr/bin Contains the pkispawn and pkidestroy instance configuration scripts and tools (Java, native, and security) shared by the Certificate System subsystems. /usr/share/java/pki Contains Java archive files shared by local Tomcat web applications and shared by the Certificate System subsystems. 2.4. 
PKI with Certificate System The Certificate System is comprised of subsystems which each contribute different functions of a public key infrastructure. A PKI environment can be customized to fit individual needs by implementing different features and functions for the subsystems. Note A conventional PKI environment provides the basic framework to manage certificates stored in software databases. This is a non-TMS environment, since it does not manage certificates on smart cards. A TMS environment manages the certificates on smart cards. At a minimum, a non-TMS requires only a CA, but a non-TMS environment can use OCSP responders and KRA instances as well. 2.4.1. Issuing certificates As stated, the Certificate Manager is the heart of the Certificate System. It manages certificates at every stage, from requests through enrollment (issuing), renewal, and revocation. The Certificate System supports enrolling and issuing certificates and processing certificate requests from a variety of end entities, such as web browsers, servers, and virtual private network (VPN) clients. Issued certificates conform to X.509 version 3 standards. For more information, see 5.1 About Enrolling and Renewing Certificates in the Administration Guide (Common Criteria Edition) . 2.4.1.1. The enrollment process An end entity enrolls in the PKI environment by submitting an enrollment request through the end-entity interface. There can be many kinds of enrollment that use different enrollment methods or require different authentication methods. Different interfaces can also accept different types of Certificate Signing Requests (CSR). The Certificate Manager supports different ways to submit CSRs, such as using the graphical interface and command-line tools. 2.4.1.1.1. Enrollment using the user interface For each enrollment through the user interface, there is a separate enrollment page created that is specific to the type of enrollment, type of authentication, and the certificate profiles associated with the type of certificate. The forms associated with enrollment can be customized for both appearance and content. Alternatively, the enrollment process can be customized by creating certificate profiles for each enrollment type. Certificate profiles dynamically-generate forms which are customized by configuring the inputs associated with the certificate profile. Different interfaces can also accept different types of Certificate Signing Requests (CSR). When an end entity enrolls in a PKI by requesting a certificate, the following events can occur, depending on the configuration of the PKI and the subsystems installed: The end entity provides the information in one of the enrollment forms and submits a request. The information gathered from the end entity is customizable in the form depending on the information collected to store in the certificate or to authenticate against the authentication method associated with the form. The form creates a request that is then submitted to the Certificate Manager. The enrollment form triggers the creation of the public and private keys or for dual-key pairs for the request. The end entity provides authentication credentials before submitting the request, depending on the authentication type. This can be LDAP authentication, PIN-based authentication, or certificate-based authentication. The request is submitted either to an agent-approved enrollment process or an automated process. 
The agent-approved process, which involves no end-entity authentication, sends the request to the request queue in the agent services interface, where an agent must processes the request. An agent can then modify parts of the request, change the status of the request, reject the request, or approve the request. Automatic notification can be set up so an email is sent to an agent any time a request appears in the queue. Also, an automated job can be set to send a list of the contents of the queue to agents on a pre configured schedule. The automated process, which involves end-entity authentication, processes the certificate request as soon as the end entity successfully authenticates. The form collects information about the end entity from an LDAP directory when the form is submitted. For certificate profile-based enrollment, the defaults for the form can be used to collect the user LDAP ID and password. The certificate profile associated with the form determine aspects of the certificate that is issued. Depending on the certificate profile, the request is evaluated to determine if the request meets the constraints set, if the required information is provided, and the contents of the new certificate. The form can also request that the user export the private encryption key. If the KRA subsystem is set up with this CA, the end entity's key is requested, and an archival request is sent to the KRA. This process generally requires no interaction from the end entity. The certificate request is either rejected because it did not meet the certificate profile or authentication requirements, or a certificate is issued. The certificate is delivered to the end entity. In automated enrollment, the certificate is delivered to the user immediately. Since the enrollment is normally through an HTML page, the certificate is returned as a response on another HTML page. In agent-approved enrollment, the certificate can be retrieved by serial number or request Id in the end-entity interface. If the notification feature is set up, the link where the certificate can be obtained is sent to the end user. An automatic notice can be sent to the end entity when the certificate is issued or rejected. The new certificate is stored in the Certificate Manager's internal database. If publishing is set up for the Certificate Manager, the certificate is published to a file or an LDAP directory. The internal OCSP service checks the status of certificates in the internal database when a certificate status request is received. The end-entity interface has a search form for certificates that have been issued and for the CA certificate chain. By default, the user interface supports CSR in the PKCS #10 and Certificate Request Message Format (CRMF). 2.4.1.1.2. Enrollment using the command line This section describes the general workflows when enrolling certificates using the command line. 2.4.1.1.2.1. Enrolling using the pki utility For details, see: The pki-cert(1) man page 2.5 Command-Line Interfaces in the Administration Guide (Common Criteria Edition) . 2.4.1.1.2.2. Enrolling with CMC To enroll a certificate with CMC, proceed as follows: Generate a PKCS #10 or CRMF certificate signing request (CSR) using a utility, such as certutil , PKCS10Client , CRMFPopClient , or pki client-cert-request . For details, see 5.2 Creating certificate signing requests in the Administration Guide (Common Criteria Edition) . 
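One way to perform this first step is with the PKCS10Client utility, sketched below; the NSS database directory, password, and subject DN are placeholders.

# Generate an RSA key pair and a PKCS #10 CSR in a client NSS database
PKCS10Client -d ~/.dogtag/nssdb -p <nss_db_password> -a rsa -l 2048 -n "CN=user.example.com" -o user.csr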
Note If key archival is enabled in the Key Recovery Agent (KRA), use the CRMFPopClient utility with the KRA's transport certificate in Privacy Enhanced Mail (PEM) format set in the kra.transport file. Use the CMCRequest utility to convert the CSR into a CMC request. The CMCRequest utility uses a configuration file as input. This file specifies, for example, the input path to the CSR, the CSR's format, the output CMC request file, or nickname of the signing certificate. For further details and examples, see the CMCRequest(1) man page. Use the HttpClient utility to send the CMC request to the CA. HttpClient uses a configuration file with settings, such as the path to the input CMC request file, the path to the output CMC response file, the nickname of the TLS mutual authentication certificate, and the servlet complete with the enrollment profile. If the HttpClient command succeeds, the utility receives a PKCS #7 chain with CMC status controls in a CMC response from the CA. For details about what parameters the utility provides, enter the HttpClient command without any parameters. Use the CMCResponse utility to check the issuance result of the CMC response file generated by HttpClient . If the request is successful, CMCResponse displays the certificate chain in a readable format along with status information. You can also display the PEM of each certificate in chain using the -v option. For further details, see the CMCResponse(1) man page. Import the new certificate into the application. For details, follow the instructions of the application to which you want to import the certificate. Note The certificate retrieved by HttpClient is in CMC response format that contains PKCS #7. If the application supports only Base64-encoded certificates, use the -v option to show the PEM of each certificate in the chain. Additionally, certain applications require a header and footer for certificates in Privacy Enhanced Mail (PEM) format. If these are required, add them manually to the PEM file. 2.4.1.1.2.2.1. CMC Enrollment without POP In situations when Proof Of Possession (POP) is missing, the HttpClient utility receives an EncryptedPOP CMC status, which is displayed by the CMCResponse command. In this case, enter the CMCRequest command again with different parameters in the configuration file. For details, see 5.3.1 CMC Enrollment Process in the Administration Guide (Common Criteria Edition) . 2.4.1.1.2.2.2. Signed CMC requests CMC requests can either be signed by a user or a CA agent: If an agent signs the request, set the authentication method in the profile to CMCAuth . If a user signs the request, set the authentication method in the profile to CMCUserSignedAuth . For details, see 8.3 CMC Authentication Plugins in the Administration Guide (Common Criteria Edition) . 2.4.1.1.2.2.3. Unsigned CMC requests When the CMCUserSignedAuth authentication plugin is configured in the profile, you must use an unsigned CMC request in combination with the Shared Secret authentication mechanism. Note Unsigned CMC requests are also called self-signed CMC requests . For details, see 8.3 CMC Authentication Plugins in the Administration Guide (Common Criteria Edition) , and Section 9.6.3, "Enabling the CMC Shared Secret feature" . 2.4.1.1.2.2.4. The Shared Secret workflow Certificate System provides the Shared Secret authentication mechanism for CMC requests according to RFC 5272 . In order to protect the passphrase, an issuance protection certificate must be provided when using the CMCSharedToken command. 
The issuance protection certificate works similar to the KRA transport certificate. Note This section assumes that you have enabled the CMC Shared Secret feature, by following Section 9.6.3, "Enabling the CMC Shared Secret feature" . Shared Secret created by the end entity user (preferred) The following describes the workflow, if the user generates the shared secret: The end entity user obtains the issuance protection certificate from the CA administrator. The end entity user uses the CMCSharedToken utility to generate a shared secret token. See 8.4.1 Creating a Shared Secret token in the Administration Guide (Common Criteria Edition). Note The -p option sets the passphrase that is shared between the CA and the user, not the password of the token. The end entity user sends the encrypted shared token generated by the CMCSharedToken utility to the administrator. The administrator adds the shared token into the shrTok attribute in the user's LDAP entry. See 8.4.2 Setting a CMC Shared Secret in the Administration Guide (Common Criteria Edition). The end entity user uses the passphrase to set the witness.sharedSecret parameter in the configuration file passed to the CMCRequest utility. See 5.2.2.2 Using PKCS10Client to create a CSR for SharedSecret-based CMC and 5.2.3.2 Using CRMFPopClient to create a CSR for SharedSecret-based CMC in the Administration Guide (Common Criteria Edition). For further details, see the CMCSharedToken(1) man page. Shared Secret created by the CA administrator The following describes the workflow, if the CA administrator generates the shared secret for a user: The administrator uses the CMCSharedToken utility to generate a shared secret token for the user. See 8.4.1 Creating a Shared Secret token in the Administration Guide (Common Criteria Edition). Note The -p option sets the passphrase that is shared between the CA and the user, not the password of the token. The administrator adds the shared token into the shrTok attribute in the user's LDAP entry. See 8.4.2 Setting a CMC Shared Secret in the Administration Guide (Common Criteria Edition). The administrator shares the passphrase with the user. The end entity user uses the passphrase to set the witness.sharedSecret parameter in the configuration file passed to the CMCRequest utility. See 5.2.2.2 Using PKCS10Client to create a CSR for SharedSecret-based CMC and 5.2.3.2 Using CRMFPopClient to create a CSR for SharedSecret-based CMC in the Administration Guide (Common Criteria Edition). 2.4.1.1.2.2.5. Simple CMC requests Certificate System allows simple CMC requests. However, this process does not support the same level of security requirements as full CMC requests and, therefore, must only be used in a secure environment. When using simple CMC requests, set the following in the HttpClient utility's configuration file: 2.4.1.2. Certificate profiles The Certificate System uses certificate profiles to configure the content of the certificate, the constraints for issuing the certificate, the enrollment method used, and the input and output forms for that enrollment. A single certificate profile is associated with issuing a particular type of certificate. A set of certificate profiles is included for the most common certificate types; the profile settings can be modified. Certificate profiles are configured by an administrator, and then sent to the agent services page for agent approval. Once a certificate profile is approved, it is enabled for use. 
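Certificate profile administration of this kind can also be performed from the command line with the pki utility. The following sketch assumes an administrator certificate nicknamed caadmin, an agent certificate nicknamed caagent, and the default caUserCert profile; these names, and any client NSS database options omitted here, are placeholders to adapt to your deployment, and the pki(1) man page is the authoritative reference for the options.
# List the certificate profiles known to the CA
pki -n caadmin ca-profile-find
# Review a profile's configuration before it is sent for agent approval
pki -n caadmin ca-profile-show caUserCert
# As an agent, enable (approve) the profile so it can be used for enrollment
pki -n caagent ca-profile-enable caUserCert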
In case of a UI-enrollment, a dynamically-generated HTML form for the certificate profile is used in the end-entities page for certificate enrollment, which calls on the certificate profile. In case of a command line-based enrollment, the certificate profile is called upon to perform the same processing, such as authentication, authorization, input, output, defaults, and constraints. The server verifies that the defaults and constraints set in the certificate profile are met before acting on the request and uses the certificate profile to determine the content of the issued certificate. The Certificate Manager can issue certificates with any of the following characteristics, depending on the configuration in the profiles and the submitted certificate request: Certificates that are X.509 version 3-compliant Unicode support for the certificate subject name and issuer name Support for empty certificate subject names Support for customized subject name components Support for customized extensions By default, the certificate enrollment profiles are stored in <instance directory>/ca/profiles/ca with names in the format of <profile id>.cfg . LDAP-based profiles are possible with proper pkispawn configuration parameters. 2.4.1.3. Authentication for certificate enrollment Certificate System provides authentication options for certificate enrollment. These include agent-approved enrollment, in which an agent processes the request, and automated enrollment, in which an authentication method is used to authenticate the end entity and then the CA automatically issues a certificate. CMC enrollment is also supported, which automatically processes a request approved by an agent. 2.4.1.4. Cross-pair certificates It is possible to create a trusted relationship between two separate CAs by issuing and storing cross-signed certificates between these two CAs. By using cross-signed certificate pairs, certificates issued outside the organization's PKI can be trusted within the system. 2.4.2. Renewing certificates When certificates reach their expiration date, they can either be allowed to lapse, or they can be renewed. Renewal regenerates a certificate request using the existing key pairs for that certificate, and then resubmits the request to Certificate Manager. The renewed certificate is identical to the original (since it was created from the same profile using the same key material) with one exception - it has a different, later expiration date. Renewal can make managing certificates and relationships between users and servers much smoother, because the renewed certificate functions precisely as the old one. For user certificates, renewal allows encrypted data to be accessed without any loss. 2.4.3. Publishing certificates and CRLs Certificates can be published to files and an LDAP directory, and CRLs to files, an LDAP directory, and an OCSP responder. The publishing framework provides a robust set of tools to publish to all three places and to set rules to define with more detail which types of certificates or CRLs are published where. 2.4.4. Revoking certificates and checking status End entities can request that their own certificates be revoked. When an end entity makes the request, the certificate has to be presented to the CA. If the certificate and the keys are available, the request is processed and sent to the Certificate Manager, and the certificate is revoked. The Certificate Manager marks the certificate as revoked in its database and adds it to any applicable CRLs. 
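Revocation can also be driven from the command line with the pki utility, as the next paragraphs and Section 2.4.4.1 describe. The following sketch shows an agent placing a certificate on hold and then revoking it; the serial number and the caagent nickname are placeholders, and the pki-cert(1) man page is the authoritative reference for the options.
# Temporarily revoke (hold) a certificate by serial number
pki -n caagent ca-cert-hold 0x1234
# Release the hold if the certificate turns out not to be compromised
pki -n caagent ca-cert-release-hold 0x1234
# Permanently revoke the certificate, stating the reason
pki -n caagent ca-cert-revoke 0x1234 --reason Key_Compromise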
An agent can revoke any certificate issued by the Certificate Manager by searching for the certificate in the agent services interface and then marking it revoked. Once a certificate is revoked, it is marked revoked in the database and in the publishing directory, if the Certificate Manager is set up for publishing. If the internal OCSP service has been configured, the service determines the status of certificates by looking them up in the internal database. Automated notifications can be set to send email messages to end entities when their certificates are revoked by enabling and configuring the certificate revoked notification message. 2.4.4.1. Revoking certificates Users can revoke their certificates using: The end-entity pages. For details, see 6.1.7 Certificate Revocation Pages in the Administration Guide (Common Criteria Edition) . The CMCRequest utility on the command line. For details, see 6.2.1 Performing a CMC Revocation in the Administration Guide (Common Criteria Edition) . The pki utility on the command line. For details, see the pki-cert(1) man page. 2.4.4.2. Certificate status 2.4.4.2.1. CRLs The Certificate System can create certificate revocation lists (CRLs) from a configurable framework which allows user-defined issuing points, so a CRL can be created for each issuing point. Delta CRLs can also be created for any issuing point that is defined. CRLs can be issued for each type of certificate, for a specific subset of a type of certificate, or for certificates generated according to a profile or list of profiles. The extensions used and the frequency and intervals at which CRLs are published can all be configured. The Certificate Manager issues X.509-standard CRLs. A CRL can be automatically updated whenever a certificate is revoked or at specified intervals. 2.4.4.2.2. OCSP services The Certificate System CA supports the Online Certificate Status Protocol (OCSP) as defined in PKIX standard RFC 6960 . The OCSP protocol enables OCSP-compliant applications to determine the state of a certificate, including the revocation status, without having to directly check a CRL published by the CA; instead, the validation authority, which is also called an OCSP responder , performs this check on behalf of the application. A CA is set up to issue certificates that include the Authority Information Access extension, which identifies an OCSP responder that can be queried for the status of the certificate. The CA periodically publishes CRLs to an OCSP responder. The OCSP responder maintains the CRL it receives from the CA. An OCSP-compliant client sends requests containing all the information required to identify the certificate to the OCSP responder for verification. The applications determine the location of the OCSP responder from the value of the Authority Information Access extension in the certificate being validated. The OCSP responder determines if the request contains all the information required to process it. If it does not, or if it is not enabled for the requested service, a rejection notice is sent. If it does have enough information, it processes the request and sends back a report stating the status of the certificate. 2.4.4.2.2.1. OCSP response signing Every response that the client receives, including a rejection notification, is digitally signed by the responder; the client is expected to verify the signature to ensure that the response came from the responder to which it submitted the request.
The key the responder uses to sign the message depends on how the OCSP responder is deployed in a PKI setup. RFC 6960 recommends that the key used to sign the response belong to one of the following: The CA that issued the certificate whose status is being checked. A responder with a public key trusted by the client. Such a responder is called a trusted responder . A responder that holds a specially marked certificate issued to it directly by the CA that revokes the certificates and publishes the CRL. Possession of this certificate by a responder indicates that the CA has authorized the responder to issue OCSP responses for certificates revoked by the CA. Such a responder is called a CA-designated responder or a CA-authorized responder . The end-entities page of a Certificate Manager includes a form for manually requesting a certificate for the OCSP responder. The default enrollment form includes all the attributes that identify the certificate as an OCSP responder certificate. The required certificate extensions, such as OCSPNoCheck and Extended Key Usage, can be added to the certificate when the certificate request is submitted. 2.4.4.2.2.2. OCSP responses The OCSP response that the client receives indicates the current status of the certificate as determined by the OCSP responder. The response could be any of the following: Good or Verified . Specifies a positive response to the status inquiry, meaning the certificate has not been revoked. It does not necessarily mean that the certificate was issued or that it is within the certificate's validity interval. Response extensions may be used to convey additional information on assertions made by the responder regarding the status of the certificate. Revoked . Specifies that the certificate has been revoked, either permanently or temporarily. Based on the status, the client decides whether to validate the certificate. Note The OCSP responder will never return a response of Unknown . The response will always be either Good or Revoked . 2.4.4.2.2.3. OCSP services There are two ways to set up OCSP services: The OCSP built into the Certificate Manager The Online Certificate Status Manager subsystem In addition to the built-in OCSP service, the Certificate Manager can publish CRLs to an OCSP-compliant validation authority. CAs can be configured to publish CRLs to the Certificate System Online Certificate Status Manager. The Online Certificate Status Manager stores each Certificate Manager's CRL in its internal database and uses the appropriate CRL to verify the revocation status of a certificate when queried by an OCSP-compliant client. The Certificate Manager can generate and publish CRLs whenever a certificate is revoked and at specified intervals. Because the purpose of an OCSP responder is to facilitate immediate verification of certificates, the Certificate Manager should publish the CRL to the Online Certificate Status Manager every time a certificate is revoked. Publishing only at intervals means that the OCSP service is checking an outdated CRL. Note If the CRL is large, the Certificate Manager can take a considerable amount of time to publish the CRL. The Online Certificate Status Manager stores each Certificate Manager's CRL in its internal database and uses it as the CRL to verify certificates. The Online Certificate Status Manager can also use the CRL published to an LDAP directory, meaning the Certificate Manager does not have to update the CRLs directly to the Online Certificate Status Manager. 2.4.5. 
Archiving, recovering, and rotating keys In the world of PKI, private key archival allows parties to recover encrypted data if the private key is lost. Private keys can be lost for various reasons, such as hardware failure, forgotten passwords, lost smart cards, or an incapacitated password holder. This archival and recovery feature is provided by the Key Recovery Authority (KRA) subsystem of RHCS. Only keys that are used exclusively for encrypting data should be archived; signing keys in particular should never be archived. Having two copies of a signing key makes it impossible to identify with certainty who used the key; a second archived copy could be used to impersonate the digital identity of the original key owner. 2.4.5.1. Archiving keys There are two types of key archival mechanisms provided by the KRA: Client-side key generation : With this mechanism, clients generate CSRs in CRMF format and submit the requests to the CA (with proper KRA setup) for enrollment and key archival. See 5.2.3 Creating a CSR Using CRMFPopClient in the Administration Guide (Common Criteria Edition) . Server-side key generation : With this mechanism, appropriately configured certificate enrollment profiles trigger the PKI keys to be generated on the KRA and, optionally, archived along with the newly issued certificates. See 5.2.3 Generating CSRs Using Server-Side Key Generation in the Administration Guide (Common Criteria Edition) . The KRA automatically archives private encryption keys if archiving is configured. The KRA stores private encryption keys in a secure key repository; each key is encrypted and stored as a key record and is given a unique key identifier. The archived copy of the key remains wrapped with the KRA's storage key. It can be decrypted, or unwrapped, only by using the private key of the corresponding storage certificate. A combination of one or more key recovery (or KRA) agents' certificates authorizes the KRA to complete the key recovery, that is, to retrieve its private storage key and use it to decrypt and recover an archived private key. See Section 12.3.1, "Configuring agent-approved key recovery in the command line" . The KRA indexes stored keys by key number, owner name, and a hash of the public key, allowing for highly efficient searching. The key recovery agents have the privilege to insert, delete, and search for key records. When the key recovery agents search by the key ID, only the key that corresponds to that ID is returned. When the agents search by username, all stored keys belonging to that owner are returned. When the agents search by the public key in a certificate, only the corresponding private key is returned. When a Certificate Manager receives a certificate request that contains the key archival option, it automatically forwards the request to the KRA to archive the encryption key. The private key is encrypted by the transport key, and the KRA receives the encrypted copy and stores the key in its key repository. To archive the key, the KRA uses two special key pairs: A transport key pair and corresponding certificate. A storage key pair. Figure 2.2, "How the Key Archival Process works in client-side key generation" illustrates how the key archival process occurs when an end entity requests a certificate in the case of client-side key generation. Figure 2.2. How the Key Archival Process works in client-side key generation The client generates a CRMF request and submits it through the CA's enrollment portal.
The client's private key is wrapped within the CRMF request and can only be unwrapped by the KRA. Detecting that it is a CRMF request with the key archival option, the CA forwards the request to the KRA for private key archival. The KRA decrypts (unwraps) the user private key, and after confirming that the private key corresponds to the public key, the KRA encrypts (wraps) it again before storing it in its internal LDAP database. Once the private encryption key has been successfully stored, the KRA responds to the CA, confirming that the key has been successfully archived. The CA sends the request down its Enrollment Profile Framework for certificate information content creation as well as validation. When everything passes, it then issues the certificate and sends it back to the end entity in its response. 2.4.5.2. Recovering keys The KRA supports agent-initiated key recovery . In agent-initiated recovery, designated recovery agents use the key recovery form on the KRA agent services portal to process and approve key recovery requests. With the approval of a specified number of agents, an organization can recover keys when the key's owner is unavailable or when keys have been lost. Through the KRA agent services portal, key recovery agents can collectively authorize and retrieve private encryption keys and associated certificates into a PKCS #12 package, which can then be imported into the client. In key recovery authorization, one of the key recovery agents informs all required recovery agents about an impending key recovery. All recovery agents access the KRA key recovery portal. One of the agents initiates the key recovery process. The KRA returns a notification to the agent that includes a recovery authorization reference number identifying the particular key recovery request that the agent is required to authorize. Each agent uses the reference number and authorizes key recovery separately. The KRA supports asynchronous recovery , meaning that each step of the recovery process - the initial request and each subsequent approval or rejection - is stored in the KRA's internal LDAP database, under the key entry. The status data for the recovery process can be retrieved even if the original browser session is closed or the KRA is shut down. Agents can search for the key to recover without using a reference number. This asynchronous recovery option is illustrated in Figure 2.3, "Asynchronous recovery" . Figure 2.3. Asynchronous recovery The KRA informs the agent who initiated the key recovery process of the status of the authorizations. When all of the authorizations are entered, the KRA checks the information. If the information presented is correct, it retrieves the requested key and returns it, along with the corresponding certificate, in the form of a PKCS #12 package to the agent who initiated the key recovery process. Warning The PKCS #12 package contains the encrypted private key. To minimize the risk of key compromise, the recovery agent must use a secure method to deliver the PKCS #12 package and password to the key recipient. The agent should use a strong password to encrypt the PKCS #12 package and set up an appropriate delivery mechanism. The key recovery agent scheme configures the KRA to recognize to which group the key recovery agents belong and specifies how many of these recovery agents are required to authorize a key recovery request before the archived key is restored. Important The above information refers to using a web browser, such as Firefox.
However, functionality critical to KRA usage is no longer supported in newer versions of browsers. In such cases, it is necessary to use the pki utility to replicate this behavior. For more information, see the pki(1) and pki-key(1) man pages, or run CRMFPopClient --help and man CMCRequest . Apart from storing asymmetric keys, the KRA can also store symmetric keys or secrets similar to symmetric keys, such as volume encryption secrets, or even passwords and passphrases. The pki utility supports options that enable storing and retrieving these other types of secrets. 2.4.5.3. KRA transport key rotation KRA transport rotation allows for a seamless transition between CA and KRA subsystem instances using a current and a new transport key. This allows KRA transport keys to be periodically rotated for enhanced security by allowing both old and new transport keys to operate during the time of the transition; individual subsystem instances take turns being configured while other clones continue to serve with no downtime. In the KRA transport key rotation process, a new transport key pair is generated, a certificate request is submitted, and a new transport certificate is retrieved. The new transport key pair and certificate have to be included in the KRA configuration to provide support for the second transport key. Once the KRA supports two transport keys, administrators can start transitioning CAs to the new transport key. KRA support for the old transport key can be removed once all CAs are moved to the new transport key. To configure KRA transport key rotation: Generate a new KRA transport key and certificate Transfer the new transport key and certificate to KRA clones Update the CA configuration with the new KRA transport certificate Update the KRA configuration to use only the new transport key and certificate After this, the rotation of KRA transport certificates is complete, and all the affected CAs and KRAs use the new KRA certificate only. For more information on how to perform the above steps, see the procedures below. Generating the new KRA transport key and certificate Request the KRA transport certificate. Stop the KRA: OR if using the Nuxwdog watchdog: Go to the KRA NSS database directory: Create a subdirectory and save all the NSS database files into it. For example: Create a new request by using the PKCS10Client utility. For example: Alternatively, use the certutil utility. For example: Submit the transport certificate request on the Manual Data Recovery Manager Transport Certificate Enrollment page of the CA End-Entity page. Wait for the agent approval of the submitted request to retrieve the certificate by checking the request status on the End-Entity retrieval page. Approve the KRA transport certificate through the CA Agent Services interface. Retrieve the KRA transport certificate. Go to the KRA NSS database directory: Wait for the agent approval of the submitted request to retrieve the certificate by checking the request status on the End-Entity retrieval page. Once the new KRA transport certificate is available, paste its Base64-encoded value into a text file, for example a file named cert-serial_number.txt . Do not include the header ( -----BEGIN CERTIFICATE----- ) or the footer ( -----END CERTIFICATE----- ). Import the KRA transport certificate. Go to the KRA NSS database directory: Import the transport certificate into the KRA NSS database: Update the KRA transport certificate configuration.
Go to the KRA NSS database directory: Verify that the new KRA transport certificate is imported: Open the /var/lib/pki/pki-kra/kra/conf/CS.cfg file and add the following line: Propagating the new transport key and certificate to KRA clones Start the KRA: OR if using the Nuxwdog watchdog: Extract the new transport key and certificate for propagation to clones. Go to the KRA NSS database directory: Stop the KRA: OR if using the Nuxwdog watchdog: Verify that the new KRA transport certificate is present: Export the KRA new transport key and certificate: Verify the exported KRA transport key and certificate: Perform these steps on each KRA clone: Copy the transport.p12 file, including the transport key and certificate, to the KRA clone location. Go to the clone NSS database directory: Stop the KRA clone: OR if using the Nuxwdog watchdog: Check the content of the clone NSS database: Import the new transport key and certificate of the clone: Add the following line to the /var/lib/pki/pki-kra/kra/conf/CS.cfg file on the clone: Start the KRA clone: OR if using the Nuxwdog watchdog: Updating the CA configuration with the new KRA transport certificate Format the new KRA transport certificate for inclusion in the CA. Obtain the cert-serial_number.txt KRA transport certificate file created when retrieving the KRA transport certificate in the procedure. Convert the Base64-encoded certificate included in cert-serial_number.txt to a single-line file: Do the following for the CA and all its clones corresponding to the KRA above: Stop the CA: OR if using the Nuxwdog watchdog: In the /var/lib/pki/pki-ca/ca/conf/CS.cfg file, locate the certificate included in the following line: Replace that certificate with the one contained in cert-one-line-serial_number.txt . Start the CA: OR if using the Nuxwdog watchdog: Note While the CA and all its clones are being updated with the new KRA transport certificate, the CA instances that have completed the transition use the new KRA transport certificate, and the CA instances that have not yet been updated continue to use the old KRA transport certificate. Because the corresponding KRA and its clones have already been updated to use both transport certificates, no downtime occurs. Updating the KRA configuration to use only the new transport key and certificate For the KRA and each of its clones, do the following: Go to the KRA NSS database directory: Stop the KRA: OR if using the Nuxwdog watchdog: Verify that the new KRA transport certificate is imported: Open the /var/lib/pki/pki-kra/kra/conf/CS.cfg file, and look for the nickName value included in the following line: Replace the nickName value with the newNickName value included in the following line: As a result, the CS.cfg file includes this line: Remove the following line from /var/lib/pki/pki-kra/kra/conf/CS.cfg : Start the KRA: OR if using the Nuxwdog watchdog: 2.5. Smart card token management with Certificate System A smart card is a hardware cryptographic device containing cryptographic certificates and keys. It can be employed by the user to participate in operations such as secure website access and secure mail. It can also serve as an authentication device to log in to various operating systems such as Red Hat Enterprise Linux. The management of these cards or tokens throughout their entire lifetime in service is accomplished by the Token Management System (TMS). 
A TMS environment requires a Certificate Authority (CA), Token Key Service (TKS), and Token Processing System (TPS), with an optional Key Recovery Authority (KRA) for server-side key generation and key archival and recovery. Online Certificate Status Protocol (OCSP) can also be used to work with the CA to serve online certificate status requests. This chapter provides an overview of the TKS and TPS systems, which provide the smart card management functions of Red Hat Certificate System, as well as the Enterprise Security Client (ESC), which works with the TMS from the user end. Figure 2.4. How the TMS manages smart cards 2.5.1. Token Key Service (TKS) The Token Key Service (TKS) is responsible for managing one or more master keys. It maintains the master keys and is the only entity within the TMS that has access to the key materials. In an operational environment, each valid smart card token contains a set of symmetric keys that are derived from both the master key and the ID that is unique to the card (CUID). Initially, a default set of symmetric keys (unique only per manufacturer master key) is initialized on each smart card by the manufacturer. This default set should be changed at the deployment site by going through a Key Changeover operation to generate the new master key on the TKS. As the sole owner of the master key, when given the CUID of a smart card, the TKS is capable of deriving the set of symmetric keys residing on that particular smart card, which then allows the TKS to establish a session-based Secure Channel for secure communication between the TMS and each individual smart card. Note Because of the sensitivity of the data that the TKS manages, the TKS should be set up behind a firewall with restricted access. 2.5.1.1. Master keys and key sets The TKS supports multiple smart card key sets. Each smart card vendor creates different default (developer) static key sets for their smart card token stocks, and the TKS is equipped with the static key set (per manufacturer) to kickstart the format process of a blank token. During the format process of a blank smart card token, a Java applet and the uniquely derived symmetric key set are injected into the token. Each master key (in some cases referred to as a keySet ) that the TKS supports must have a set of entries in the TKS configuration file ( CS.cfg ). Each TPS profile contains a configuration to direct its enrollment to the proper TKS keySet for the matching key derivation process, which is essentially responsible for establishing the Secure Channel secured by a set of session-specific keys between the TMS and the smart card token. On the TKS, master keys are defined by named keySets for reference by the TPS. On the TPS, depending on the enrollment type (internal or external registration), the keySet is either specified in the TPS profile or determined by the keySet Mapping Resolver. 2.5.1.2. Key ceremony (shared key transport) A Key Ceremony is a process for transporting highly sensitive keys in a secure way from one location to another. In one scenario, in a highly secure deployment environment, the master key can be generated in a secure vault with no network to the outside. Alternatively, an organization might want to have TKS and TPS instances on different physical machines. In either case, under the assumption that no single person is to be trusted with the key, Red Hat Certificate System TMS provides a utility called tkstool to manage the secure key transportation.
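The tkstool utility is also used for master key management tasks, such as generating a new named master key during the key changeover described in the next section. The following is a minimal sketch only; the instance path, the key nickname, and the option letters are assumptions based on typical usage, and the tkstool(1) man page should be treated as authoritative.
# Assumed sketch: generate a new named master key in the TKS instance NSS database
tkstool -M -n new_master -d /var/lib/pki/pki-tks/alias
The utility prompts for the NSS database password before generating the key; transporting the resulting key material between machines is the Key Ceremony use case described above.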
2.5.1.3. Key update (key changeover) When Global Platform-compliant smart cards are created at the factory, the manufacturer burns a set of default symmetric keys onto the token. The TKS is initially configured to use these symmetric keys (one KeySet entry per vendor in the TKS configuration). However, since these symmetric keys are not unique to individual smart cards from the same stock, and because they are well-known keys, it is strongly encouraged to replace them with a set that is unique per token and not shared by the manufacturer, to restrict the set of entities that can manipulate the token. The changing over of the keys takes place with the assistance of the Token Key Service subsystem. One of the functions of the TKS is to oversee the Master Keys from which the previously discussed smart card token keys are derived. There can be more than one master key residing under the control of the TKS. Important When this key changeover process is done on a token, the token may become unusable in the future, since it no longer has the default key set enabled. The keys are essentially only usable for as long as the TPS and TKS system that provisioned the token remains valid. Because of this, it is essential to keep all the master keys, even if any of them are outdated. You can disable the old master keys in the TKS for better control, but do not delete them unless disabled tokens are part of your plan. There is support to revert the token keys back to the original key set, which is viable if the token is to be reused again in some sort of a testing scenario. 2.5.1.4. APDUs and secure channels The Red Hat Certificate System Token Management System (TMS) supports the GlobalPlatform smart card specification, in which the Secure Channel implementation is done with the Token Key Service (TKS) managing the master key and the Token Processing System (TPS) communicating with the smart card tokens with Application Protocol Data Units (APDUs). There are two types of APDUs: Command APDUs , sent by the TPS to smart cards Response APDUs , sent by smart cards to the TPS in response to command APDUs The initiation of the APDU commands may be triggered when clients take action and connect to the Certificate System server for requests. A secure channel begins with an InitializeUpdate APDU sent from the TPS to the smart card token, and is fully established with the ExternalAuthenticate APDU. At that point, both the token and the TMS have established a set of shared secrets, called session keys, which are used to encrypt and authenticate the communication. This authenticated and encrypted communication channel is called the Secure Channel. Because the TKS is the only entity that has access to the master key capable of deriving the set of unique symmetric on-token smart card keys, the Secure Channel provides adequately safeguarded communication between the TMS and each individual token. Any disconnection of the channel requires the establishment of new session keys for a new channel. 2.5.2. Token Processing System (TPS) The Token Processing System (TPS) is a registration authority for smart card certificate enrollment. It acts as a conduit between the user-centered Enterprise Security Client (ESC), which interacts with client-side smart card tokens, and the Certificate System back end subsystems, such as the Certificate Authority (CA) and the Key Recovery Authority (KRA). In the TMS, the TPS is required in order to manage smart cards, as it is the only TMS entity that understands the APDU commands and responses.
TPS sends commands to the smart cards to help them generate and store keys and certificates for a specific entity, such as a user or device. Smart card operations go through the TPS and are forwarded to the appropriate subsystem for action, such as the CA to generate certificates or the KRA to generate, archive, or recover keys. 2.5.2.1. Coolkey applet Red Hat Certificate System includes the Coolkey Java applet, written specifically to run on TMS-supported smart card tokens. The Coolkey applet connects to a PKCS #11 module that handles the certificate- and key-related operations. During a token format operation, this applet is injected onto the smart card token using the Secure Channel protocol, and can be updated per configuration. 2.5.2.2. Token operations The TPS in Red Hat Certificate System provisions smart cards on behalf of the end users of the smart cards. The Token Processing System provides support for the following major token operations: Token Format - The format operation is responsible for installing the proper Coolkey applet onto the token. The applet provides a platform where subsequent cryptographic keys and certificates can later be placed. Token Enrollment - The enrollment operation results in a smart card populated with the required cryptographic keys and certificates. This material allows the user of the smart card to participate in operations such as secure website access and secure mail. Two types of enrollment are supported, and the type to use is configured globally: Internal Registration - Enrollment by TPS profiles determined by the profile Mapping Resolver . External Registration - Enrollment by TPS profiles determined by the entries in the user's LDAP record. Token PIN Reset - The token PIN reset operation allows the user of the token to specify a new PIN that is used to log into the token, making it available for performing cryptographic operations. The following other operations can be considered supplementary or inherent operations to the main ones listed above. They can be triggered per relevant configuration or by the state of the token. Key Generation - Each PKI certificate corresponds to a public/private key pair. In Red Hat Certificate System, the generation of the keys can be done in two ways, depending on the TPS profile configuration: Token Side Key Generation - The PKI key pairs are generated on the smart card token. Generating the key pairs on the token side does not allow for key archival. Server Side Key Generation - The PKI key pairs are generated on the TMS server side. The key pairs are then sent back to the token using the Secure Channel. Generating the key pairs on the server side allows for key archival. Certificate Renewal - This operation allows a previously enrolled token to have the certificates currently on the token reissued while reusing the same keys. This is useful in situations where the old certificates are due to expire and you want to create new ones but maintain the original key material. Certificate Revocation - Certificate revocation can be triggered based on TPS profile configuration or based on token state. Normally, only the CA which issued a certificate can revoke it, which could mean that retiring a CA would make it impossible to revoke certain certificates. However, it is possible to route revocation requests for tokens to the retired CA while still routing all other requests, such as enrollment, to a new, active CA. This mechanism is called Revocation Routing .
Token Key Changeover - The key changeover operation, triggered by a format operation, results in the ability to change the internal keys of the token from the default developer key set to a new key set controlled by the deployer of the Token Processing System. This is usually done in any real deployment scenario, since the developer key set is better suited to testing situations. Applet Update - During the course of a TMS deployment, the Coolkey smart card applet can be updated or downgraded if required. 2.5.2.3. TPS profiles The Certificate System Token Processing System subsystem facilitates the management of smart card tokens. Tokens are provisioned by the TPS such that they are taken from a blank state to either a Formatted or Enrolled condition. A Formatted token is one that contains the CoolKey applet supported by the TPS, while an Enrolled token is personalized (a process called binding ) to an individual with the requisite certificates and cryptographic keys. This fully provisioned token is ready to use for cryptographic operations. The TPS can also manage Profiles . The notion of a token Profile is related to: The steps taken to Format or Enroll a token. The attributes contained within the finished token after the operation has been successfully completed. The following list contains some of the settings that make up a unique token profile: How does the TPS connect to the user's authentication LDAP database? Will user authentication be required for this token operation? If so, what authentication manager will be used? How does the TPS connect to a Certificate System CA from which it will obtain certificates? How are the private and public keys generated on this token? Are they generated on the token side or on the server side? What key size (in bits) is to be used when generating private and public keys? Which certificate enrollment profile (provisioned by the CA) is to be used to generate the certificates on this token? Note This setting will determine the final structure of the certificates to be written to the token. Different certificates can be created for different uses, based on extensions included in the certificate. For example, one certificate can specialize in data encryption, and another one can be used for signature operations. What version of the Coolkey applet will be required on the token? How many certificates will be placed on this token for an enrollment operation? These settings and many others can be configured for each token type or profile. A full list of available configuration options is available in the Red Hat Certificate System Administration Guide . Another question to consider is how a given token being provisioned by a user will be mapped to an individual token profile. There are two types of registration: Internal Registration - In this case, the TPS profile ( tokenType ) is determined by the profile Mapping Resolver . This filter-based resolver can be configured to take any of the data provided by the token into account and determine the target profile. External Registration - When using external registration, the profile (in name only - actual profiles are still defined in the TPS in the same fashion as those used by the internal registration) is specified in each user's LDAP record, which is obtained during authentication. This allows the TPS to obtain key enrollment and recovery information from an external registration Directory Server where user information is stored.
This gives you the control to override the enrollment, revocation, and recovery policies that are inherent to the TPS internal registration mechanism. The user LDAP record attribute names relevant to external registration are configurable. External registration can be useful when the concept of a "group certificate" is required. In that case, all users within a group can have a special record configured in their LDAP profiles for downloading a shared certificate and keys. The registration type to be used is configured globally per TPS instance. 2.5.2.4. Token database The Token Processing System makes use of the LDAP token database store, which is used to keep a list of active tokens and their respective certificates, and to keep track of the current state of each token. A brand new token is considered Uninitialized , while a fully enrolled token is Enrolled . This data store is constantly updated and consulted by the TPS when processing tokens. 2.5.2.4.1. Token states and transitions The Token Processing System stores states in its internal database in order to determine the current token status as well as the actions which can be performed on the token. 2.5.2.4.1.1. Token states The following table lists all possible token states:
Table 2.9. Possible token states
Name | Code | Label
FORMATTED | 0 | Formatted (uninitialized)
DAMAGED | 1 | Physically damaged
PERM_LOST | 2 | Permanently lost
SUSPENDED | 3 | Suspended (temporarily lost)
ACTIVE | 4 | Active
TERMINATED | 6 | Terminated
UNFORMATTED | 7 | Unformatted
The command line interface displays token states using the Name listed above. The graphical interface uses the Label instead. Note The above table contains no state with code 5 , which previously belonged to a state that was removed. 2.5.2.4.1.2. Token state transitions done using the graphical or command line interface Each token state has a limited number of states it can transition into. For example, a token can change state from FORMATTED to ACTIVE or DAMAGED , but it can never transition from FORMATTED to UNFORMATTED . Furthermore, the list of states a token can transition into is different depending on whether the transition is triggered manually using the command line or the graphical interface, or automatically using a token operation. The list of allowed manual transitions is stored in the tokendb.allowedTransitions property, and the tps.operations.allowedTransitions property controls allowed transitions triggered by token operations. The default configurations for both manual and token operation-based transitions are stored in the /usr/share/pki/tps/conf/CS.cfg configuration file. 2.5.2.4.1.2.1. Token state transitions using the command line or graphical interface All possible transitions allowed in the command line or graphical interface are described in the TPS configuration file using the tokendb.allowedTransitions property: The property contains a comma-separated list of transitions. Each transition is written in the format <current code>:<new code> . The codes are described in Table 2.10, "Possible manual token state transitions" . The default configuration is preserved in /usr/share/pki/tps/conf/CS.cfg . The following table describes each possible transition in more detail:
Table 2.10. Possible manual token state transitions
Transition | Current State | New State | Description
0:1 | FORMATTED | DAMAGED | This token has been physically damaged.
0:2 | FORMATTED | PERM_LOST | This token has been permanently lost.
0:3 | FORMATTED | SUSPENDED | This token has been suspended (temporarily lost).
0:6 | FORMATTED | TERMINATED | This token has been terminated.
3:2 | SUSPENDED | PERM_LOST | This suspended token has been permanently lost.
3:6 | SUSPENDED | TERMINATED | This suspended token has been terminated.
4:1 | ACTIVE | DAMAGED | This token has been physically damaged.
4:2 | ACTIVE | PERM_LOST | This token has been permanently lost.
4:3 | ACTIVE | SUSPENDED | This token has been suspended (temporarily lost).
4:6 | ACTIVE | TERMINATED | This token has been terminated.
6:7 | TERMINATED | UNFORMATTED | Reuse this token.
The following transitions are generated automatically depending on the token's original state. If a token was originally FORMATTED and then became SUSPENDED , it can only return to the FORMATTED state. If a token was originally ACTIVE and then became SUSPENDED , it can only return to the ACTIVE state.
Table 2.11. Token state transitions triggered automatically
Transition | Current State | New State | Description
3:0 | SUSPENDED | FORMATTED | This suspended (temporarily lost) token has been found.
3:4 | SUSPENDED | ACTIVE | This suspended (temporarily lost) token has been found.
2.5.2.4.1.3. Token state transitions using token operations All possible transitions that can be done using token operations are described in the TPS configuration file using the tps.operations.allowedTransitions property: The property contains a comma-separated list of transitions. Each transition is written in the format <current code>:<new code> . The codes are described in Table 2.10, "Possible manual token state transitions" . The default configuration is preserved in /usr/share/pki/tps/conf/CS.cfg . The following table describes each possible transition in more detail:
Table 2.12. Possible token state transitions using token operations
Transition | Current State | New State | Description
0:0 | FORMATTED | FORMATTED | This allows reformatting a token or upgrading applet/key in a token.
0:4 | FORMATTED | ACTIVE | This allows enrolling a token.
4:4 | ACTIVE | ACTIVE | This allows re-enrolling an active token. May be useful for external registration.
4:0 | ACTIVE | FORMATTED | This allows formatting an active token.
7:0 | UNFORMATTED | FORMATTED | This allows formatting a blank or previously used token.
2.5.2.4.1.4. Token state and transition labels The default labels for token states and transitions are stored in the /usr/share/pki/tps/conf/token-states.properties configuration file. By default, the file has the following contents: 2.5.2.4.1.5. Customizing allowed token state transitions To customize the list of token state transitions, edit the following properties in /var/lib/pki/instance_name/tps/conf/CS.cfg : tokendb.allowedTransitions to customize the list of allowed transitions performed using the command line or graphical interface tps.operations.allowedTransitions to customize the list of allowed transitions using token operations Transitions can be removed from the default list if necessary, but new transitions cannot be added unless they were in the default list. The defaults are stored in /usr/share/pki/tps/conf/CS.cfg ; a reconstructed example of both properties is shown after the next section. 2.5.2.4.1.6. Customizing token state and transition labels To customize token state and transition labels, copy the default /usr/share/pki/tps/conf/token-states.properties file into your instance folder ( /var/lib/pki/instance_name/tps/conf/ ), and change the labels listed inside as needed. Changes take effect immediately; the server does not need to be restarted. The TPS user interface may require a reload. To revert to the default state and label names, delete the edited token-states.properties file from your instance folder.
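For reference, the default values of the two transition properties can be reconstructed from Table 2.10 and Table 2.12 above. The following CS.cfg excerpt is a sketch based on those tables, so verify it against the defaults preserved in /usr/share/pki/tps/conf/CS.cfg before editing your instance configuration.
# Manual transitions allowed from the command line or graphical interface (Table 2.10)
tokendb.allowedTransitions=0:1,0:2,0:3,0:6,3:2,3:6,4:1,4:2,4:3,4:6,6:7
# Transitions allowed through token operations (Table 2.12)
tps.operations.allowedTransitions=0:0,0:4,4:4,4:0,7:0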
2.5.2.4.1.7. Token activity log Certain TPS activities are logged. Possible events in the log file are listed in the table below.
Table 2.13. TPS activity log events
Activity | Description
add | A token was added.
format | A token was formatted.
enrollment | A token was enrolled.
recovery | A token was recovered.
renewal | A token was renewed.
pin_reset | A token PIN was reset.
token_status_change | A token status was changed using the command line or graphical interface.
token_modify | A token was modified.
delete | A token was deleted.
cert_revocation | A token certificate was revoked.
cert_unrevocation | A token certificate was unrevoked.
2.5.2.4.2. Token policies In the case of internal registration, each token can be governed by a set of token policies. The default policies are: All TPS operations under internal registration are subject to the policies specified in the token's record. If no policies are specified for a token, the TPS uses the default set of policies. 2.5.2.5. Mapping resolver The Mapping Resolver is an extensible mechanism used by the TPS to determine which token profile to assign to a specific token based on configurable criteria. Each mapping resolver instance can be uniquely defined in the configuration, and each operation can point to any of the defined mapping resolver instances. Note The mapping resolver framework provides a platform for writing custom plugins. However, instructions on how to write a plugin are outside the scope of this document. FilterMappingResolver is the only mapping resolver implementation provided with the TPS by default. It allows you to define a set of mappings and a target result for each mapping. Each mapping contains a set of filters, where: If the input filter parameters pass all filters within a mapping, the target value is assigned. If the input parameters fail a filter, that mapping is skipped and the next one in order is tried. If a filter has no specified value, it always passes. If a filter does have a specified value, then the input parameters must match exactly. The order in which mappings are defined is important. The first mapping which passes is considered resolved and is returned to the caller. The input filter parameters are information received from the smart card token, with or without extensions. They are run against the FilterMappingResolver according to the above rules. The following input filter parameters are supported by FilterMappingResolver : appletMajorVersion - The major version of the Coolkey applet on the token. appletMinorVersion - The minor version of the Coolkey applet on the token. keySet or tokenType : keySet - Can be set as an extension in the client request. It must match the value in the filter if the extension is specified. The keySet mapping resolver is meant for determining the keySet value when using external registration. The Key Set Mapping Resolver is necessary in the external registration environment when multiple key sets are supported (for example, different smart card token vendors). The keySet value is needed for identifying the master key on the TKS, which is crucial for establishing the Secure Channel. When a user's LDAP record is populated with a set tokenType (TPS profile), the system does not yet know which card will end up doing the enrollment, and therefore the keySet cannot be predetermined. The keySetMappingResolver helps solve the issue by allowing the keySet to be resolved before authentication. tokenType - The tokenType can be set as an extension in the client request. It must match the value in the filter if the extension is specified.
tokenType (also referred to as TPS Profile) is determined at this time for the internal registration environment. tokenATR - The token's Answer to Reset (ATR). tokenCUID - "start" and "end" define the range that the Card Unique ID (CUID) of the token must fall in to pass this filter. 2.5.2.6. TPS roles The TPS supports the following roles by default: TPS Administrator - this role is allowed to: Manage TPS tokens View TPS certificates and activities Manage TPS users and groups Change general TPS configuration Manage TPS authenticators and connectors Configure TPS profiles and profile mappings Configure TPS audit logging TPS Agent - this role is allowed to: Configure TPS tokens View TPS certificates and activities Change the status of TPS profiles TPS Operator - this role is allowed to: View TPS tokens, certificates, and activities 2.5.3. TKS/TPS shared secret During TMS installation, a shared symmetric key is established between the Token Key Service and the Token Processing System. The purpose of this key is to wrap and unwrap session keys, which are essential to Secure Channels. Note The shared secret key is currently only kept in a software cryptographic database. There are plans to support keeping the key on Hardware Security Module (HSM) devices in a future release of Red Hat Certificate System. Once this functionality is implemented, you will be instructed to run a Key Ceremony using tkstool to transfer the key to the HSM. 2.5.4. Enterprise Security Client (ESC) The Enterprise Security Client is an HTTP client application, similar to a web browser, that communicates with the TPS and handles smart card tokens from the client side. While an HTTPS connection is established between the ESC and the TPS, an underlying Secure Channel is also established between the token and the TMS within each TLS session. 2.6. Red Hat Certificate System services Certificate System has a number of different features for administrators to use, which make it easier to maintain the individual subsystems and the PKI as a whole. 2.6.1. Notifications When a particular event occurs, such as when a certificate is issued or revoked, a notification can be sent directly to a specified email address. The notification framework comes with default modules that can be enabled and configured. 2.6.2. Jobs Automated jobs run at defined intervals. 2.6.3. Logging The Certificate System and each subsystem produce extensive system and error logs that record system events so that the systems can be monitored and debugged. All log records are stored in the local file system for quick retrieval. Logs are configurable, so logs can be created for specific types of events and at the required logging level. Certificate System allows logs to be signed digitally before archiving them or distributing them for auditing. This feature enables log files to be checked for tampering after being signed. 2.6.4. Auditing The Certificate System maintains audit logs for all events, such as requesting, issuing, and revoking certificates and publishing CRLs. These logs are then signed. This allows unauthorized access or activity to be detected. An outside auditor can then audit the system if required. The assigned auditor user account is the only account which can view the signed audit logs. This user's certificate is used to sign and encrypt the logs. Audit logging is configured to specify the events that are logged.
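As an illustration of how an auditor might check signed audit logs for tampering, the following sketch uses the AuditVerify utility; the database directory, the certificate nickname, and the log list file are placeholders, and the AuditVerify(1) man page should be treated as the authoritative reference for the options.
# logListFile contains a comma-separated, oldest-first list of the signed audit log files to verify
AuditVerify -d ~/.dogtag/nssdb -n "Log Signing Certificate" -a ~/logListFile -v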
Self-tests The Certificate System provides the framework for system self-tests that are automatically run at startup and can be run on demand. A set of configurable self-tests is already included with the Certificate System. See Section 2.3.13, "Self-tests" for more information about self-tests. 2.6.6. Users, authorization, and access controls Certificate System users can be assigned to groups, which are also known as roles, and they then have the privileges of whichever groups they are members of. A user only has privileges for the instance of the subsystem in which the user is created and the privileges of the group of which the user is a member. Authentication is the means that Certificate System subsystems use to verify the identity of clients, whether they are authenticating to a certificate profile or to one of the services interfaces. There are a number of different ways for a client to perform authentication, including simple user name/password, SSL/TLS mutual authentication, LDAP authentication, NIS authentication, or CMC. Authentication can be performed for any access to the subsystem; for certificate enrollments, for example, the profile defines how the requestor authenticates to the CA. Once the client is identified and authenticated, the subsystems perform an authorization check to determine what level of access to the subsystem that particular user is allowed. Authorization is tied to group, or role, permissions, rather than directly to individual users. The Certificate System provides an authorization framework for creating groups and assigning access control to those groups. The default access control on preexisting groups can be modified, and access control can be assigned to individual users and IP addresses. Access points for authorization have been created for the major portions of the system, and access control rules can be set for each point. 2.6.6.1. Default administrative roles Note Red Hat Certificate System uses the words roles and groups interchangeably in the context of the permissions given to users. The Certificate System is configured by default with three user types with different access levels to the system: Administrators , who can perform any administrative or configuration task for a subsystem. Agents , who perform PKI management tasks, like approving certificate requests, managing token enrollments, or recovering keys. Auditors , who can view and configure audit logs. Note By default, for bootstrapping purposes, an administrative user possessing both Administrator and Agent privileges is created during the Red Hat Certificate System instance creation, when running the pkispawn utility. This bootstrap administrator uses the caadmin user name by default, but the name can be overridden by the pki_admin_uid parameter in the configuration file passed to the pkispawn command. The purpose of the bootstrap administrator is to create the first administrator and agent user. This operation requires the administrator privilege to manage users and groups, and the agent privilege to issue certificates. 2.6.6.2. Built-in subsystem trust roles Additionally, when a security domain is created, the CA subsystem which hosts the domain is automatically granted the special role of Security Domain Administrator , which gives the subsystem the ability to manage the security domain and the subsystem instances within it. Other security domain administrator roles can be created for the different subsystem instances. These special roles should not have actual users as members. | [
"dnf install redhat-pki",
"pkispawn",
"mkdir -p /root/pki",
"[DEFAULT] pki_admin_password=<password> pki_client_pkcs12_password=<password> pki_ds_password=<password>",
"pkispawn -s CA -f /root/pki/ca.cfg",
"pkidestroy Subsystem (CA/KRA/OCSP/TKS/TPS) [CA]: Instance [pki-tomcat]: Begin uninstallation (Yes/No/Quit)? Yes Log file: /var/log/pki/pki-ca-destroy.20150928183547.log Loading deployment configuration from /var/lib/pki/pki-tomcat/ca/registry/ca/deployment.cfg. Uninstalling CA from /var/lib/pki/pki-tomcat. rm '/etc/systemd/system/multi-user.target.wants/pki-tomcatd.target' Uninstallation complete.",
"pkidestroy -s CA -i pki-tomcat Log file: /var/log/pki/pki-ca-destroy.20150928183159.log Loading deployment configuration from /var/lib/pki/pki-tomcat/ca/registry/ca/deployment.cfg. Uninstalling CA from /var/lib/pki/pki-tomcat. rm '/etc/systemd/system/multi-user.target.wants/pki-tomcatd.target' Uninstallation complete.",
"systemctl start <unit-file>@instance_name.service",
"systemctl status <unit-file>@instance_name.service",
"systemctl stop <unit-file>@instance_name.service",
"systemctl restart <unit-file>@instance_name.service",
"pki-tomcatd With watchdog disabled pki-tomcatd-nuxwdog With watchdog enabled",
"systemctl disable pki-tomcatd@instance_name.service",
"systemctl enable pki-tomcatd@instance_name.service",
"pki-server [CLI options] <command> [command parameters]",
"pki-server",
"pki-server ca",
"pki-server ca-audit",
"pki-server --help",
"pki-server ca-audit-event-find --help",
"pki-server subsystem-disable -i instance_id subsystem_id",
"pki-server subsystem-enable -i instance_id subsystem_id",
"pki-server subsystem-disable -i pki-tomcat ocsp",
"pki-server subsystem-find -i instance_id",
"pki-server subsystem-find -i instance_id subsystem_id",
"pkidaemon {start|status} instance-type [instance_name]",
"https://server.example.com:8443/ca/services",
"pkidaemon status instance_name",
"https://server.example.com:8443/ca/ee/ca",
"https://192.0.2.1:8443/ca/services https://[2001:DB8::1111]:8443/ca/services",
"pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:admin_port/subsystem_type",
"pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/kra",
"https://192.0.2.1:8443/ca https://[2001:DB8::1111]:8443/ca",
"pki-server stop instance_name",
"systemctl stop pki-tomcatd-nuxwdog@instance_name.service",
"pki-server start instance_name",
"systemctl start pki-tomcatd-nuxwdog@instance_name.service",
"/usr/share/pki/server/conf/catalina.policy /usr/share/tomcat/conf/catalina.policy /var/lib/pki/USDPKI_INSTANCE_NAME/conf/pki.policy /var/lib/pki/USDPKI_INSTANCE_NAME/conf/custom.policy",
"{ \"id\":\"admin\", \"UserID\":\"admin\", \"FullName\":\"Administrator\", \"Email\":\"[email protected]\", }",
"[CA] pki_serial_number_range_start=1 pki_serial_number_range_end=10000000 pki_request_number_range_start=1 pki_request_number_range_end=10000000 pki_replica_number_range_start=1 pki_replica_number_range_end=100",
"[CA] pki_random_serial_numbers_enable=True",
"pki-server subsystem-enable <subsystem>",
"date time [processor] LogLevel: servlet: message",
"{YY-MM-DD} [main] INFO: TKS engine started",
"{YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.requestownerUSD value=KRA-server.example.com-8443",
"{YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.profileapprovedbyUSD value=admin {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.cert_requestUSD value=MIIBozCCAZ8wggEFAgQqTfoHMIHHgAECpQ4wDDEKMAgGA1UEAxMBeKaBnzANBgkqhkiG9w0BAQEFAAOB {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.profileUSD value=true {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.cert_request_typeUSD value=crmf {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.requestversionUSD value=1.0.0 {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.req_localeUSD value=en {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.requestownerUSD value=KRA-server.example.com-8443 {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.dbstatusUSD value=NOT_UPDATED {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.subjectUSD value=uid=jsmith, [email protected] {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.requeststatusUSD value=begin {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.auth_token.userUSD value=uid=KRA-server.example.com-8443,ou=People,dc=example,dc=com {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.req_keyUSD value=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDreuEsBWq9WuZ2MaBwtNYxvkLP^M HcN0cusY7gxLzB+XwQ/VsWEoObGldg6WwJPOcBdvLiKKfC605wFdynbEgKs0fChV^M k9HYDhmJ8hX6+PaquiHJSVNhsv5tOshZkCfMBbyxwrKd8yZ5G5I+2gE9PUznxJaM^M HTmlOqm4HwFxzy0RRQIDAQAB {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.auth_token.authmgrinstnameUSD value=raCertAuth {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.auth_token.uidUSD value=KRA-server.example.com-8443 {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.auth_token.useridUSD value=KRA-server.example.com-8443 {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.requestor_nameUSD value= {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.profileidUSD value=caUserCert {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.auth_token.userdnUSD value=uid=KRA-server.example.com-4747,ou=People,dc=example,dc=com {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.requestidUSD value=20 {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.auth_token.authtimeUSD value=1212782378071 {YY-MM-DD}[http-8443;-Processor24]{LogLevel}: ProfileSubmitServlet: key=USDrequest.req_x509infoUSD value=MIICIKADAgECAgEAMA0GCSqGSIb3DQEBBQUAMEAxHjAcBgNVBAoTFVJlZGJ1ZGNv^M bXB1dGVyIERvbWFpbjEeMBwGA1UEAxMVQ2VydGlmaWNhdGUgQXV0aG9yaXR5MB4X^M DTA4MDYwNjE5NTkzOFoXDTA4MTIwMzE5NTkzOFowOzEhMB8GCSqGSIb3DQEJARYS^M anNtaXRoQGV4YW1wbGUuY29tMRYwFAYKCZImiZPyLGQBARMGanNtaXRoMIGfMA0G^M CSqGSIb3DQEBAQUAA4GNADCBiQKBgQDreuEsBWq9WuZ2MaBwtNYxvkLPHcN0cusY^M 7gxLzB+XwQ/VsWEoObGldg6WwJPOcBdvLiKKfC605wFdynbEgKs0fChVk9HYDhmJ^M 8hX6+PaquiHJSVNhsv5tOshZkCfMBbyxwrKd8yZ5G5I+2gE9PUznxJaMHTmlOqm4^M HwFxzy0RRQIDAQABo4HFMIHCMB8GA1UdIwQYMBaAFG8gWeOJIMt+aO8VuQTMzPBU^M 78k8MEoGCCsGAQUFBwEBBD4wPDA6BggrBgEFBQcwAYYuaHR0cDovL3Rlc3Q0LnJl^M ZGJ1ZGNvbXB1dGVyLmxvY2FsOjkwODAvY2Evb2NzcDAOBgNVHQ8BAf8EBAMCBeAw^M 
HQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMEMCQGA1UdEQQdMBuBGSRyZXF1^M ZXN0LnJlcXVlc3Rvcl9lbWFpbCQ=",
"{YY-MM-DD}[http-11180-Processor25]: OCSPServlet: OCSP Request: {YY-MM-DD}[http-11180-Processor25]{LogLevel}: OCSPServlet: MEUwQwIBADA+MDwwOjAJBgUrDgMCGgUABBSEWjCarLE6/BiSiENSsV9kHjqB3QQU",
"========================================================================== INSTALLATION SUMMARY ========================================================================== Administrator's username: caadmin Administrator's PKCS #12 file: /root/.dogtag/pki-tomcat/ca_admin_cert.p12 Administrator's certificate nickname: caadmin Administrator's certificate database: /root/.dogtag/pki-tomcat/ca/alias To check the status of the subsystem: systemctl status [email protected] To restart the subsystem: systemctl restart [email protected] The URL for the subsystem is: https://localhost.localdomain:8443/ca PKI instances will be enabled upon system boot ==========================================================================",
"journalctl -u pki-tomcatd@instance_name.service",
"journalctl -u pki-tomcatd-nuxwdog@instance_name.service",
"journalctl -f -u pki-tomcatd@instance_name.service",
"journalctl -f -u pki-tomcatd-nuxwdog@instance_name.service",
"servlet=/ca/ee/ca/profileSubmitCMCSimple?profileId=caECSimpleCMCUserCert",
"pki-server stop pki-kra",
"systemctl stop [email protected]",
"cd /etc/pki/pki-kra/alias",
"mkdir nss_db_backup",
"cp *.db nss_db_backup",
"PKCS10Client -p password -d '.' -o 'req.txt' -n 'CN=KRA Transport 2 Certificate,O=example.com Security Domain'",
"certutil -d . -R -k rsa -g 2048 -s 'CN=KRA Transport 2 Certificate,O=example.com Security Domain' -f password-file -a -o transport-certificate-request-file",
"cd /etc/pki/pki-kra/alias",
"cd /etc/pki/pki-kra/alias",
"certutil -d . -A -n 'transportCert-serial_number cert-pki-kra KRA' -t 'u,u,u' -a -i cert-serial_number.txt",
"cd /etc/pki/pki-kra/alias",
"certutil -d . -L",
"certutil -d . -L -n 'transportCert-serial_number cert-pki-kra KRA'",
"kra.transportUnit.newNickName=transportCert-serial_number cert-pki-kra KRA",
"pki-server start pki-kra",
"systemctl start [email protected]",
"cd /etc/pki/pki-kra/alias",
"pki-server stop pki-kra",
"systemctl stop [email protected]",
"certutil -d . -L",
"certutil -d . -L -n 'transportCert-serial_number cert-pki-kra KRA'",
"pk12util -o transport.p12 -d . -n 'transportCert-serial_number cert-pki-kra KRA'",
"pk12util -l transport.p12",
"cd /etc/pki/pki-kra/alias",
"pki-server stop pki-kra",
"systemctl stop [email protected]",
"certutil -d . -L",
"pk12util -i transport.p12 -d .",
"kra.transportUnit.newNickName=transportCert-serial_number cert-pki-kra KRA",
"pki-server start pki-kra",
"systemctl start [email protected]",
"tr -d '\\n' < cert-serial_number.txt > cert-one-line-serial_number.txt",
"pki-server stop pki-ca",
"systemctl stop [email protected]",
"ca.connector.KRA.transportCert=certificate",
"pki-server start pki-ca",
"systemctl start [email protected]",
"cd /etc/pki/pki-kra/alias",
"pki-server stop pki-kra",
"systemctl stop [email protected]",
"certutil -d . -L",
"certutil -d . -L -n 'transportCert-serial_number cert-pki-kra KRA'",
"kra.transportUnit.nickName=transportCert cert-pki-kra KRA",
"kra.transportUnit.newNickName=transportCert-serial_number cert-pki-kra KRA",
"kra.transportUnit.nickName=transportCert-serial_number cert-pki-kra KRA",
"kra.transportUnit.newNickName=transportCert-serial_number cert-pki-kra KRA",
"pki-server start pki-kra",
"systemctl start [email protected]",
"tokendb.allowedTransitions=0:1,0:2,0:3,0:6,3:2,3:6,4:1,4:2,4:3,4:6,6:7",
"tps.operations.allowedTransitions=0:0,0:4,4:4,4:0,7:0",
"Token states UNFORMATTED = Unformatted FORMATTED = Formatted (uninitialized) ACTIVE = Active SUSPENDED = Suspended (temporarily lost) PERM_LOST = Permanently lost DAMAGED = Physically damaged TEMP_LOST_PERM_LOST = Temporarily lost then permanently lost TERMINATED = Terminated Token state transitions FORMATTED.DAMAGED = This token has been physically damaged. FORMATTED.PERM_LOST = This token has been permanently lost. FORMATTED.SUSPENDED = This token has been suspended (temporarily lost). FORMATTED.TERMINATED = This token has been terminated. SUSPENDED.ACTIVE = This suspended (temporarily lost) token has been found. SUSPENDED.PERM_LOST = This suspended (temporarily lost) token has become permanently lost. SUSPENDED.TERMINATED = This suspended (temporarily lost) token has been terminated. SUSPENDED.FORMATTED = This suspended (temporarily lost) token has been found. ACTIVE.DAMAGED = This token has been physically damaged. ACTIVE.PERM_LOST = This token has been permanently lost. ACTIVE.SUSPENDED = This token has been suspended (temporarily lost). ACTIVE.TERMINATED = This token has been terminated. TERMINATED.UNFORMATTED = Reuse this token.",
"RE_ENROLL=YES;RENEW=NO;FORCE_FORMAT=NO;PIN_RESET=NO;RESET_PIN_RESET_TO_NO=NO;RENEW_KEEP_OLD_ENC_CERTS=YES"
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide_common_criteria_edition/introduction_to_rhcs |
Chapter 5. Configuring smart card authentication with the web console for centrally managed users | Chapter 5. Configuring smart card authentication with the web console for centrally managed users You can configure smart card authentication in the RHEL web console for users who are centrally managed by: Identity Management Active Directory which is connected in the cross-forest trust with Identity Management Prerequisites The system for which you want to use the smart card authentication must be a member of an Active Directory or Identity Management domain. For details about joining the RHEL 9 system into a domain using the web console, see Joining a RHEL system to an IdM domain using the web console . The certificate used for the smart card authentication must be associated with a particular user in Identity Management or Active Directory. For more details about associating a certificate with the user in Identity Management, see Adding a certificate to a user entry in the IdM Web UI or Adding a certificate to a user entry in the IdM CLI . 5.1. Smart card authentication for centrally managed users A smart card is a physical device that can provide personal authentication using certificates stored on the card. Personal authentication means that you can use smart cards in the same way as user passwords. You can store user credentials on the smart card in the form of a private key and a certificate. Special software and hardware are used to access them. You insert the smart card into a reader or a USB socket and supply the PIN code for the smart card instead of providing your password. Identity Management (IdM) supports smart card authentication with: User certificates issued by the IdM certificate authority. User certificates issued by the Active Directory Certificate Service (ADCS) certificate authority. Note If you want to start using smart card authentication, see the hardware requirements: Smart Card support in RHEL8+ . 5.2. Installing tools for managing and using smart cards Prerequisites The gnutls-utils package is installed. The opensc package is installed. The pcscd service is running. Before you can configure your smart card, you must install the corresponding tools, which can generate certificates and start the pcscd service. Procedure Install the opensc and gnutls-utils packages: Start the pcscd service. Verification Verify that the pcscd service is up and running. 5.3. Preparing your smart card and uploading your certificates and keys to your smart card Follow this procedure to configure your smart card with the pkcs15-init tool, which helps you to configure: Erasing your smart card Setting new PINs and optional PIN Unblocking Keys (PUKs) Creating a new slot on the smart card Storing the certificate, private key, and public key in the slot If required, locking the smart card settings as certain smart cards require this type of finalization Note The pkcs15-init tool may not work with all smart cards. You must use the tools that work with the smart card you are using. Prerequisites The opensc package, which includes the pkcs15-init tool, is installed. For more details, see Installing tools for managing and using smart cards . The card is inserted in the reader and connected to the computer. You have a private key, a public key, and a certificate to store on the smart card. In this procedure, testuser.key , testuserpublic.key , and testuser.crt are the names used for the private key, public key, and the certificate.
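For illustration only, the following commands generate throwaway files with those names so that you can practice the card-preparation steps. This is a hedged sketch using standard OpenSSL options; the resulting self-signed certificate is not suitable for actual IdM or Active Directory authentication, where the certificate must be issued by the IdM CA or ADCS and associated with your user entry:
openssl req -x509 -newkey rsa:2048 -days 365 -nodes -subj "/CN=testuser" -keyout testuser.key -out testuser.crt
openssl rsa -in testuser.key -pubout -out testuserpublic.key
The first command creates an unencrypted private key and a matching self-signed certificate; the second extracts the public key from the private key.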
You have your current smart card user PIN and Security Officer PIN (SO-PIN). Procedure Erase your smart card and authenticate yourself with your PIN: The card has been erased. Initialize your smart card, set your user PIN and PUK, and your Security Officer PIN and PUK: The pkcs15-init tool creates a new slot on the smart card. Set a label and the authentication ID for the slot: The label is set to a human-readable value, in this case, testuser . The auth-id must be two hexadecimal values, in this case it is set to 01 . Store and label the private key in the new slot on the smart card: Note The value you specify for --id must be the same when storing your private key and storing your certificate in the next step. Specifying your own value for --id is recommended as otherwise a more complicated value is calculated by the tool. Store and label the certificate in the new slot on the smart card: Optional: Store and label the public key in the new slot on the smart card: Note If the public key corresponds to a private key or certificate, specify the same ID as the ID of the private key or certificate. Optional: Certain smart cards require you to finalize the card by locking the settings: At this stage, your smart card includes the certificate, private key, and public key in the newly created slot. You have also created your user PIN and PUK and the Security Officer PIN and PUK. 5.4. Enabling smart card authentication for the web console To use smart card authentication in the web console, enable this authentication method in the cockpit.conf file. Additionally, you can disable password authentication in the same file. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Click Terminal . In the /etc/cockpit/cockpit.conf , set the ClientCertAuthentication to yes : Optional: Disable password-based authentication in cockpit.conf with: This configuration disables password authentication and you must always use the smart card. Restart the web console to ensure that the cockpit.service accepts the change: 5.5. Logging in to the web console with smart cards You can use smart cards to log in to the web console. Prerequisites A valid certificate stored in your smart card that is associated with a user account created in an Active Directory or Identity Management domain. PIN to unlock the smart card. The smart card has been put into the reader. You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . The browser asks you to add the PIN protecting the certificate stored on the smart card. In the Password Required dialog box, enter the PIN and click OK . In the User Identification Request dialog box, select the certificate stored in the smart card. Select Remember this decision . The system does not open this window next time. Note This step does not apply to Google Chrome users. Click OK . You are now connected and the web console displays its content. 5.6. Enabling passwordless sudo authentication for smart card users You can configure passwordless authentication to sudo and other services for smart card users in the web console.
As an alternative, if you use Red Hat Enterprise Linux Identity Management, you can declare the initial web console certificate authentication as trusted for authenticating to sudo , SSH, or other services. For that purpose, the web console automatically creates an S4U2Proxy Kerberos ticket in the user session. Prerequisites Identity Management installed. Active Directory connected in the cross-forest trust with Identity Management. Smart card set up to log in to the web console. See Configuring smart card authentication with the web console for centrally managed users for more information. Procedure Set up constraint delegation rules to list which hosts the ticket can access. Example 5.1. Setting up constraint delegation rules The web console session runs on host host.example.com and should be trusted to access its own host with sudo . Additionally, we are adding a second trusted host - remote.example.com . Create the following delegation: Run the following commands to add a list of target machines that a particular rule can access: To allow the web console sessions (HTTP/principal) to access that host list, use the following commands: Enable GSS authentication in the corresponding services: For sudo, enable the pam_sss_gss module in the /etc/sssd/sssd.conf file: As root, add an entry for your domain to the /etc/sssd/sssd.conf configuration file. Enable the module in the /etc/pam.d/sudo file on the first line. For SSH, update the GSSAPIAuthentication option in the /etc/ssh/sshd_config file to yes . Warning The delegated S4U ticket is not forwarded to remote SSH hosts when connecting to them from the web console. Authenticating to sudo on a remote host with your ticket will not work. Verification Log in to the web console using a smart card. Click the Limited access button. Authenticate using your smart card. Alternatively: Try to connect to a different host with SSH. 5.7. Limiting user sessions and memory to prevent a DoS attack Certificate authentication is protected by separating and isolating instances of the cockpit-ws web server against attackers who want to impersonate another user. However, this introduces a potential denial of service (DoS) attack: A remote attacker could create a large number of certificates and send a large number of HTTPS requests to cockpit-ws , each using a different certificate. To prevent such DoS attacks, the collective resources of these web server instances are limited. By default, limits for the number of connections and memory usage are set to 200 threads and a 75% (soft) or 90% (hard) memory limit. The example procedure demonstrates resource protection by limiting the number of connections and memory. Procedure In the terminal, open the system-cockpithttps.slice configuration file: Limit the TasksMax to 100 and CPUQuota to 30% : To apply the changes, restart the system: Now, the new memory and user session limits lower the risk of DoS attacks on the cockpit-ws web server. 5.8. Additional resources Configuring Identity Management for smart card authentication . Configuring certificates issued by ADCS for smart card authentication in IdM . Configuring and importing local certificates to a smart card . | [
"dnf -y install opensc gnutls-utils",
"systemctl start pcscd",
"systemctl status pcscd",
"pkcs15-init --erase-card --use-default-transport-keys Using reader with a card: Reader name PIN [Security Officer PIN] required. Please enter PIN [Security Officer PIN]:",
"pkcs15-init --create-pkcs15 --use-default-transport-keys --pin 963214 --puk 321478 --so-pin 65498714 --so-puk 784123 Using reader with a card: Reader name",
"pkcs15-init --store-pin --label testuser --auth-id 01 --so-pin 65498714 --pin 963214 --puk 321478 Using reader with a card: Reader name",
"pkcs15-init --store-private-key testuser.key --label testuser_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name",
"pkcs15-init --store-certificate testuser.crt --label testuser_crt --auth-id 01 --id 01 --format pem --pin 963214 Using reader with a card: Reader name",
"pkcs15-init --store-public-key testuserpublic.key --label testuserpublic_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name",
"pkcs15-init -F",
"[WebService] ClientCertAuthentication = yes",
"[Basic] action = none",
"systemctl restart cockpit",
"ipa servicedelegationtarget-add cockpit-target ipa servicedelegationtarget-add-member cockpit-target --principals=host/[email protected] --principals=host/[email protected]",
"ipa servicedelegationrule-add cockpit-delegation ipa servicedelegationrule-add-member cockpit-delegation --principals=HTTP/[email protected] ipa servicedelegationrule-add-target cockpit-delegation --servicedelegationtargets=cockpit-target",
"[domain/example.com] pam_gssapi_services = sudo, sudo-i",
"auth sufficient pam_sss_gss.so",
"systemctl edit system-cockpithttps.slice",
"change existing value TasksMax= 100 add new restriction CPUQuota= 30%",
"systemctl daemon-reload systemctl stop cockpit"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_smart_card_authentication/configuring-smart-card-authentication-with-the-web-console_managing-smart-card-authentication |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_the_shared_file_systems_service_with_cephfs_through_nfs/proc_providing-feedback-on-red-hat-documentation |
Chapter 1. OpenShift Dedicated storage overview | Chapter 1. OpenShift Dedicated storage overview OpenShift Dedicated supports multiple types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Dedicated cluster. 1.1. Glossary of common terms for OpenShift Dedicated storage This glossary defines common terms that are used in the storage content. Access modes Volume access modes describe volume capabilities. You can use access modes to match persistent volume claim (PVC) and persistent volume (PV). The following are the examples of access modes: ReadWriteOnce (RWO) ReadOnlyMany (ROX) ReadWriteMany (RWX) ReadWriteOncePod (RWOP) Config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. Container Storage Interface (CSI) An API specification for the management of container storage across different container orchestration (CO) systems. Dynamic Provisioning The framework allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision persistent storage. Ephemeral storage Pods and containers can require temporary or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods. fsGroup The fsGroup defines a file system group ID of a pod. hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. KMS key The Key Management Service (KMS) helps you achieve the required level of encryption of your data across different services. you can use the KMS key to encrypt, decrypt, and re-encrypt data. Local volumes A local volume represents a mounted local storage device such as a disk, partition or directory. OpenShift Data Foundation A provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds Persistent storage Pods and containers can require permanent storage for their operation. OpenShift Dedicated uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volumes (PV) OpenShift Dedicated uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volume claims (PVCs) You can use a PVC to mount a PersistentVolume into a Pod. You can access the storage without knowing the details of the cloud environment. Pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Dedicated cluster. A pod is the smallest compute unit defined, deployed, and managed. Reclaim policy A policy that tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain , Recycle , or Delete . 
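To make the persistent volume terms above concrete, the following is a minimal PersistentVolumeClaim manifest; the claim name, requested size, and storage class are placeholder assumptions that must be adapted to the storage classes available in your cluster:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc                        # placeholder name
spec:
  accessModes:
    - ReadWriteOnce                        # one of the access modes listed above
  resources:
    requests:
      storage: 10Gi                        # requested capacity
  storageClassName: example-storage-class  # assumption: an existing storage class
A pod can then consume the claim by referencing example-pvc in a persistentVolumeClaim volume source, without needing any knowledge of the underlying storage infrastructure.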
Role-based access control (RBAC) Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. Stateless applications A stateless application is an application program that does not save client data generated in one session for use in the session with that client. Stateful applications A stateful application is an application program that saves data to persistent disk storage. A server, client, and applications can use a persistent disk storage. You can use the Statefulset object in OpenShift Dedicated to manage the deployment and scaling of a set of Pods, and provides guarantee about the ordering and uniqueness of these Pods. Static provisioning A cluster administrator creates a number of PVs. PVs contain the details of storage. PVs exist in the Kubernetes API and are available for consumption. Storage OpenShift Dedicated supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Dedicated cluster. Storage class A storage class provides a way for administrators to describe the classes of storage they offer. Different classes might map to quality of service levels, backup policies, arbitrary policies determined by the cluster administrators. 1.2. Storage types OpenShift Dedicated storage is broadly classified into two categories, namely ephemeral storage and persistent storage. 1.2.1. Ephemeral storage Pods and containers are ephemeral or transient in nature and designed for stateless applications. Ephemeral storage allows administrators and developers to better manage the local storage for some of their operations. For more information about ephemeral storage overview, types, and management, see Understanding ephemeral storage . 1.2.2. Persistent storage Stateful applications deployed in containers require persistent storage. OpenShift Dedicated uses a pre-provisioned storage framework called persistent volumes (PV) to allow cluster administrators to provision persistent storage. The data inside these volumes can exist beyond the lifecycle of an individual pod. Developers can use persistent volume claims (PVCs) to request storage requirements. For more information about persistent storage overview, configuration, and lifecycle, see Understanding persistent storage . 1.3. Container Storage Interface (CSI) CSI is an API specification for the management of container storage across different container orchestration (CO) systems. You can manage the storage volumes within the container native environments, without having specific knowledge of the underlying storage infrastructure. With the CSI, storage works uniformly across different container orchestration systems, regardless of the storage vendors you are using. For more information about CSI, see Using Container Storage Interface (CSI) . 1.4. Dynamic Provisioning Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. For more information about dynamic provisioning, see Dynamic provisioning . | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/storage/storage-overview |
4. Known Issues | 4. Known Issues 4.1. Installer In some circumstances, disks that contain a whole disk format (e.g. a LVM Physical Volume populating a whole disk) are not cleared correctly using the clearpart --initlabel kickstart command. Adding the --all switch - as in clearpart --initlabel --all - ensures disks are cleared correctly. During the installation on POWER systems, the error messages similar to: may be returned to sys.log . The errors do not prevent installation and only occur during initial setup. The filesystem created by the installer will function correctly. When installing on the IBM System z architecture, if the installation is being performed over SSH, avoid resizing the terminal window containing the SSH session. If the terminal window is resized during installation, the installer will exit and installation will terminate. The kernel image provided on the CD/DVD is too large for Open Firmware. Consequently, on the POWER architecture, directly booting the kernel image over a network from the CD/DVD is not possible. Instead, use yaboot to boot from a network. The anaconda partition editing interface includes a button labeled Resize . This feature is intended for users wishing to shrink an existing filesystem and underlying volume to make room for installation of the new system. Users performing manual partitioning cannot use the Resize button to change sizes of partitions as they create them. If you determine a partition needs to be larger than you initially created it, you must delete the first one in the partitioning editor and create a new one with the larger size. Channel IDs(read, write, data) for network devices are required for defining and configuring network devices on s390 systems. However, system-config-kickstart - the graphical user interface for generating a kickstart configuration - cannot define channel IDs for a network device. To work around this issue, manually edit the kickstart configuration that system-config-kickstart generates to include the desired network devices. | [
"attempt to access beyond end of device loop0: rw=0, want=248626, limit=248624"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/ar01s04 |
Release notes for Red Hat Developer Hub 1.2 | Release notes for Red Hat Developer Hub 1.2 Red Hat Developer Hub 1.2 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/release_notes_for_red_hat_developer_hub_1.2/index |
probe::socket.close.return | probe::socket.close.return Name probe::socket.close.return - Return from closing a socket Synopsis socket.close.return Values name Name of this probe Context The requester (user process or kernel) Description Fires at the conclusion of closing a socket. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-socket-close-return |
Chapter 1. About OpenShift Virtualization | Chapter 1. About OpenShift Virtualization Learn about OpenShift Virtualization's capabilities and support scope. 1.1. What you can do with OpenShift Virtualization OpenShift Virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads. OpenShift Virtualization adds new objects into your OpenShift Container Platform cluster by using Kubernetes custom resources to enable virtualization tasks. These tasks include: Creating and managing Linux and Windows virtual machines (VMs) Running pod and VM workloads alongside each other in a cluster Connecting to virtual machines through a variety of consoles and CLI tools Importing and cloning existing virtual machines Managing network interface controllers and storage disks attached to virtual machines Live migrating virtual machines between nodes An enhanced web console provides a graphical portal to manage these virtualized resources alongside the OpenShift Container Platform cluster containers and infrastructure. OpenShift Virtualization is designed and tested to work well with Red Hat OpenShift Data Foundation features. Important When you deploy OpenShift Virtualization with OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details. You can use OpenShift Virtualization with the OVN-Kubernetes , OpenShift SDN , or one of the other certified network plugins listed in Certified OpenShift CNI Plug-ins . You can check your OpenShift Virtualization cluster for compliance issues by installing the Compliance Operator and running a scan with the ocp4-moderate and ocp4-moderate-node profiles . The Compliance Operator uses OpenSCAP, a NIST-certified tool , to scan and enforce security policies. 1.1.1. OpenShift Virtualization supported cluster version OpenShift Virtualization 4.13 is supported for use on OpenShift Container Platform 4.13 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform. 1.2. About storage volumes for virtual machine disks If you use the storage API with known storage providers, volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must select the volume and access mode. For best results, use accessMode: ReadWriteMany and volumeMode: Block . This is important for the following reasons: The ReadWriteMany (RWX) access mode is required for live migration. The Block volume mode performs significantly better in comparison to the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage. For example, if you use Red Hat OpenShift Data Foundation, Ceph RBD volumes are preferable to CephFS volumes. Important You cannot live migrate virtual machines that use: A storage volume with ReadWriteOnce (RWO) access mode Passthrough features such as GPUs Do not set the evictionStrategy field to LiveMigrate for these virtual machines. 1.3. Single-node OpenShift differences You can install OpenShift Virtualization on single-node OpenShift. 
However, you should be aware that Single-node OpenShift does not support the following features: High availability Pod disruption Live migration Virtual machines or templates that have an eviction strategy configured 1.4. Additional resources Glossary of common terms for OpenShift Container Platform storage About single-node OpenShift Assisted installer Hostpath Provisioner (HPP) OpenShift Container Platform Data Foundation Logical Volume Manager Operator Pod disruption budgets Live migration Eviction strategy Tuning & Scaling Guide Supported limits for OpenShift Virtualization 4.x | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/virtualization/about-virt |
Chapter 52. Updated Drivers | Chapter 52. Updated Drivers Storage Driver Updates The aacraid driver has been updated to version 1.2.1[50792]-custom. The lpfc driver has been updated to version 0:11.2.0.6. The vmw_pvscsi driver has been updated to version 1.0.7.0-k. The megaraid_sas driver has been updated to version 07.701.17.00-rh1. The bfa driver has been updated to version 3.2.25.1. The hpsa driver has been updated to version 3.4.18-0-RH1. The be2iscsi driver has been updated to version 11.2.1.0. The qla2xxx driver has been updated to version 8.07.00.38.07.4-k1. The mpt2sas driver has been updated to version 20.103.00.00. The mpt3sas driver has been updated to version 15.100.00.00. Network Driver Updates The ntb driver has been updated to version 1.0. The igbvf driver has been updated to version 2.4.0-k. The igb driver has been updated to version 5.4.0-k. The ixgbevf driver has been updated to version 3.2.2-k-rh7.4. The i40e driver has been updated to version 1.6.27-k. The fm10k driver has been updated to version 0.21.2-k. The i40evf driver has been updated to version 1.6.27-k. The ixgbe driver has been updated to version 4.4.0-k-rh7.4. The be2net driver has been updated to version 11.1.0.0r. The qede driver has been updated to version 8.10.10.21. The qlge driver has been updated to version 1.00.00.35. The qed driver has been updated to version 8.10.10.21. The bna driver has been updated to version 3.2.25.1r. The bnxt driver has been updated to version 1.7.0. The enic driver has been updated to version 2.3.0.31. The fjes driver has been updated to version 1.2. The hpwdt driver has been updated to version 1.4.02. Graphics Driver and Miscellaneous Driver Updates The vmwgfx driver has been updated to version 2.12.0.0. The hpilo driver has been updated to version 1.5.0. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/updated_drivers |
Chapter 1. Support policy for Eclipse Temurin | Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not support RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.352_release_notes/openjdk8-temurin-support-policy |
4.5. Configuring Storage-Based Fence Devices with unfencing | 4.5. Configuring Storage-Based Fence Devices with unfencing When creating a SAN/storage fence device (that is, one that uses a non-power based fencing agent), you must set the meta option provides=unfencing when creating the stonith device. This ensures that a fenced node is unfenced before the node is rebooted and the cluster services are started on the node. Setting the provides=unfencing meta option is not necessary when configuring a power-based fence device, since the device itself is providing power to the node in order for it to boot (and attempt to rejoin the cluster). The act of booting in this case implies that unfencing occurred. The following command configures a stonith device named my-scsi-shooter that uses the fence_scsi fence agent, enabling unfencing for the device. | [
"pcs stonith create my-scsi-shooter fence_scsi devices=/dev/sda meta provides=unfencing"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-unfence-HAAR |
Chapter 2. Authentication [operator.openshift.io/v1] | Chapter 2. Authentication [operator.openshift.io/v1] Description Authentication provides information to configure an operator to manage authentication. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 2.1.1. .spec Description Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 2.1.2. .status Description Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. oauthAPIServer object OAuthAPIServer holds status specific only to oauth-apiserver observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 2.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 2.1.4. 
.status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required type Property Type Description lastTransitionTime string message string reason string status string type string 2.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 2.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 2.1.7. .status.oauthAPIServer Description OAuthAPIServer holds status specific only to oauth-apiserver Type object Property Type Description latestAvailableRevision integer LatestAvailableRevision is the latest revision used as suffix of revisioned secrets like encryption-config. A new revision causes a new deployment of pods. 2.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/authentications DELETE : delete collection of Authentication GET : list objects of kind Authentication POST : create an Authentication /apis/operator.openshift.io/v1/authentications/{name} DELETE : delete an Authentication GET : read the specified Authentication PATCH : partially update the specified Authentication PUT : replace the specified Authentication /apis/operator.openshift.io/v1/authentications/{name}/status GET : read status of the specified Authentication PATCH : partially update status of the specified Authentication PUT : replace status of the specified Authentication 2.2.1. /apis/operator.openshift.io/v1/authentications HTTP method DELETE Description delete collection of Authentication Table 2.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Authentication Table 2.2. HTTP responses HTTP code Reponse body 200 - OK AuthenticationList schema 401 - Unauthorized Empty HTTP method POST Description create an Authentication Table 2.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.4. Body parameters Parameter Type Description body Authentication schema Table 2.5. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 202 - Accepted Authentication schema 401 - Unauthorized Empty 2.2.2. /apis/operator.openshift.io/v1/authentications/{name} Table 2.6. Global path parameters Parameter Type Description name string name of the Authentication HTTP method DELETE Description delete an Authentication Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Authentication Table 2.9. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Authentication Table 2.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Authentication Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. Body parameters Parameter Type Description body Authentication schema Table 2.14. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty 2.2.3. /apis/operator.openshift.io/v1/authentications/{name}/status Table 2.15. Global path parameters Parameter Type Description name string name of the Authentication HTTP method GET Description read status of the specified Authentication Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Authentication Table 2.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Authentication Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body Authentication schema Table 2.21. HTTP responses HTTP code Response body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operator_apis/authentication-operator-openshift-io-v1 |
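The endpoints above can also be exercised through the oc client instead of raw HTTP. A minimal sketch follows; the singleton resource name cluster and the spec.logLevel field used in the patch are illustrative assumptions, not values taken from this reference.
# Read the Authentication resource, including its status (GET endpoints above)
oc get authentication.operator.openshift.io cluster -o yaml
# Partially update the resource (PATCH /apis/operator.openshift.io/v1/authentications/{name})
oc patch authentication.operator.openshift.io cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'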
Chapter 1. Backup and restore | Chapter 1. Backup and restore 1.1. Control plane backup and restore operations As a cluster administrator, you might need to stop an OpenShift Container Platform cluster for a period and restart it later. Some reasons for restarting a cluster are that you need to perform maintenance on a cluster or want to reduce resource costs. In OpenShift Container Platform, you can perform a graceful shutdown of a cluster so that you can easily restart the cluster later. You must back up etcd data before shutting down a cluster; etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects. An etcd backup plays a crucial role in disaster recovery. In OpenShift Container Platform, you can also replace an unhealthy etcd member . When you want to get your cluster running again, restart the cluster gracefully . Note A cluster's certificates expire one year after the installation date. You can shut down a cluster and expect it to restart gracefully while the certificates are still valid. Although the cluster automatically retrieves the expired control plane certificates, you must still approve the certificate signing requests (CSRs) . You might run into several situations where OpenShift Container Platform does not work as expected, such as: You have a cluster that is not functional after the restart because of unexpected conditions, such as node failure, or network connectivity issues. You have deleted something critical in the cluster by mistake. You have lost the majority of your control plane hosts, leading to etcd quorum loss. You can always recover from a disaster situation by restoring your cluster to its previous state using the saved etcd snapshots. 1.2. Application backup and restore operations As a cluster administrator, you can back up and restore applications running on OpenShift Container Platform by using the OpenShift API for Data Protection (OADP). OADP backs up and restores Kubernetes resources and internal images, at the granularity of a namespace, by using the version of Velero that is appropriate for the version of OADP you install, according to the table in Downloading the Velero CLI tool . OADP backs up and restores persistent volumes (PVs) by using snapshots or Restic. For details, see OADP features . 1.2.1. OADP requirements OADP has the following requirements: You must be logged in as a user with a cluster-admin role. You must have object storage for storing backups, such as one of the following storage types: OpenShift Data Foundation Amazon Web Services Microsoft Azure Google Cloud Platform S3-compatible object storage Important The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
To back up PVs with snapshots, you must have cloud storage that has a native snapshot API or supports Container Storage Interface (CSI) snapshots, such as the following providers: Amazon Web Services Microsoft Azure Google Cloud Platform CSI snapshot-enabled cloud storage, such as Ceph RBD or Ceph FS Note If you do not want to back up PVs by using snapshots, you can use Restic , which is installed by the OADP Operator by default. 1.2.2. Backing up and restoring applications You back up applications by creating a Backup custom resource (CR). See Creating a Backup CR . You can configure the following backup options: Backup hooks to run commands before or after the backup operation Scheduled backups Restic backups You restore application backups by creating a Restore custom resource (CR). See Creating a Restore CR . You can configure restore hooks to run commands in init containers or in the application container during the restore operation. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/backup_and_restore/backup-restore-overview |
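As a rough sketch of the Backup and Restore CRs described above, the following assumes the OADP Operator is installed in the openshift-adp namespace and that a backup storage location named default already exists; the resource names and the example-app namespace are placeholders, not values from this guide.
# Hypothetical minimal Backup CR for one application namespace
cat <<'EOF' | oc create -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - example-app
  storageLocation: default
EOF
# Later, restore from that backup with a matching Restore CR
cat <<'EOF' | oc create -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: example-restore
  namespace: openshift-adp
spec:
  backupName: example-backup
EOF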
Chapter 18. Using the validation framework | Chapter 18. Using the validation framework Red Hat OpenStack Platform (RHOSP) includes a validation framework that you can use to verify the requirements and functionality of the undercloud and overcloud. The framework includes two types of validations: Manual Ansible-based validations, which you execute through the validation command set. Automatic in-flight validations, which execute during the deployment process. You must understand which validations you want to run, and skip validations that are not relevant to your environment. For example, the pre-deployment validation includes a test for TLS-everywhere. If you do not intend to configure your environment for TLS-everywhere, this test fails. Use the --validation option in the validation run command to refine the validation according to your environment. 18.1. Ansible-based validations During the installation of Red Hat OpenStack Platform (RHOSP) director, director also installs a set of playbooks from the openstack-tripleo-validations package. Each playbook contains tests for certain system requirements and a set of groups that define when to run the test: no-op Validations that run a no-op (no operation) task to verify that the workflow functions correctly. These validations run on both the undercloud and overcloud. prep Validations that check the hardware configuration of the undercloud node. Run these validations before you run the openstack undercloud install command. openshift-on-openstack Validations that check that the environment meets the requirements to be able to deploy OpenShift on OpenStack. pre-introspection Validations to run before the nodes introspection using Ironic Inspector. pre-deployment Validations to run before the openstack overcloud deploy command. post-deployment Validations to run after the overcloud deployment has finished. pre-upgrade Validations to validate your RHOSP deployment before an upgrade. post-upgrade Validations to validate your RHOSP deployment after an upgrade. 18.2. Changing the validation configuration file The validation configuration file is a .ini file that you can edit to control every aspect of the validation execution and the communication between remote machines. You can change the default configuration values in one of the following ways: Edit the default /etc/validation.cfg file. Make your own copy of the default /etc/validation.cfg file, edit the copy, and provide it through the CLI with the --config argument. If you create your own copy of the configuration file, point the CLI to this file on each execution with --config . By default, the location of the validation configuration file is /etc/validation.cfg . Important Ensure that you correctly edit the configuration file or your validation might fail with errors, for example: undetected validations callbacks written to different locations incorrectly-parsed logs Prerequisites You have a thorough understanding of how to validate your environment. Procedure Optional: Make a copy of the validation configuration file for editing: Copy /etc/validation.cfg to your home directory. Make the required edits to the new configuration file. Run the validation command: Replace <configuration-file> with the file path to the configuration file that you want to use. Note When you run a validation, the Reasons column in the output is limited to 79 characters. To view the validation result in full, view the validation log files. 18.3.
Listing validations Run the validation list command to list the different types of validations available. Procedure Source the stackrc file. Run the validation list command: To list all validations, run the command without any options: To list validations in a group, run the command with the --group option: Note For a full list of options, run validation list --help . 18.4. Running validations To run a validation or validation group, use the validation run command. To see a full list of options, use the validation run --help command. Note When you run a validation, the Reasons column in the output is limited to 79 characters. To view the validation result in full, view the validation log files. Procedure Source the stackrc file: Validate a static inventory file called tripleo-ansible-inventory.yaml . Note You can find the inventory file in the ~/tripleo-deploy/<stack> directory for a standalone or undercloud deployment or in the ~/overcloud-deploy/<stack> directory for an overcloud deployment. Enter the validation run command: To run a single validation, enter the command with the --validation option and the name of the validation. For example, to check the memory requirements of each node, enter --validation check-ram : To run multiple specific validations, use the --validation option with a comma-separated list of the validations that you want to run. For more information about viewing the list of available validations, see Listing validations . To run all validations in a group, enter the command with the --group option: To view detailed output from a specific validation, run the validation history get --full command against the UUID of the specific validation from the report: 18.5. Creating a validation You can create a validation with the validation init command. Execution of the command results in a basic template for a new validation. You can edit the new validation role to suit your requirements. Important Red Hat does not support user-created validations. Prerequisites You have a thorough understanding of how to validate your environment. You have access rights to the directory where you run the command. Procedure Create your validation: Replace <my-new-validation> with the name of your new validation. The execution of this command results in the creation of the following directory and sub-directories: Note If you see the error message "The Community Validations are disabled by default", ensure that the enable_community_validations parameter is set to True in the validation configuration file. The default name and location of this file is /etc/validation.cfg . Edit the role to suit your requirements. Additional resources Section 18.2, "Changing the validation configuration file" . 18.6. Viewing validation history Director saves the results of each validation after you run a validation or group of validations. View past validation results with the validation history list command. Prerequisites You have run a validation or group of validations. Procedure Log in to the undercloud host as the stack user. Source the stackrc file: You can view a list of all validations or the most recent validations: View a list of all validations: View history for a specific validation type by using the --validation option: Replace <validation-type> with the type of validation, for example, ntp. View the log for a specific validation UUID: Additional resources Using the validation framework 18.7.
Validation framework log format After you run a validation or group of validations, director saves a JSON-formatted log from each validation in the /var/logs/validations directory. You can view the file manually or use the validation history get --full command to display the log for a specific validation UUID. Each validation log file follows a specific format: <UUID>_<Name>_<Time> UUID The Ansible UUID for the validation. Name The Ansible name for the validation. Time The start date and time for when you ran the validation. Each validation log contains three main parts: plays stats validation_output plays The plays section contains information about the tasks that the director performed as part of the validation: play A play is a group of tasks. Each play section contains information about that particular group of tasks, including the start and end times, the duration, the host groups for the play, and the validation ID and path. tasks The individual Ansible tasks that director runs to perform the validation. Each tasks section contains a hosts section, which contains the action that occurred on each individual host and the results from the execution of the actions. The tasks section also contains a task section, which contains the duration of the task. stats The stats section contains a basic summary of the outcome of all tasks on each host, such as the tasks that succeeded and failed. validation_output If any tasks failed or caused a warning message during a validation, the validation_output contains the output of that failure or warning. 18.8. Validation framework log output formats The default behaviour of the validation framework is to save validation logs in JSON format. You can change the output of the logs with the ANSIBLE_STDOUT_CALLBACK environment variable. To change the validation output log format, run a validation and include the --extra-env-vars ANSIBLE_STDOUT_CALLBACK=<callback> option: Replace <callback> with an Ansible output callback. To view a list of the standard Ansible output callbacks, run the following command: The validation framework includes the following additional callbacks: validation_json The framework saves JSON-formatted validation results as a log file in /var/logs/validations . This is the default callback for the validation framework. validation_stdout The framework displays JSON-formatted validation results on screen. http_json The framework sends JSON-formatted validation results to an external logging server. You must also include additional environment variables for this callback: HTTP_JSON_SERVER The URL for the external server. HTTP_JSON_PORT The port for the API entry point of the external server. The default port is 8989. Set these environment variables with additional --extra-env-vars options: Important Before you use the http_json callback, you must add http_json to the callback_whitelist parameter in your ansible.cfg file: 18.9. In-flight validations Red Hat OpenStack Platform (RHOSP) includes in-flight validations in the templates of composable services. In-flight validations verify the operational status of services at key steps of the overcloud deployment process. In-flight validations run automatically as part of the deployment process. Some in-flight validations also use the roles from the openstack-tripleo-validations package. | [
"validation run --config <configuration-file>",
"source ~/stackrc",
"validation list",
"validation list --group prep",
"source ~/stackrc",
"validation run --group pre-introspection -i tripleo-ansible-inventory.yaml",
"validation run --validation check-ram",
"validation run --group prep",
"validation history get --full <UUID>",
"validation init <my-new-validation>",
"/home/stack/community-validations ├── library ├── lookup_plugins ├── playbooks └── roles",
"source ~/stackrc",
"validation history list",
"validation history get --validation <validation-type>",
"validation show run --full 7380fed4-2ea1-44a1-ab71-aab561b44395",
"validation run --extra-env-vars ANSIBLE_STDOUT_CALLBACK=<callback> --validation check-ram",
"ansible-doc -t callback -l",
"validation run --extra-env-vars ANSIBLE_STDOUT_CALLBACK=http_json --extra-env-vars HTTP_JSON_SERVER=http://logserver.example.com --extra-env-vars HTTP_JSON_PORT=8989 --validation check-ram",
"callback_whitelist = http_json"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/assembly_using-the-validation-framework |
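A short sketch tying the configuration-file and logging sections of this chapter together; the copied file name and the check-ram validation are examples only, and the log path simply restates the location given above.
# Run a validation against a private copy of the configuration file
cp /etc/validation.cfg ~/my-validation.cfg
source ~/stackrc
validation run --config ~/my-validation.cfg --validation check-ram -i tripleo-ansible-inventory.yaml
# Each result is saved as <UUID>_<Name>_<Time> in the validation log directory
ls /var/logs/validations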
Chapter 6. Joining RHEL systems to an Active Directory by using RHEL system roles | Chapter 6. Joining RHEL systems to an Active Directory by using RHEL system roles If your organization uses Microsoft Active Directory (AD) to centrally manage users, groups, and other resources, you can join your Red Hat Enterprise Linux (RHEL) host to this AD. For example, AD users can then log into RHEL and you can make services on the RHEL host available for authenticated AD users. By using the ad_integration RHEL system role, you can automate the integration of a Red Hat Enterprise Linux system into an Active Directory (AD) domain. Note The ad_integration role is for deployments using direct AD integration without an Identity Management (IdM) environment. For IdM environments, use the ansible-freeipa roles. 6.1. Joining RHEL to an Active Directory domain by using the ad_integration RHEL system role You can use the ad_integration RHEL system role to automate the process of joining RHEL to an Active Directory (AD) domain. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed node uses a DNS server that can resolve AD DNS entries. Credentials of an AD account which has permissions to join computers to the domain. Ensure that the required ports are open: Ports required for direct integration of RHEL systems into AD using SSSD Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: usr: administrator pwd: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Active Directory integration hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Join an Active Directory ansible.builtin.include_role: name: rhel-system-roles.ad_integration vars: ad_integration_user: "{{ usr }}" ad_integration_password: "{{ pwd }}" ad_integration_realm: "ad.example.com" ad_integration_allow_rc4_crypto: false ad_integration_timesync_source: "time_server.ad.example.com" The settings specified in the example playbook include the following: ad_integration_allow_rc4_crypto: <true|false> Configures whether the role activates the AD-SUPPORT crypto policy on the managed node. By default, RHEL does not support the weak RC4 encryption but, if Kerberos in your AD still requires RC4, you can enable this encryption type by setting ad_integration_allow_rc4_crypto: true . Omit this variable or set it to false if Kerberos uses AES encryption. ad_integration_timesync_source: <time_server> Specifies the NTP server to use for time synchronization. Kerberos requires a synchronized time among AD domain controllers and domain members to prevent replay attacks. If you omit this variable, the ad_integration role does not utilize the timesync RHEL system role to configure time synchronization on the managed node. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook: Verification Check if AD users, such as administrator , are available locally on the managed node: Additional resources /usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file /usr/share/doc/rhel-system-roles/ad_integration/ directory Ansible vault | [
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"usr: administrator pwd: <password>",
"--- - name: Active Directory integration hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Join an Active Directory ansible.builtin.include_role: name: rhel-system-roles.ad_integration vars: ad_integration_user: \"{{ usr }}\" ad_integration_password: \"{{ pwd }}\" ad_integration_realm: \"ad.example.com\" ad_integration_allow_rc4_crypto: false ad_integration_timesync_source: \"time_server.ad.example.com\"",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'getent passwd [email protected]' [email protected]:*:1450400500:1450400513:Administrator:/home/[email protected]:/bin/bash"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automating_system_administration_by_using_rhel_system_roles/integrating-rhel-systems-into-ad-directly-with-ansible-using-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles |
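Beyond the getent check shown in the verification step, you can inspect the join from the managed node itself. This assumes the role enrolled the host through realmd and SSSD; the domain and user names are placeholders.
# On the managed node, confirm the enrolled domain and resolve an AD user
realm list
id [email protected]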
Chapter 1. Provisioning APIs | Chapter 1. Provisioning APIs 1.1. BMCEventSubscription [metal3.io/v1alpha1] Description BMCEventSubscription is the Schema for the fast eventing API Type object 1.2. BareMetalHost [metal3.io/v1alpha1] Description BareMetalHost is the Schema for the baremetalhosts API Type object 1.3. FirmwareSchema [metal3.io/v1alpha1] Description FirmwareSchema is the Schema for the firmwareschemas API Type object 1.4. HardwareData [metal3.io/v1alpha1] Description HardwareData is the Schema for the hardwaredata API Type object 1.5. HostFirmwareSettings [metal3.io/v1alpha1] Description HostFirmwareSettings is the Schema for the hostfirmwaresettings API Type object 1.6. PreprovisioningImage [metal3.io/v1alpha1] Description PreprovisioningImage is the Schema for the preprovisioningimages API Type object 1.7. Provisioning [metal3.io/v1alpha1] Description Provisioning contains configuration used by the Provisioning service (Ironic) to provision baremetal hosts. Provisioning is created by the OpenShift installer using admin or user provided information about the provisioning network and the NIC on the server that can be used to PXE boot it. This CR is a singleton, created by the installer and currently only consumed by the cluster-baremetal-operator to bring up and update containers in a metal3 cluster. Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/provisioning_apis/provisioning-apis |
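These resources are ordinarily inspected with the oc client. A brief sketch follows; the openshift-machine-api namespace is the conventional location for BareMetalHost objects on installer-provisioned clusters and is an assumption, not part of this reference.
# List bare-metal hosts and review the cluster-wide Provisioning configuration
oc get baremetalhosts -n openshift-machine-api
oc get provisioning -o yaml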
Chapter 11. Disabling Windows container workloads | Chapter 11. Disabling Windows container workloads You can disable the capability to run Windows container workloads by uninstalling the Windows Machine Config Operator (WMCO) and deleting the namespace that was added by default when you installed the WMCO. 11.1. Uninstalling the Windows Machine Config Operator You can uninstall the Windows Machine Config Operator (WMCO) from your cluster. Prerequisites Delete the Windows Machine objects hosting your Windows workloads. Procedure From the Operators OperatorHub page, use the Filter by keyword box to search for Red Hat Windows Machine Config Operator . Click the Red Hat Windows Machine Config Operator tile. The Operator tile indicates it is installed. In the Windows Machine Config Operator descriptor page, click Uninstall . 11.2. Deleting the Windows Machine Config Operator namespace You can delete the namespace that was generated for the Windows Machine Config Operator (WMCO) by default. Prerequisites The WMCO is removed from your cluster. Procedure Remove all Windows workloads that were created in the openshift-windows-machine-config-operator namespace: $ oc delete --all pods --namespace=openshift-windows-machine-config-operator Verify that all pods in the openshift-windows-machine-config-operator namespace are deleted or are reporting a terminating state: $ oc get pods --namespace openshift-windows-machine-config-operator Delete the openshift-windows-machine-config-operator namespace: $ oc delete namespace openshift-windows-machine-config-operator Additional resources Deleting Operators from a cluster Removing Windows nodes | [
"oc delete --all pods --namespace=openshift-windows-machine-config-operator",
"oc get pods --namespace openshift-windows-machine-config-operator",
"oc delete namespace openshift-windows-machine-config-operator"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/windows_container_support_for_openshift/disabling-windows-container-workloads |
9.6. Utilization and Placement Strategy | 9.6. Utilization and Placement Strategy Pacemaker decides where to place a resource according to the resource allocation scores on every node. The resource will be allocated to the node where the resource has the highest score. This allocation score is derived from a combination of factors, including resource constraints, resource-stickiness settings, prior failure history of a resource on each node, and utilization of each node. If the resource allocation scores on all the nodes are equal, by the default placement strategy Pacemaker will choose a node with the least number of allocated resources for balancing the load. If the number of resources on each node is equal, the first eligible node listed in the CIB will be chosen to run the resource. Often, however, different resources use significantly different proportions of a node's capacities (such as memory or I/O). You cannot always balance the load ideally by taking into account only the number of resources allocated to a node. In addition, if resources are placed such that their combined requirements exceed the provided capacity, they may fail to start completely or they may run with degraded performance. To take these factors into account, Pacemaker allows you to configure the following components: the capacity a particular node provides the capacity a particular resource requires an overall strategy for placement of resources The following sections describe how to configure these components. 9.6.1. Utilization Attributes To configure the capacity that a node provides or a resource requires, you can use utilization attributes for nodes and resources. You do this by setting a utilization variable for a resource and assigning a value to that variable to indicate what the resource requires, and then setting that same utilization variable for a node and assigning a value to that variable to indicate what that node provides. You can name utilization attributes according to your preferences and define as many name and value pairs as your configuration needs. The values of utilization attributes must be integers. As of Red Hat Enterprise Linux 7.3, you can set utilization attributes with the pcs command. The following example configures a utilization attribute of CPU capacity for two nodes, naming the attribute cpu . It also configures a utilization attribute of RAM capacity, naming the attribute memory . In this example: Node 1 is defined as providing a CPU capacity of two and a RAM capacity of 2048 Node 2 is defined as providing a CPU capacity of four and a RAM capacity of 2048 The following example specifies the same utilization attributes that three different resources require. In this example: resource dummy-small requires a CPU capacity of 1 and a RAM capacity of 1024 resource dummy-medium requires a CPU capacity of 2 and a RAM capacity of 2048 resource dummy-large requires a CPU capacity of 3 and a RAM capacity of 3072 A node is considered eligible for a resource if it has sufficient free capacity to satisfy the resource's requirements, as defined by the utilization attributes. 9.6.2. Placement Strategy After you have configured the capacities your nodes provide and the capacities your resources require, you need to set the placement-strategy cluster property, otherwise the capacity configurations have no effect. For information on setting cluster properties, see Chapter 12, Pacemaker Cluster Properties .
Four values are available for the placement-strategy cluster property: default - Utilization values are not taken into account at all. Resources are allocated according to allocation scores. If scores are equal, resources are evenly distributed across nodes. utilization - Utilization values are taken into account only when deciding whether a node is considered eligible (that is, whether it has sufficient free capacity to satisfy the resource's requirements). Load-balancing is still done based on the number of resources allocated to a node. balanced - Utilization values are taken into account when deciding whether a node is eligible to serve a resource and when load-balancing, so an attempt is made to spread the resources in a way that optimizes resource performance. minimal - Utilization values are taken into account only when deciding whether a node is eligible to serve a resource. For load-balancing, an attempt is made to concentrate the resources on as few nodes as possible, thereby enabling possible power savings on the remaining nodes. The following example command sets the value of placement-strategy to balanced . After running this command, Pacemaker will ensure the load from your resources will be distributed evenly throughout the cluster, without the need for complicated sets of colocation constraints. 9.6.3. Resource Allocation The following subsections summarize how Pacemaker allocates resources. 9.6.3.1. Node Preference Pacemaker determines which node is preferred when allocating resources according to the following strategy. The node with the highest node weight gets consumed first. Node weight is a score maintained by the cluster to represent node health. If multiple nodes have the same node weight: If the placement-strategy cluster property is default or utilization : The node that has the least number of allocated resources gets consumed first. If the numbers of allocated resources are equal, the first eligible node listed in the CIB gets consumed first. If the placement-strategy cluster property is balanced : The node that has the most free capacity gets consumed first. If the free capacities of the nodes are equal, the node that has the least number of allocated resources gets consumed first. If the free capacities of the nodes are equal and the number of allocated resources is equal, the first eligible node listed in the CIB gets consumed first. If the placement-strategy cluster property is minimal , the first eligible node listed in the CIB gets consumed first. 9.6.3.2. Node Capacity Pacemaker determines which node has the most free capacity according to the following strategy. If only one type of utilization attribute has been defined, free capacity is a simple numeric comparison. If multiple types of utilization attributes have been defined, then the node that is numerically highest in the most attribute types has the most free capacity. For example: If NodeA has more free CPUs, and NodeB has more free memory, then their free capacities are equal. If NodeA has more free CPUs, while NodeB has more free memory and storage, then NodeB has more free capacity. 9.6.3.3. Resource Allocation Preference Pacemaker determines which resource is allocated first according to the following strategy. The resource that has the highest priority gets allocated first. For information on setting priority for a resource, see Table 6.3, "Resource Meta Options" . 
If the priorities of the resources are equal, the resource that has the highest score on the node where it is running gets allocated first, to prevent resource shuffling. If the resource scores on the nodes where the resources are running are equal or the resources are not running, the resource that has the highest score on the preferred node gets allocated first. If the resource scores on the preferred node are equal in this case, the first runnable resource listed in the CIB gets allocated first. 9.6.4. Resource Placement Strategy Guidelines To ensure that Pacemaker's placement strategy for resources works most effectively, you should take the following considerations into account when configuring your system. Make sure that you have sufficient physical capacity. If the physical capacity of your nodes is being used to near maximum under normal conditions, then problems could occur during failover. Even without the utilization feature, you may start to experience timeouts and secondary failures. Build some buffer into the capabilities you configure for the nodes. Advertise slightly more node resources than you physically have, on the assumption that a Pacemaker resource will not use 100% of the configured amount of CPU, memory, and so forth all the time. This practice is sometimes called overcommit. Specify resource priorities. If the cluster is going to sacrifice services, it should be the ones you care about least. Ensure that resource priorities are properly set so that your most important resources are scheduled first. For information on setting resource priorities, see Table 6.3, "Resource Meta Options" . 9.6.5. The NodeUtilization Resource Agent (Red Hat Enterprise Linux 7.4 and later) Red Hat Enterprise Linux 7.4 supports the NodeUtilization resource agent. The NodeUtilization agent can detect the system parameters of available CPU, host memory availability, and hypervisor memory availability and add these parameters into the CIB. You can run the agent as a clone resource to have it automatically populate these parameters on each node. For information on the NodeUtilization resource agent and the resource options for this agent, run the pcs resource describe NodeUtilization command. | [
"pcs node utilization node1 cpu=2 memory=2048 pcs node utilization node2 cpu=4 memory=2048",
"pcs resource utilization dummy-small cpu=1 memory=1024 pcs resource utilization dummy-medium cpu=2 memory=2048 pcs resource utilization dummy-large cpu=3 memory=3072",
"pcs property set placement-strategy=balanced"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-utilization-haar |
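The section above notes that the NodeUtilization agent can run as a clone but does not show a command. One possible invocation, with an arbitrary resource name, might look like the following sketch.
# Create the agent as a cloned resource so each node populates its own utilization attributes
pcs resource create nodeutil NodeUtilization clone
# Review the utilization values recorded for the cluster nodes
pcs node utilization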
Using Qpid JMS | Using Qpid JMS Red Hat build of Apache Qpid JMS 2.4 Developing an AMQ messaging client using Jakarta | [
"cd <project-dir>",
"<repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository>",
"<dependency> <groupId>org.apache.qpid</groupId> <artifactId>qpid-jms-client</artifactId> <version>2.4.0.redhat-00005</version> </dependency>",
"unzip amq-clients-2.4.0-maven-repository.zip",
"git clone https://github.com/apache/qpid-jms.git qpid-jms",
"cd qpid-jms git checkout 2.4.0",
"mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests",
"java -cp \"target/classes:target/dependency/*\" org.apache.qpid.jms.example.HelloWorld",
"> java -cp \"target\\classes;target\\dependency\\*\" org.apache.qpid.jms.example.HelloWorld",
"java -cp \"target/classes/:target/dependency/*\" org.apache.qpid.jms.example.HelloWorld Hello world!",
"javax.naming.Context context = new javax.naming.InitialContext();",
"java.naming.factory.initial = org.apache.qpid.jms.jndi.JmsInitialContextFactory",
"java -Djava.naming.factory.initial=org.apache.qpid.jms.jndi.JmsInitialContextFactory",
"Hashtable<Object, Object> env = new Hashtable<>(); env.put(\"java.naming.factory.initial\", \"org.apache.qpid.jms.jndi.JmsInitialContextFactory\"); InitialContext context = new InitialContext(env);",
"connectionFactory. <lookup-name> = <connection-uri>",
"connectionFactory.app1 = amqp://example.net:5672?jms.clientID=backend",
"ConnectionFactory factory = (ConnectionFactory) context.lookup(\"app1\");",
"<scheme>://<host>:<port>[?<option>=<value>[&<option>=<value>...]]",
"amqp://example.net:5672?jms.clientID=backend",
"failover:(<connection-uri>[,<connection-uri>...])[?<option>=<value>[&<option>=<value>...]]",
"failover:(amqp://host1:5672,amqp://host2:5672)?jms.clientID=backend",
"queue. <lookup-name> = <queue-name> topic. <lookup-name> = <topic-name>",
"queue.jobs = app1/work-items topic.notifications = app1/updates",
"Queue queue = (Queue) context.lookup(\"jobs\"); Topic topic = (Topic) context.lookup(\"notifications\");",
"amqp://localhost:5672?jms.clientID=foo&transport.connectTimeout=30000",
"amqps://myhost.mydomain:5671",
"failover:(amqp://host1:5672,amqp://host2:5672)?jms.clientID=foo&failover.maxReconnectAttempts=20",
"failover:(amqp://host1:5672?amqp.option=value,amqp://host2:5672?transport.option=value)?jms.clientID=foo",
"failover:(amqp://host1:5672,amqp://host2:5672)?jms.clientID=foo&failover.nested.amqp.vhost=myhost",
"discovery:(<agent-uri>)?discovery.maxReconnectAttempts=20&discovery.discovered.jms.clientID=foo",
"discovery:(file:///path/to/monitored-file?updateInterval=60000)",
"discovery:(multicast://default?group=default)",
"Configure the InitialContextFactory class to use java.naming.factory.initial = org.apache.qpid.jms.jndi.JmsInitialContextFactory Configure the ConnectionFactory connectionfactory.myFactoryLookup = amqp://localhost:5672 Configure the destination queue.myDestinationLookup = queue",
"package org.jboss.amq.example; import jakarta.jms.Connection; import jakarta.jms.ConnectionFactory; import jakarta.jms.DeliveryMode; import jakarta.jms.Destination; import jakarta.jms.ExceptionListener; import jakarta.jms.JMSException; import jakarta.jms.Message; import jakarta.jms.MessageProducer; import jakarta.jms.Session; import jakarta.jms.TextMessage; import javax.naming.Context; import javax.naming.InitialContext; public class Sender { public static void main(String[] args) throws Exception { try { Context context = new InitialContext(); 1 ConnectionFactory factory = (ConnectionFactory) context.lookup(\"myFactoryLookup\"); Destination destination = (Destination) context.lookup(\"myDestinationLookup\"); 2 Connection connection = factory.createConnection(\"<username>\", \"<password>\"); connection.setExceptionListener(new MyExceptionListener()); connection.start(); 3 Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); 4 MessageProducer messageProducer = session.createProducer(destination); 5 TextMessage message = session.createTextMessage(\"Message Text!\"); 6 messageProducer.send(message, DeliveryMode.NON_PERSISTENT, Message.DEFAULT_PRIORITY, Message.DEFAULT_TIME_TO_LIVE); 7 connection.close(); 8 } catch (Exception exp) { System.out.println(\"Caught exception, exiting.\"); exp.printStackTrace(System.out); System.exit(1); } } private static class MyExceptionListener implements ExceptionListener { @Override public void onException(JMSException exception) { System.out.println(\"Connection ExceptionListener fired, exiting.\"); exception.printStackTrace(System.out); System.exit(1); } } }",
"package org.jboss.amq.example; import jakarta.jms.Connection; import jakarta.jms.ConnectionFactory; import jakarta.jms.Destination; import jakarta.jms.ExceptionListener; import jakarta.jms.JMSException; import jakarta.jms.Message; import jakarta.jms.MessageConsumer; import jakarta.jms.Session; import jakarta.jms.TextMessage; import javax.naming.Context; import javax.naming.InitialContext; public class Receiver { public static void main(String[] args) throws Exception { try { Context context = new InitialContext(); 1 ConnectionFactory factory = (ConnectionFactory) context.lookup(\"myFactoryLookup\"); Destination destination = (Destination) context.lookup(\"myDestinationLookup\"); 2 Connection connection = factory.createConnection(\"<username>\", \"<password>\"); connection.setExceptionListener(new MyExceptionListener()); connection.start(); 3 Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); 4 MessageConsumer messageConsumer = session.createConsumer(destination); 5 Message message = messageConsumer.receive(5000); 6 if (message == null) { 7 System.out.println(\"A message was not received within given time.\"); } else { System.out.println(\"Received message: \" + ((TextMessage) message).getText()); } connection.close(); 8 } catch (Exception exp) { System.out.println(\"Caught exception, exiting.\"); exp.printStackTrace(System.out); System.exit(1); } } private static class MyExceptionListener implements ExceptionListener { @Override public void onException(JMSException exception) { System.out.println(\"Connection ExceptionListener fired, exiting.\"); exception.printStackTrace(System.out); System.exit(1); } } }",
"<dependency> <groupId>io.netty</groupId> <artifactId>netty-tcnative</artifactId> <version>2.0.61.redhat-00001</version> <classifier>linux-x86_64-fedora</classifier> </dependency>",
"amqp://myhost:5672?amqp.saslMechanisms=GSSAPI failover:(amqp://myhost:5672?amqp.saslMechanisms=GSSAPI)",
"-Djava.security.auth.login.config=<login-config-file>",
"amqp-jms-client { com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true; };",
"connection.createSession(false, 101);",
"connection.createSession(false, 100);",
"<dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>USD{jaeger-version}</version> </dependency>",
"amqps://example.net? jms.tracing=opentracing",
"import io.jaegertracing.Configuration; import io.opentracing.Tracer; import io.opentracing.util.GlobalTracer; public class Example { public static void main(String[] args) { Tracer tracer = Configuration.fromEnv(\" <service-name> \").getTracer(); GlobalTracer.registerIfAbsent(tracer); // } }",
"export JAEGER_SAMPLER_TYPE=const export JAEGER_SAMPLER_PARAM=1 java -jar example.jar net.example.Example",
"/home/ <username> /.m2/settings.xml",
"C:\\Users\\<username>\\.m2\\settings.xml",
"<settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings>",
"<project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project>",
"<repository> <id>red-hat-local</id> <url>USD{repository-url}</url> </repository>",
"<broker-instance-dir> /bin/artemis run",
"example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live",
"<broker-instance-dir> /bin/artemis queue create --name queue --address queue --auto-create-address --anycast",
"<broker-instance-dir> /bin/artemis stop"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_jms/2.4/html-single/using_qpid_jms/%7BProductIntroBookUrl%7D |
Chapter 1. Viewing applications that are connected to OpenShift AI | Chapter 1. Viewing applications that are connected to OpenShift AI You can view the available open source and third-party connected applications from the OpenShift AI dashboard. Prerequisites You have logged in to Red Hat OpenShift AI. Procedure From the OpenShift AI dashboard, select Applications Explore . The Explore page displays applications that are available for use with OpenShift AI. Click a tile for more information about the application or to access the Enable button. Note: The Enable button is visible only if an application does not require an OpenShift Operator installation. Verification You can access the Explore page and click on tiles. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_connected_applications/viewing-connected-applications_connected-apps |
Chapter 78. versions | Chapter 78. versions This chapter describes the commands under the versions command. 78.1. versions show Show available versions of services Usage: Table 78.1. Command arguments Value Summary -h, --help Show this help message and exit --all-interfaces Show values for all interfaces --interface <interface> Show versions for a specific interface. --region-name <region_name> Show versions for a specific region. --service <service> Show versions for a specific service. the argument should be either an exact match to what is in the catalog or a known official value or alias from service-types-authority ( https://service-types.openstack.org/) --status <status> Show versions for a specific status. valid values are: - SUPPORTED - CURRENT - DEPRECATED - EXPERIMENTAL Table 78.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 78.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 78.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 78.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack versions show [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-interfaces | --interface <interface>] [--region-name <region_name>] [--service <service>] [--status <status>]"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/versions |
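A couple of illustrative invocations of the command documented above; the service and region values are placeholders.
# Show only current versions of the identity service
openstack versions show --service identity --status CURRENT
# Restrict the query to one region and emit JSON
openstack versions show --service identity --region-name RegionOne -f json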
24.5. Performance Tuning | 24.5. Performance Tuning Click on the Performance Tuning tab to configure the maximum number of child server processes you want and to configure the Apache HTTP Server options for client connections. The default settings for these options are appropriate for most situations. Altering these settings may affect the overall performance of your Web server. Figure 24.11. Performance Tuning Set Max Number of Connections to the maximum number of simultaneous client requests that the server can handle. For each connection, a child httpd process is created. After this maximum number of processes is reached, no one else can connect to the Web server until a child server process is freed. You can not set this value to higher than 256 without recompiling. This option corresponds to the MaxClients directive. Connection Timeout defines, in seconds, the amount of time that your server waits for receipts and transmissions during communications. Specifically, Connection Timeout defines how long your server waits to receive a GET request, how long it waits to receive TCP packets on a POST or PUT request, and how long it waits between ACKs responding to TCP packets. By default, Connection Timeout is set to 300 seconds, which is appropriate for most situations. This option corresponds to the TimeOut directive. Set the Max requests per connection to the maximum number of requests allowed per persistent connection. The default value is 100, which should be appropriate for most situations. This option corresponds to the MaxRequestsPerChild directive. If you check the Allow unlimited requests per connection option, the MaxKeepAliveRequests directive is set to 0 and unlimited requests are allowed. If you uncheck the Allow Persistent Connections option, the KeepAlive directive is set to false. If you check it, the KeepAlive directive is set to true, and the KeepAliveTimeout directive is set to the number that is selected as the Timeout for Connection value. This directive sets the number of seconds your server waits for a subsequent request, after a request has been served, before it closes the connection. Once a request has been received, the Connection Timeout value applies instead. Setting the Persistent Connections to a high value may cause the server to slow down, depending on how many users are trying to connect to it. The higher the number, the more server processes are waiting for another connection from the last client that connected to it. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/HTTPD_Configuration-Performance_Tuning |
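The GUI fields described above map to ordinary httpd.conf directives. The fragment below is only a sketch: the directive values are examples and the file path is the usual Red Hat location, so treat both as assumptions.
# Illustrative httpd.conf fragment matching the options above
cat >> /etc/httpd/conf/httpd.conf <<'EOF'
MaxClients 150
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15
EOF
# Check the configuration syntax before restarting the service
apachectl configtest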
6.4. Configuring Host Names Using nmcli | 6.4. Configuring Host Names Using nmcli The NetworkManager tool nmcli can be used to query and set the static host name in the /etc/hostname file. To query the static host name, issue the following command: To set the static host name to my-server , issue the following command as root : | [
"~]USD nmcli general hostname",
"~]# nmcli general hostname my-server"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configuring_host_names_using_nmcli |
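A quick way to confirm the change took effect; hostnamectl is assumed to be available alongside nmcli on the system.
# Verify the static host name after setting it
nmcli general hostname
cat /etc/hostname
hostnamectl status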
5.2. Global Settings | 5.2. Global Settings The global settings configure parameters that apply to all servers running HAProxy. A typical global section may look like the following: In the above configuration, the administrator has configured the service to log all entries to the local syslog server. By default, this could be /var/log/syslog or some user-designated location. The maxconn parameter specifies the maximum number of concurrent connections for the service. By default, the maximum is 2000. The user and group parameters specify the user name and group name to which the haproxy process belongs. Finally, the daemon parameter specifies that haproxy run as a background process. | [
"global log 127.0.0.1 local2 maxconn 4000 user haproxy group haproxy daemon"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/s1-haproxy-setup-global |
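After editing the global section shown above, it is worth validating the file before reloading the service; the configuration path below is the typical default and may differ on your system.
# Check the configuration for syntax errors, then reload HAProxy
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl reload haproxy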
Chapter 15. URL Handlers | Chapter 15. URL Handlers There are many contexts in Red Hat Fuse where you need to provide a URL to specify the location of a resource (for example, as the argument to a console command). In general, when specifying a URL, you can use any of the schemes supported by Fuse's built-in URL handlers. This appendix describes the syntax for all of the available URL handlers. 15.1. File URL Handler 15.1.1. Syntax A file URL has the syntax, file: PathName , where PathName is the relative or absolute pathname of a file that is available on the Classpath. The provided PathName is parsed by Java's built-in file URL handler . Hence, the PathName syntax is subject to the usual conventions of a Java pathname: in particular, on Windows, each backslash must either be escaped by another backslash or replaced by a forward slash. 15.1.2. Examples For example, consider the pathname, C:\Projects\camel-bundle\target\foo-1.0-SNAPSHOT.jar , on Windows. The following example shows the correct alternatives for the file URL on Windows: The following example shows some incorrect alternatives for the file URL on Windows: 15.2. HTTP URL Handler 15.2.1. Syntax An HTTP URL has the standard syntax, http: Host [: Port ]/[ Path ][# AnchorName ][? Query ] . You can also specify a secure HTTP URL using the https scheme. The provided HTTP URL is parsed by Java's built-in HTTP URL handler, so the HTTP URL behaves in the normal way for a Java application. 15.3. Mvn URL Handler 15.3.1. Overview If you use Maven to build your bundles or if you know that a particular bundle is available from a Maven repository, you can use the Mvn handler scheme to locate the bundle. Note To ensure that the Mvn URL handler can find local and remote Maven artifacts, you might find it necessary to customize the Mvn URL handler configuration. For details, see Section 15.3.5, "Configuring the Mvn URL handler" . 15.3.2. Syntax An Mvn URL has the following syntax: Where repositoryUrl optionally specifies the URL of a Maven repository. The groupId , artifactId , version , packaging , and classifier are the standard Maven coordinates for locating Maven artifacts. 15.3.3. Omitting coordinates When specifying an Mvn URL, only the groupId and the artifactId coordinates are required. The following examples reference a Maven bundle with the groupId , org.fusesource.example , and with the artifactId , bundle-demo : When the version is omitted, as in the first example, it defaults to LATEST , which resolves to the latest version based on the available Maven metadata. In order to specify a classifier value without specifying a packaging or a version value, it is permissible to leave gaps in the Mvn URL. Likewise, you can specify a packaging value without a version value. For example: 15.3.4. Specifying a version range When specifying the version value in an Mvn URL, you can specify a version range (using standard Maven version range syntax) in place of a simple version number. You use square brackets- [ and ] -to denote inclusive ranges and parentheses- ( and ) -to denote exclusive ranges. For example, the range, [1.0.4,2.0) , matches any version, v , that satisfies 1.0.4 <= v < 2.0 . You can use this version range in an Mvn URL as follows: 15.3.5. Configuring the Mvn URL handler Before using Mvn URLs for the first time, you might need to customize the Mvn URL handler settings, as follows: Section 15.3.6, "Check the Mvn URL settings" . Section 15.3.7, "Edit the configuration file" .
Section 15.3.8, "Customize the location of the local repository" . 15.3.6. Check the Mvn URL settings The Mvn URL handler resolves a reference to a local Maven repository and maintains a list of remote Maven repositories. When resolving an Mvn URL, the handler searches first the local repository and then the remote repositories in order to locate the specified Maven artifiact. If there is a problem with resolving an Mvn URL, the first thing you should do is to check the handler settings to see which local repository and remote repositories it is using to resolve URLs. To check the Mvn URL settings, enter the following commands at the console: The config:edit command switches the focus of the config utility to the properties belonging to the org.ops4j.pax.url.mvn persistent ID. The config:proplist command outputs all of the property settings for the current persistent ID. With the focus on org.ops4j.pax.url.mvn , you should see a listing similar to the following: Where the localRepository setting shows the local repository location currently used by the handler and the repositories setting shows the remote repository list currently used by the handler. 15.3.7. Edit the configuration file To customize the property settings for the Mvn URL handler, edit the following configuration file: The settings in this file enable you to specify explicitly the location of the local Maven repository, remove Maven repositories, Maven proxy server settings, and more. Please see the comments in the configuration file for more details about these settings. 15.3.8. Customize the location of the local repository In particular, if your local Maven repository is in a non-default location, you might find it necessary to configure it explicitly in order to access Maven artifacts that you build locally. In your org.ops4j.pax.url.mvn.cfg configuration file, uncomment the org.ops4j.pax.url.mvn.localRepository property and set it to the location of your local Maven repository. For example: 15.3.9. Reference For more details about the mvn URL syntax, see the original Pax URL Mvn Protocol documentation. 15.4. Wrap URL Handler 15.4.1. Overview If you need to reference a JAR file that is not already packaged as a bundle, you can use the Wrap URL handler to convert it dynamically. The implementation of the Wrap URL handler is based on Peter Krien's open source Bnd utility. 15.4.2. Syntax A Wrap URL has the following syntax: The locationURL can be any URL that locates a JAR (where the referenced JAR is not formatted as a bundle). The optional instructionsURL references a Bnd properties file that specifies how the bundle conversion is performed. The optional instructions is an ampersand, & , delimited list of Bnd properties that specify how the bundle conversion is performed. 15.4.3. Default instructions In most cases, the default Bnd instructions are adequate for wrapping an API JAR file. By default, Wrap adds manifest headers to the JAR's META-INF/Manifest.mf file as shown in Table 15.1, "Default Instructions for Wrapping a JAR" . Table 15.1. Default Instructions for Wrapping a JAR Manifest Header Default Value Import-Package *;resolution:=optional Export-Package All packages from the wrapped JAR. Bundle-SymbolicName The name of the JAR file, where any characters not in the set [a-zA-Z0-9_-] are replaced by underscore, _ . 15.4.4. 
Examples The following Wrap URL locates version 1.1 of the commons-logging JAR in a Maven repository and converts it to an OSGi bundle using the default Bnd properties: The following Wrap URL uses the Bnd properties from the file, E:\Data\Examples\commons-logging-1.1.bnd : The following Wrap URL specifies the Bundle-SymbolicName property and the Bundle-Version property explicitly: If the preceding URL is used as a command-line argument, it might be necessary to escape the dollar sign, \$ , to prevent it from being processed by the command line, as follows: 15.4.5. Reference For more details about the wrap URL handler, see the following references: The Bnd tool documentation , for more details about Bnd properties and Bnd instruction files. The original Pax URL Wrap Protocol documentation. 15.5. War URL Handler 15.5.1. Overview If you need to deploy a WAR file in an OSGi container, you can automatically add the requisite manifest headers to the WAR file by prefixing the WAR URL with war: , as described here. 15.5.2. Syntax A War URL is specified using either of the following syntaxes: The first syntax, using the war scheme, specifies a WAR file that is converted into a bundle using the default instructions. The warURL can be any URL that locates a WAR file. The second syntax, using the warref scheme, specifies a Bnd properties file, instructionsURL , that contains the conversion instructions (including some instructions that are specific to this handler). In this syntax, the location of the referenced WAR file does not appear explicitly in the URL. The WAR file is specified instead by the (mandatory) WAR-URL property in the properties file. 15.5.3. WAR-specific properties/instructions Some of the properties in the .bnd instructions file are specific to the War URL handler, as follows: WAR-URL (Mandatory) Specifies the location of the WAR file that is to be converted into a bundle. Web-ContextPath Specifies the piece of the URL path that is used to access this Web application, after it has been deployed inside the Web container. Note Earlier versions of PAX Web used the property, Webapp-Context , which is now deprecated . 15.5.4. Default instructions By default, the War URL handler adds manifest headers to the WAR's META-INF/Manifest.mf file as shown in Table 15.2, "Default Instructions for Wrapping a WAR File" . Table 15.2. Default Instructions for Wrapping a WAR File Manifest Header Default Value Import-Package javax.*,org.xml.*,org.w3c.* Export-Package No packages are exported. Bundle-SymbolicName The name of the WAR file, where any characters not in the set [a-zA-Z0-9_-\.] are replaced by period, . . Web-ContextPath No default value, but the WAR extender uses the value of Bundle-SymbolicName by default. Bundle-ClassPath In addition to any class path entries specified explicitly, the following entries are added automatically: . WEB-INF/classes All of the JARs from the WEB-INF/lib directory. 15.5.5. Examples The following War URL locates version 1.4.7 of the wicket-examples WAR in a Maven repository and converts it to an OSGi bundle using the default instructions: The following War URL specifies the Web-ContextPath explicitly: The following War URL converts the WAR file referenced by the WAR-URL property in the wicket-examples-1.4.7.bnd file into an OSGi bundle, using the other instructions in the .bnd file: 15.5.6. Reference For more details about the war URL syntax, see the original Pax URL War Protocol documentation. | [
"file:C:/Projects/camel-bundle/target/foo-1.0-SNAPSHOT.jar file:C:\\\\Projects\\\\camel-bundle\\\\target\\\\foo-1.0-SNAPSHOT.jar",
"file:C:\\Projects\\camel-bundle\\target\\foo-1.0-SNAPSHOT.jar // WRONG! file://C:/Projects/camel-bundle/target/foo-1.0-SNAPSHOT.jar // WRONG! file://C:\\\\Projects\\\\camel-bundle\\\\target\\\\foo-1.0-SNAPSHOT.jar // WRONG!",
"mvn:[ repositoryUrl !] groupId / artifactId [/[ version ][/[ packaging ][/[ classifier ]]]]",
"mvn:org.fusesource.example/bundle-demo mvn:org.fusesource.example/bundle-demo/1.1",
"mvn: groupId / artifactId /// classifier mvn: groupId / artifactId / version // classifier mvn: groupId / artifactId // packaging / classifier mvn: groupId / artifactId // packaging",
"mvn:org.fusesource.example/bundle-demo/[1.0.4,2.0)",
"JBossFuse:karaf@root> config:edit org.ops4j.pax.url.mvn JBossFuse:karaf@root> config:proplist",
"org.ops4j.pax.url.mvn.defaultRepositories = file:/path/to/JBossFuse/jboss-fuse-7.13.0.fuse-7_13_0-00012-redhat-00001/system@snapshots@id=karaf.system,file:/home/userid/.m2/repository@snapshots@id=local,file:/path/to/JBossFuse/jboss-fuse-7.13.0.fuse-7_13_0-00012-redhat-00001/local-repo@snapshots@id=karaf.local-repo,file:/path/to/JBossFuse/jboss-fuse-7.13.0.fuse-7_13_0-00012-redhat-00001/system@snapshots@id=child.karaf.system org.ops4j.pax.url.mvn.globalChecksumPolicy = warn org.ops4j.pax.url.mvn.globalUpdatePolicy = daily org.ops4j.pax.url.mvn.localRepository = /path/to/JBossFuse/jboss-fuse-7.13.0.fuse-7_13_0-00012-redhat-00001/data/repository org.ops4j.pax.url.mvn.repositories = http://repo1.maven.org/maven2@id=maven.central.repo, https://maven.repository.redhat.com/ga@id=redhat.ga.repo, https://maven.repository.redhat.com/earlyaccess/all@id=redhat.ea.repo, https://repository.jboss.org/nexus/content/groups/ea@id=fuseearlyaccess org.ops4j.pax.url.mvn.settings = /path/to/jboss-fuse-7.13.0.fuse-7_13_0-00012-redhat-00001/etc/maven-settings.xml org.ops4j.pax.url.mvn.useFallbackRepositories = false service.pid = org.ops4j.pax.url.mvn",
"InstallDir /etc/org.ops4j.pax.url.mvn.cfg",
"Path to the local maven repository which is used to avoid downloading artifacts when they already exist locally. The value of this property will be extracted from the settings.xml file above, or defaulted to: System.getProperty( \"user.home\" ) + \"/.m2/repository\" # org.ops4j.pax.url.mvn.localRepository=file:E:/Data/.m2/repository",
"wrap: locationURL [, instructionsURL ][USD instructions ]",
"wrap:mvn:commons-logging/commons-logging/1.1",
"wrap:mvn:commons-logging/commons-logging/1.1,file:E:/Data/Examples/commons-logging-1.1.bnd",
"wrap:mvn:commons-logging/commons-logging/1.1USDBundle-SymbolicName=apache-comm-log&Bundle-Version=1.1",
"wrap:mvn:commons-logging/commons-logging/1.1\\USDBundle-SymbolicName=apache-comm-log&Bundle-Version=1.1",
"war: warURL warref: instructionsURL",
"war:mvn:org.apache.wicket/wicket-examples/1.4.7/war",
"war:mvn:org.apache.wicket/wicket-examples/1.4.7/war?Web-ContextPath=wicket",
"warref:file:E:/Data/Examples/wicket-examples-1.4.7.bnd"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/UrlHandlers |
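The URL schemes described in the chapter above are typically passed to the Karaf console's bundle:install command. The following commands are a minimal sketch that reuses the Maven coordinates from the examples; the console prompt shown is illustrative, and the wrap and war schemes only resolve if the corresponding Pax URL handlers are installed in the container:

    # Install a bundle directly from a Maven repository and start it (-s)
    karaf@root()> bundle:install -s mvn:org.fusesource.example/bundle-demo/1.1

    # Wrap a plain JAR as an OSGi bundle at install time
    karaf@root()> bundle:install wrap:mvn:commons-logging/commons-logging/1.1

    # Convert and deploy a WAR, quoting the URL so the console does not interpret the ? and = characters
    karaf@root()> bundle:install "war:mvn:org.apache.wicket/wicket-examples/1.4.7/war?Web-ContextPath=wicket"

Omit the -s flag if you prefer to start bundles explicitly with bundle:start after installation.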
Server Installation and Configuration Guide | Server Installation and Configuration Guide Red Hat Single Sign-On 7.4 For Use with Red Hat Single Sign-On 7.4 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_installation_and_configuration_guide/index |
High Availability Add-On Administration | High Availability Add-On Administration Red Hat Enterprise Linux 7 Configuring Red Hat High Availability deployments Steven Levine Red Hat Customer Content Services [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index |
5.78. gdb | 5.78. gdb 5.78.1. RHBA-2012:0930 - gdb bug fix update Updated gdb packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. The GNU Debugger (GDB) is the standard debugger for Linux. With GDB, users can debug programs written in C, C++, and other languages by executing them in a controlled fashion and printing out their data. Bug Fixes BZ# 739685 To load a core file, GDB requires the binaries that were used to produce the core file. GDB uses a built-in detection to load the matching binaries automatically. However, you can specify arbitrary binaries manually and override the detection. Previously, loading other binaries that did not match the invoked core file could cause GDB to terminate unexpectedly. With this update, the underlying code has been modified and GDB no longer crashes under these circumstances. BZ# 750341 Previously, GDB could terminate unexpectedly when loading symbols for a C++ program compiled with early GCC compilers due to errors in the cp_scan_for_anonymous_namespaces() function. With this update, an upstream patch that fixes this bug has been adopted and GDB now loads any known executables without crashing. BZ# 781571 If GDB failed to find the associated debuginfo rpm symbol files, GDB displayed the following message suggesting installation of the symbol files using the yum utility: Missing separate debuginfo for the main executable file Try: yum --disablerepo='*' --enablerepo='*-debuginfo' install /usr/lib/debug/.build-id/47/830504b69d8312361b1ed465ba86c9e815b800 However, the suggested "--enablerepo='*-debuginfo'" option failed to work with RHN (Red Hat Network) debug repositories. This update corrects the option in the message to "--enablerepo='*-debug*'" and the suggested command works as expected. BZ# 806920 On PowerPC platforms, DWARF information created by the IBM XL Fortran compiler does not contain the DW_AT_type attribute for DW_TAG_subrange_type; however, DW_TAG_subrange_type in the DWARF information generated by GCC always contains the DW_AT_type attribute. Previously, GDB could interpret arrays from IBM XL Fortran compiler incorrectly as it was missing the DW_AT_type attribute, even though this is in accordance with the DWARF standard. This updated GDB now correctly provides a stub index type if DW_AT_type is missing for any DW_TAG_subrange_type, and processes debug info from both IBM XL Fortran and GCC compilers correctly. All users of gdb are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/gdb |
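For reference, the corrected workflow suggested by that message looks similar to the following sketch. The executable path and core file name are placeholders, and the build-id path is the one from the example message above; substitute the values that GDB reports for your own core file:

    # Load the core file together with the binary that produced it (placeholder names)
    gdb /usr/bin/myapp /var/tmp/core.12345

    # If GDB reports missing debug symbols, install them from the debug repositories
    yum --disablerepo='*' --enablerepo='*-debug*' install \
        /usr/lib/debug/.build-id/47/830504b69d8312361b1ed465ba86c9e815b800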
Chapter 20. Job templates | Chapter 20. Job templates A job template is a definition and set of parameters for running an Ansible job. Job templates are useful to run the same job many times. They also encourage the reuse of Ansible playbook content and collaboration between teams. The Templates list view shows job templates that are currently available. The default view is collapsed (Compact), showing the template name, template type, and the timestamp of the last job that ran using that template. You can click the arrow icon to each entry to expand and view more information. This list is sorted alphabetically by name, but you can sort by other criteria, or search by various fields and attributes of a template. From this screen you can launch , edit , and copy a workflow job template. Note Job templates can be used to build a workflow template. Templates that show the Workflow Visualizer icon to them are workflow templates. Clicking the icon allows you to build a workflow graphically. Many parameters in a job template enable you to select Prompt on Launch that you can change at the workflow level, and do not affect the values assigned at the job template level. For instructions, see the Workflow Visualizer section. 20.1. Creating a job template Procedure On the Templates list view, select Add job template from the Add list. Enter the appropriate details in the following fields: Note If a field has the Prompt on launch checkbox selected, launching the job prompts you for the value for that field when launching. Most prompted values override any values set in the job template. Exceptions are noted in the following table. Field Options Prompt on Launch Name Enter a name for the job. N/A Description Enter an arbitrary description as appropriate (optional). N/A Job Type Choose a job type: Run: Start the playbook when launched, running Ansible tasks on the selected hosts. Check: Perform a "dry run" of the playbook and report changes that would be made without actually making them. Tasks that do not support check mode are missed and do not report potential changes. For more information about job types see the Playbooks section of the Ansible documentation. Yes Inventory Choose the inventory to be used with this job template from the inventories available to the logged in user. A System Administrator must grant you or your team permissions to be able to use certain inventories in a job template. Yes. Inventory prompts show up as its own step in a later prompt window. Project Select the project to use with this job template from the projects available to the user that is logged in. N/A SCM branch This field is only present if you chose a project that allows branch override. Specify the overriding branch to use in your job run. If left blank, the specified SCM branch (or commit hash or tag) from the project is used. For more information, see Job branch overriding . Yes Execution Environment Select the container image to be used to run this job. You must select a project before you can select an execution environment. Yes. Execution environment prompts show up as its own step in a later prompt window. Playbook Choose the playbook to be launched with this job template from the available playbooks. This field automatically populates with the names of the playbooks found in the project base path for the selected project. Alternatively, you can enter the name of the playbook if it is not listed, such as the name of a file (such as foo.yml) you want to use to run with that playbook. 
If you enter a filename that is not valid, the template displays an error, or causes the job to fail. N/A Credentials Select the icon to open a separate window. Choose the credential from the available options to use with this job template. Use the drop-down menu list to filter by credential type if the list is extensive. Some credential types are not listed because they do not apply to certain job templates. If selected, when launching a job template that has a default credential and supplying another credential replaces the default credential if it is the same type. The following is an example this message: Job Template default credentials must be replaced with one of the same type. Please select a credential for the following types in order to proceed: Machine. Alternatively, you can add more credentials as you see fit. Credential prompts show up as its own step in a later prompt window. Labels Optionally supply labels that describe this job template, such as dev or test . Use labels to group and filter job templates and completed jobs in the display. Labels are created when they are added to the job template. Labels are associated with a single Organization by using the Project that is provided in the job template. Members of the Organization can create labels on a job template if they have edit permissions (such as the admin role). Once you save the job template, the labels appear in the Job Templates overview in the Expanded view. Select beside a label to remove it. When a label is removed, it is no longer associated with that particular Job or Job Template, but it remains associated with any other jobs that reference it. Jobs inherit labels from the Job Template at the time of launch. If you delete a label from a Job Template, it is also deleted from the Job. If selected, even if a default value is supplied, you are prompted when launching to supply additional labels, if needed. You cannot delete existing labels, selecting only removes the newly added labels, not existing default labels. Variables Pass extra command line variables to the playbook. This is the "-e" or "-extra-vars" command line parameter for ansible-playbook that is documented in the Ansible documentation at Defining variables at runtime . Provide key or value pairs by using either YAML or JSON. These variables have a maximum value of precedence and overrides other variables specified elsewhere. The following is an example value: git_branch: production release_version: 1.5 Yes. If you want to be able to specify extra_vars on a schedule, you must select Prompt on launch for Variables on the job template, or enable a survey on the job template. Those answered survey questions become extra_vars . Forks The number of parallel or simultaneous processes to use while executing the playbook. A value of zero uses the Ansible default setting, which is five parallel processes unless overridden in /etc/ansible/ansible.cfg . Yes Limit A host pattern to further constrain the list of hosts managed or affected by the playbook. You can separate many patterns by colons (:). As with core Ansible: a:b means "in group a or b" a:b:&c means "in a or b but must be in c" a:!b means "in a, and definitely not in b" For more information, see Patterns: targeting hosts and groups in the Ansible documentation. Yes If not selected, the job template executes against all nodes in the inventory or only the nodes predefined on the Limit field. When running as part of a workflow, the workflow job template limit is used instead. 
Verbosity Control the level of output Ansible produces as the playbook executes. Choose the verbosity from Normal to various Verbose or Debug settings. This only appears in the details report view. Verbose logging includes the output of all commands. Debug logging is exceedingly verbose and includes information about SSH operations that can be useful in certain support instances. Verbosity 5 causes automation controller to block heavily when jobs are running, which could delay reporting that the job has finished (even though it has) and can cause the browser tab to lock up. Yes Job Slicing Specify the number of slices you want this job template to run. Each slice runs the same tasks against a part of the inventory. For more information about job slices, see Job Slicing . Yes Timeout This enables you to specify the length of time (in seconds) that the job can run before it is canceled. Consider the following for setting the timeout value: There is a global timeout defined in the settings which defaults to 0, indicating no timeout. A negative timeout (<0) on a job template is a true "no timeout" on the job. A timeout of 0 on a job template defaults the job to the global timeout (which is no timeout by default). A positive timeout sets the timeout for that job template. Yes Show Changes Enables you to see the changes made by Ansible tasks. Yes Instance Groups Choose Instance and Container Groups to associate with this job template. If the list is extensive, use the icon to narrow the options. Job template instance groups contribute to the job scheduling criteria, see Job Runtime Behavior and Control where a job runs for rules. A System Administrator must grant you or your team permissions to be able to use an instance group in a job template. Use of a container group requires admin rights. Yes. If selected, you are providing the jobs preferred instance groups in order of preference. If the first group is out of capacity, later groups in the list are considered until one with capacity is available, at which point that is selected to run the job. If you prompt for an instance group, what you enter replaces the normal instance group hierarchy and overrides all of the organizations' and inventories' instance groups. The Instance Groups prompt shows up as its own step in a later prompt window. Job Tags Type and select the Create menu to specify which parts of the playbook should be executed. For more information and examples see Tags in the Ansible documentation. Yes Skip Tags Type and select the Create menu to specify certain tasks or parts of the playbook to skip. For more information and examples see Tags in the Ansible documentation. Yes Specify the following Options for launching this template, if necessary: Privilege Escalation : If checked, you enable this playbook to run as an administrator. This is the equivalent of passing the --become option to the ansible-playbook command. Provisioning Callbacks : If checked, you enable a host to call back to automation controller through the REST API and start a job from this job template. For more information, see Provisioning Callbacks . Enable Webhook : If checked, you turn on the ability to interface with a predefined SCM system web service that is used to launch a job template. GitHub and GitLab are the supported SCM systems. If you enable webhooks, other fields display, prompting for additional information: Webhook Service : Select which service to listen for webhooks from. 
Webhook URL : Automatically populated with the URL for the webhook service to POST requests to. Webhook Key : Generated shared secret to be used by the webhook service to sign payloads sent to automation controller. You must configure this in the settings on the webhook service in order for automation controller to accept webhooks from this service. Webhook Credential : Optionally, provide a GitHub or GitLab personal access token (PAT) as a credential to use to send status updates back to the webhook service. Before you can select it, the credential must exist. See Credential Types to create one. For additional information about setting up webhooks, see Working with Webhooks . Concurrent Jobs : If checked, you are allowing jobs in the queue to run simultaneously if not dependent on one another. Check this box if you want to run job slices simultaneously. For more information, see Automation controller capacity determination and job impact . Enable Fact Storage : If checked, automation controller stores gathered facts for all hosts in an inventory related to the job running. Prevent Instance Group Fallback : Check this option to allow only the instance groups listed in the Instance Groups field to run the job. If clear, all available instances in the execution pool are used based on the hierarchy described in Control where a job runs . Click Save , when you have completed configuring the details of the job template. Saving the template does not exit the job template page but advances to the Job Template Details tab. After saving the template, you can click Launch to launch the job, or click Edit to add or change the attributes of the template, such as permissions, notifications, view completed jobs, and add a survey (if the job type is not a scan). You must first save the template before launching, otherwise, Launch remains disabled. Verification From the navigation panel, select Resources Templates . Verify that the newly created template appears on the Templates list view. 20.2. Adding permissions to templates Use the following steps to add permissions for the team. Procedure From the navigation panel, select Resources Templates . Select a template, and in the Access tab , click Add . Select Users or Teams and click . Select one or more users or teams from the list by clicking the check boxes to the names to add them as members and click . The following example shows two users have been selected to be added: Choose the roles that you want users or teams to have. Ensure that you scroll down for a complete list of roles. Each resource has different options available. Click Save to apply the roles to the selected users or teams and to add them as members. The window to add users and teams closes to display the the updated roles assigned for each user and team: To remove roles for a particular user, click the icon to its resource. This launches a confirmation dialog, asking you to confirm the disassociation. 20.3. Deleting a job template Before deleting a job template, ensure that it is not used in a workflow job template. Procedure Delete a job template by using one of these methods: Select the checkbox to one or more job templates and click Delete . Click the desired job template and click Delete , on the Details page. Note If deleting items that are used by other work items, a message opens listing the items that are affected by the deletion and prompts you to confirm the deletion. Some screens contain items that are invalid or previously deleted, and will fail to run. 
The following is an example of that message: 20.4. Work with notifications From the navigation panel, select Administration Notifications . This enables you to review any notification integrations you have set up and their statuses, if they have run. Use the toggles to enable or disable the notifications to use with your particular template. For more information, see Enable and Disable Notifications . If no notifications have been set up, click Add to create a new notification. For more information on configuring various notification types and extended messaging, see Notification Types . 20.5. View completed jobs The Jobs tab provides the list of job templates that have run. Click the expand icon to each job to view the following details: Status ID and name Type of job Time started and completed Who started the job and which template, inventory, project, and credential were used. You can filter the list of completed jobs using any of these criteria. Sliced jobs that display on this list are labeled accordingly, with the number of sliced jobs that have run: 20.6. Scheduling job templates Access the schedules for a particular job template from the Schedules tab. Procedure To schedule a job template, select the Schedules tab, and choose the appropriate method: If schedules are already set up, review, edit, enable or disable your schedule preferences. If schedules have not been set up, see Schedules for more information. If you select Prompt on Launch for the Credentials field , and you create or edit scheduling information for your job template, a Prompt option displays on the Schedules form. You cannot remove the default machine credential in the Prompt dialog without replacing it with another machine credential before you can save it. Note To set extra_vars on schedules, you must select Prompt on Launch for Variables on the job template, or configure and enable a survey on the job template. The answered survey questions then become extra_vars . 20.7. Surveys in job templates Job types of Run or Check provide a way to set up surveys in the Job Template creation or editing screens. Surveys set extra variables for the playbook similar to Prompt for Extra Variables does, but in a user-friendly question and answer way. Surveys also permit for validation of user input. Select the Survey tab to create a survey. Example Surveys can be used for a number of situations. For example, operations want to give developers a "push to stage" button that they can run without advance knowledge of Ansible. When launched, this task could prompt for answers to questions such as "What tag should we release?". Many types of questions can be asked, including multiple-choice questions. 20.7.1. Creating a survey Procedure From the Survey tab, click Add . A survey can consist of any number of questions. For each question, enter the following information: Question : The question to ask the user. Optional: Description : A description of what is being asked of the user. Answer variable name : The Ansible variable name to store the user's response in. This is the variable to be used by the playbook. Variable names cannot contain spaces. Answer type : Choose from the following question types: Text : A single line of text. You can set the minimum and maximum length (in characters) for this answer. Textarea : A multi-line text field. You can set the minimum and maximum length (in characters) for this answer. Password : Responses are treated as sensitive information, much like an actual password is treated. 
You can set the minimum and maximum length (in characters) for this answer. Multiple Choice (single select) : A list of options, of which only one can be selected at a time. Enter the options, one per line, in the Multiple Choice Options field. Multiple Choice (multiple select) : A list of options, any number of which can be selected at a time. Enter the options, one per line, in the Multiple Choice Options field. Integer : An integer number. You can set the minimum and maximum length (in characters) for this answer. Float : A decimal number. You can set the minimum and maximum length (in characters) for this answer. Required : Whether or not an answer to this question is required from the user. Minimum length and Maximum length : Specify if a certain length in the answer is required. Default answer : The default answer to the question. This value is pre-filled in the interface and is used if the answer is not provided by the user. Once you have entered the question information, click Save to add the question. The survey question displays in the Survey list. For any question, you can click to edit it. Check the box to each question and click Delete to delete the question, or use the toggle option in the menu bar to enable or disable the survey prompts. If you have more than one survey question, click Edit Order to rearrange the order of the questions by clicking and dragging on the grid icon. To add more questions, click Add . 20.7.2. Optional survey questions The Required setting on a survey question determines whether the answer is optional or not for the user interacting with it. Optional survey variables can also be passed to the playbook in extra_vars . If a non-text variable (input type) is marked as optional, and is not filled in, no survey extra_var is passed to the playbook. If a text input or text area input is marked as optional, is not filled in, and has a minimum length > 0 , no survey extra_var is passed to the playbook. If a text input or text area input is marked as optional, is not filled in, and has a minimum length === 0 , that survey extra_var is passed to the playbook, with the value set to an empty string (""). 20.8. Launching a job template A benefit of automation controller is the push-button deployment of Ansible playbooks. You can configure a template to store all the parameters that you would normally pass to the Ansible playbook on the command line. In addition to the playbooks, the template passes the inventory, credentials, extra variables, and all options and settings that you can specify on the command line. Easier deployments drive consistency, by running your playbooks the same way each time, and allowing you to delegate responsibilities. Procedure Launch a job template by using one of these methods: From the navigation panel, select Resources Templates and click Launch to the job template. In the job template Details view of the job template you want to launch, click Launch . A job can require additional information to run. The following data can be requested at launch: Credentials that were setup The option Prompt on Launch is selected for any parameter Passwords or passphrases that have been set to Ask A survey, if one has been configured for the job templates Extra variables, if requested by the job template Note If a job has user-provided values, then those are respected upon relaunch. If the user did not specify a value, then the job uses the default value from the job template. Jobs are not relaunched as-is. 
They are relaunched with the user prompts re-applied to the job template. If you provide values on one tab, return to a tab, continuing to the tab results in having to re-provide values on the rest of the tabs. Ensure that you fill in the tabs in the order that the prompts appear. When launching, automation controller automatically redirects the web browser to the Job Status page for this job under the Jobs tab. You can re-launch the most recent job from the list view to re-run on all hosts or just failed hosts in the specified inventory. For more information, see the Jobs section. When slice jobs are running, job lists display the workflow and job slices, as well as a link to view their details individually. Note You can launch jobs in bulk using the newly added endpoint in the API, /api/v2/bulk/job_launch . This endpoint accepts JSON and you can specify a list of unified job templates (such as job templates and project updates) to launch. The user must have the appropriate permission to launch all the jobs. If all jobs are not launched an error is returned indicating why the operation was not able to complete. Use the OPTIONS request to return relevant schema. For more information, see the Bulk endpoint of the Reference section of the Automation Controller API Guide. 20.9. Copying a job template If you copy a job template, it does not copy any associated schedule, notifications, or permissions. Schedules and notifications must be recreated by the user or administrator creating the copy of the job template. The user copying the Job Template is be granted administrator permission, but no permissions are assigned (copied) to the job template. Procedure From the navigation panel, select Resources Templates . Click the icon associated with the template that you want to copy. The new template with the name of the template from which you copied and a timestamp displays in the list of templates. Click to open the new template and click Edit . Replace the contents of the Name field with a new name, and provide or modify the entries in the other fields to complete this page. Click Save . 20.10. Scan job templates Scan jobs are no longer supported starting with automation controller 3.2. This system tracking feature was used as a way to capture and store facts as historical data. Facts are now stored in the controller through fact caching. For more information, see Fact Caching . Job template scan jobs in your system before automation controller 3.2, are converted to type run, like normal job templates. They retain their associated resources, such as inventories and credentials. By default, job template scan jobs that do not have a related project are assigned a special playbook. You can also specify a project with your own scan playbook. A project is created for each organization that points to awx-facts-playbooks and the job template was set to the playbook: https://github.com/ansible/tower-fact-modules/blob/master/scan_facts.yml . 20.10.1. Fact scan playbooks The scan job playbook, scan_facts.yml , contains invocations of three fact scan modules - packages, services, and files, along with Ansible's standard fact gathering. 
The scan_facts.yml playbook file is similar to this: - hosts: all vars: scan_use_checksum: false scan_use_recursive: false tasks: - scan_packages: - scan_services: - scan_files: paths: '{{ scan_file_paths }}' get_checksum: '{{ scan_use_checksum }}' recursive: '{{ scan_use_recursive }}' when: scan_file_paths is defined The scan_files fact module is the only module that accepts parameters, passed through extra_vars on the scan job template: scan_file_paths : /tmp/ scan_use_checksum : true scan_use_recursive : true The scan_file_paths parameter can have multiple settings (such as /tmp/ or /var/log ). The scan_use_checksum and scan_use_recursive parameters can also be set to false or omitted. An omission is the same as a false setting. Scan job templates should enable become and use credentials for which become is a possibility. You can enable become by checking Privilege Escalation from the options list: 20.10.2. Supported OSes for scan_facts.yml If you use the scan_facts.yml playbook with use fact cache, ensure that you are using one of the following supported operating systems: Red Hat Enterprise Linux 5, 6, 7, 8, and 9 Ubuntu 23.04 (Support for Ubuntu is deprecated and will be removed in a future release) OEL 6 and 7 SLES 11 and 12 Debian 6, 7, 8, 9, 10, 11, and 12 Fedora 22, 23, and 24 Amazon Linux 2023.1.20230912 Some of these operating systems require initial configuration to run python or have access to the python packages, such as python-apt , which the scan modules depend on. 20.10.3. Pre-scan setup The following are examples of playbooks that configure certain distributions so that scan jobs can be run against them: Bootstrap Ubuntu (16.04) --- - name: Get Ubuntu 16, and on ready hosts: all sudo: yes gather_facts: no tasks: - name: install python-simplejson raw: sudo apt-get -y update raw: sudo apt-get -y install python-simplejson raw: sudo apt-get install python-apt Bootstrap Fedora (23, 24) --- - name: Get Fedora ready hosts: all sudo: yes gather_facts: no tasks: - name: install python-simplejson raw: sudo dnf -y update raw: sudo dnf -y install python-simplejson raw: sudo dnf -y install rpm-python 20.10.4. Custom fact scans A playbook for a custom fact scan is similar to the example in the Fact scan playbooks section. For example, a playbook that only uses a custom scan_foo Ansible fact module looks similar to this: scan_foo.py: def main(): module = AnsibleModule( argument_spec = dict()) foo = [ { "hello": "world" }, { "foo": "bar" } ] results = dict(ansible_facts=dict(foo=foo)) module.exit_json(**results) main() To use a custom fact module, ensure that it lives in the /library/ subdirectory of the Ansible project used in the scan job template. This fact scan module returns a hard-coded set of facts: [ { "hello": "world" }, { "foo": "bar" } ] For more information, see the Developing modules section of the Ansible documentation. 20.10.5. Fact caching Automation controller can store and retrieve facts on a per-host basis through an Ansible Fact Cache plugin. This behavior is configurable on a per-job template basis. Fact caching is turned off by default but can be enabled to serve fact requests for all hosts in an inventory related to the job running. This enables you to use job templates with --limit while still having access to the entire inventory of host facts. 
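Fact caching is enabled per job template, either by selecting Enable Fact Storage in the job template options or by setting the use_fact_cache field through the REST API. The following curl command is a minimal sketch, assuming a job template with ID 7 and an OAuth2 token stored in the TOKEN environment variable (both hypothetical):

    # Enable fact caching on an existing job template (the ID 7 is a placeholder)
    curl -k -X PATCH \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"use_fact_cache": true}' \
      https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/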
A global timeout setting that the plugin enforces per-host, can be specified (in seconds) by going to Settings and selecting Job settings from the Jobs option: After launching a job that uses fact cache ( use_fact_cache=True ), each host's ansible_facts are all stored by the controller in the job's inventory. The Ansible Fact Cache plugin that ships with automation controller is enabled on jobs with fact cache enabled ( use_fact_cache=True ). When a job that has fact cache enabled ( use_fact_cache=True ) has run, automation controller restores all records for the hosts in the inventory. Any records with update times newer than the currently stored facts per-host are updated in the database. New and changed facts are logged through automation controller's logging facility. Specifically, to the system_tracking namespace or logger. The logging payload includes the following fields: host_name inventory_id ansible_facts ansible facts is a dictionary of all Ansible facts for host_name in the automation controller inventory, inventory_id . Note If a hostname includes a forward slash (/), fact cache does not work for that host. If you have an inventory with 100 hosts and one host has a / in the name, the remaining 99 hosts still collect facts. 20.10.6. Benefits of fact caching Fact caching saves you time over running fact gathering. If you have a playbook in a job that runs against a thousand hosts and forks, you can spend 10 minutes gathering facts across all of those hosts. However, if you run a job on a regular basis, the first run of it caches these facts and the run pulls them from the database. This reduces the runtime of jobs against large inventories, including Smart Inventories. Note Do not modify the ansible.cfg file to apply fact caching. Custom fact caching could conflict with the controller's fact caching feature. You must use the fact caching module that comes with automation controller. You can choose to use cached facts in your job by enabling it in the Options field of the job templates window. To clear facts, run the Ansible clear_facts meta task . The following is an example playbook that uses the Ansible clear_facts meta task. - hosts: all gather_facts: false tasks: - name: Clear gathered facts from all currently targeted hosts meta: clear_facts You can find the API endpoint for fact caching at: http://<controller server name>/api/v2/hosts/x/ansible_facts 20.11. Use Cloud Credentials with a cloud inventory Cloud Credentials can be used when syncing a cloud inventory. They can also be associated with a job template and included in the runtime environment for use by a playbook. The following Cloud Credentials are supported: Openstack Amazon Web Services Google Azure VMware 20.11.1. OpenStack The following sample playbook invokes the nova_compute Ansible OpenStack cloud module and requires credentials: auth_url username password project name These fields are made available to the playbook through the environmental variable OS_CLIENT_CONFIG_FILE , which points to a YAML file written by the controller based on the contents of the cloud credential. 
The following sample playbooks load the YAML file into the Ansible variable space: OS_CLIENT_CONFIG_FILE example: clouds: devstack: auth: auth_url: http://devstack.yoursite.com:5000/v2.0/ username: admin password: your_password_here project_name: demo Playbook example: - hosts: all gather_facts: false vars: config_file: "{{ lookup('env', 'OS_CLIENT_CONFIG_FILE') }}" nova_tenant_name: demo nova_image_name: "cirros-0.3.2-x86_64-uec" nova_instance_name: autobot nova_instance_state: 'present' nova_flavor_name: m1.nano nova_group: group_name: antarctica instance_name: deceptacon instance_count: 3 tasks: - debug: msg="{{ config_file }}" - stat: path="{{ config_file }}" register: st - include_vars: "{{ config_file }}" when: st.stat.exists and st.stat.isreg - name: "Print out clouds variable" debug: msg="{{ clouds|default('No clouds found') }}" - name: "Setting nova instance state to: {{ nova_instance_state }}" local_action: module: nova_compute login_username: "{{ clouds.devstack.auth.username }}" login_password: "{{ clouds.devstack.auth.password }}" 20.11.2. Amazon Web Services Amazon Web Services (AWS) cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup): AWS_ACCESS_KEY_ID AWS-SECRET_ACCESS_KEY Each AWS module implicitly uses these credentials when run through the controller without having to set the aws_access_key_id or aws_secret_access_key module options. 20.11.3. Google Google cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup): GCE_EMAIL GCE_PROJECT GCE_CREDENTIALS_FILE_PATH Each Google module implicitly uses these credentials when run through the controller without having to set the service_account_email , project_id , or pem_file module options. 20.11.4. Azure Azure cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup): AZURE_SUBSCRIPTION_ID AZURE_CERT_PATH Each Azure module implicitly uses these credentials when run via the controller without having to set the subscription_id or management_cert_path module options. 20.11.5. VMware VMware cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup): VMWARE_USER VMWARE_PASSWORD VMWARE_HOST The following sample playbook demonstrates the usage of these credentials: - vsphere_guest: vcenter_hostname: "{{ lookup('env', 'VMWARE_HOST') }}" username: "{{ lookup('env', 'VMWARE_USER') }}" password: "{{ lookup('env', 'VMWARE_PASSWORD') }}" guest: newvm001 from_template: yes template_src: linuxTemplate cluster: MainCluster resource_pool: "/Resources" vm_extra_config: folder: MyFolder 20.12. Provisioning Callbacks Provisioning Callbacks are a feature of automation controller that enable a host to initiate a playbook run against itself, rather than waiting for a user to launch a job to manage the host from the automation controller console. Provisioning Callbacks are only used to run playbooks on the calling host and are meant for cloud bursting. Cloud bursting is a cloud computing configuration that enables a private cloud to access public cloud resources by "bursting" into a public cloud when computing demand spikes. 
Example New instances with a need for client to server communication for configuration, such as transmitting an authorization key, not to run a job against another host. This provides for automatically configuring the following: A system after it has been provisioned by another system (such as AWS auto-scaling, or an OS provisioning system like kickstart or preseed). Launching a job programmatically without invoking the automation controller API directly. The job template launched only runs against the host requesting the provisioning. This is often accessed with a firstboot type script or from cron . 20.12.1. Enabling Provisioning Callbacks Procedure To enable callbacks, check the Provisioning Callbacks checkbox in the job template. This displays Provisioning Callback URL for the job template. Note If you intend to use automation controller's provisioning callback feature with a dynamic inventory, set Update on Launch for the inventory group used in the job template. Callbacks also require a Host Config Key, to ensure that foreign hosts with the URL cannot request configuration. Provide a custom value for Host Config Key. The host key can be reused across multiple hosts to apply this job template against multiple hosts. If you want to control what hosts are able to request configuration, the key may be changed at any time. To callback manually using REST: Procedure Look at the callback URL in the UI, in the form: https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback/ The "7" in the sample URL is the job template ID in automation controller. Ensure that the request from the host is a POST. The following is an example using curl (all on a single line): curl -k -i -H 'Content-Type:application/json' -XPOST -d '{"host_config_key": "redhat"}' \ https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback/ Ensure that the requesting host is defined in your inventory for the callback to succeed. Troubleshooting If automation controller fails to locate the host either by name or IP address in one of your defined inventories, the request is denied. When running a job template in this way, ensure that the host initiating the playbook run against itself is in the inventory. If the host is missing from the inventory, the job template fails with a No Hosts Matched type error message. If your host is not in the inventory and Update on Launch is set for the inventory group automation controller attempts to update cloud based inventory sources before running the callback. Verification Successful requests result in an entry on the Jobs tab, where you can view the results and history. You can access the callback using REST, but the suggested method of using the callback is to use one of the example scripts that ships with automation controller: /usr/share/awx/request_tower_configuration.sh (Linux/UNIX) /usr/share/awx/request_tower_configuration.ps1 (Windows) Their usage is described in the source code of the file by passing the -h flag, as the following shows: ./request_tower_configuration.sh -h Usage: ./request_tower_configuration.sh <options> Request server configuration from Ansible Tower. OPTIONS: -h Show this message -s Controller server (e.g. https://ac.example.com) (required) -k Allow insecure SSL connections and transfers -c Host config key (required) -t Job template ID (required) -e Extra variables This script can retry commands and is therefore a more robust way to use callbacks than a simple curl request. The script retries once per minute for up to ten minutes. 
Note This is an example script. Edit this script if you need more dynamic behavior when detecting failure scenarios, as any non-200 error code may not be a transient error requiring retry. You can use callbacks with dynamic inventory in automation controller. For example, when pulling cloud inventory from one of the supported cloud providers. In these cases, along with setting Update On Launch , ensure that you configure an inventory cache timeout for the inventory source, to avoid hammering of your cloud's API endpoints. Since the request_tower_configuration.sh script polls once per minute for up to ten minutes, a suggested cache invalidation time for inventory (configured on the inventory source itself) would be one or two minutes. Running the request_tower_configuration.sh script from a cron job is not recommended, however, a suggested cron interval is every 30 minutes. Repeated configuration can be handled by scheduling automation controller so that the primary use of callbacks by most users is to enable a base image that is bootstrapped into the latest configuration when coming online. Running at first boot is best practice. First boot scripts are init scripts that typically self-delete, so you set up an init script that calls a copy of the request_tower_configuration.sh script and make that into an auto scaling image. 20.12.2. Passing extra variables to Provisioning Callbacks You can pass extra_vars in Provisioning Callbacks the same way you can in a regular job template. To pass extra_vars , the data sent must be part of the body of the POST as application or JSON, as the content type. Procedure Pass extra variables by using one of these methods: Use the following JSON format as an example when adding your own extra_vars to be passed: '{"extra_vars": {"variable1":"value1","variable2":"value2",...}}' Pass extra variables to the job template call using curl : root@localhost:~USD curl -f -H 'Content-Type: application/json' -XPOST \ -d '{"host_config_key": "redhat", "extra_vars": "{\"foo\": \"bar\"}"}' \ https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback For more information, see Launching Jobs with Curl in the Automation controller Administration Guide . 20.13. Extra variables When you pass survey variables, they are passed as extra variables ( extra_vars ) within automation controller. However, passing extra variables to a job template (as you would do with a survey) can override other variables being passed from the inventory and project. By default, extra_vars are marked as !unsafe unless you specify them on the Job Template's Extra Variables section. These are trusted, because they can only be added by users with enough privileges to add or edit a Job Template. For example, nested variables do not expand when entered as a prompt, as the Jinja brackets are treated as a string. For more information about unsafe variables, see Unsafe or raw strings . Note extra_vars passed to the job launch API are only honored if one of the following is true: They correspond to variables in an enabled survey. ask_variables_on_launch is set to True . Example You have a defined variable for an inventory for debug = true . It is possible that this variable, debug = true , can be overridden in a job template survey. To ensure the variables that you pass are not overridden, ensure they are included by redefining them in the survey. Extra variables can be defined at the inventory, group, and host levels. 
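When ask_variables_on_launch is enabled on a job template, extra variables can also be supplied directly to the launch endpoint rather than through a callback. The following curl command is a minimal sketch, assuming a job template with ID 7, an OAuth2 token stored in the TOKEN environment variable, and illustrative variable names (all hypothetical):

    # Launch job template 7 with extra variables supplied at launch time
    curl -k -X POST \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"extra_vars": {"debug": true, "target_env": "stage"}}' \
      https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/launch/

If neither an enabled survey nor ask_variables_on_launch covers a variable, the value is not honored, as described in the note above.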
If you are specifying the ALLOW_JINJA_IN_EXTRA_VARS parameter, see the Controller Tips and Tricks section of the Automation controller Administration Guide to configure it in the Jobs Settings screen of the controller UI. The job template extra variables dictionary is merged with the survey variables. The following are some simplified examples of extra_vars in YAML and JSON formats: The configuration in YAML format: launch_to_orbit: true satellites: - sputnik - explorer - satcom The configuration in JSON format: { "launch_to_orbit": true, "satellites": ["sputnik", "explorer", "satcom"] } The following table notes the behavior (hierarchy) of variable precedence in automation controller as it compares to variable precedence in Ansible. Table 20.1. Automation controller Variable Precedence Hierarchy (last listed wins) Ansible automation controller role defaults role defaults dynamic inventory variables dynamic inventory variables inventory variables automation controller inventory variables inventory group_vars automation controller group variables inventory host_vars automation controller host variables playbook group_vars playbook group_vars playbook host_vars playbook host_vars host facts host facts registered variables registered variables set facts set facts play variables play variables play vars_prompt (not supported) play vars_files play vars_files role and include variables role and include variables block variables block variables task variables task variables extra variables Job Template extra variables Job Template Survey (defaults) Job Launch extra variables 20.13.1. Relaunch a job template Instead of manually relaunching a job, a relaunch is denoted by setting launch_type to relaunch . The relaunch behavior deviates from the launch behavior in that it does not inherit extra_vars . Job relaunching does not go through the inherit logic. It uses the same extra_vars that were calculated for the job being relaunched. Example You launch a job template with no extra_vars which results in the creation of a job called j1 . Then you edit the job template and add extra_vars (such as adding "{ "hello": "world" }" ). Relaunching j1 results in the creation of j2 , but because there is no inherit logic and j1 has no extra_vars, j2 does not have any extra_vars . If you launch the job template with the extra_vars that you added after the creation of j1 , the relaunch job created ( j3 ) includes the extra_vars. Relaunching j3 results in the creation of j4 , which also includes extra_vars . | [
"- hosts: all vars: scan_use_checksum: false scan_use_recursive: false tasks: - scan_packages: - scan_services: - scan_files: paths: '{{ scan_file_paths }}' get_checksum: '{{ scan_use_checksum }}' recursive: '{{ scan_use_recursive }}' when: scan_file_paths is defined",
"Bootstrap Ubuntu (16.04) --- - name: Get Ubuntu 16, and on ready hosts: all sudo: yes gather_facts: no tasks: - name: install python-simplejson raw: sudo apt-get -y update raw: sudo apt-get -y install python-simplejson raw: sudo apt-get install python-apt Bootstrap Fedora (23, 24) --- - name: Get Fedora ready hosts: all sudo: yes gather_facts: no tasks: - name: install python-simplejson raw: sudo dnf -y update raw: sudo dnf -y install python-simplejson raw: sudo dnf -y install rpm-python",
"scan_foo.py: def main(): module = AnsibleModule( argument_spec = dict()) foo = [ { \"hello\": \"world\" }, { \"foo\": \"bar\" } ] results = dict(ansible_facts=dict(foo=foo)) module.exit_json(**results) main()",
"[ { \"hello\": \"world\" }, { \"foo\": \"bar\" } ]",
"- hosts: all gather_facts: false tasks: - name: Clear gathered facts from all currently targeted hosts meta: clear_facts",
"clouds: devstack: auth: auth_url: http://devstack.yoursite.com:5000/v2.0/ username: admin password: your_password_here project_name: demo",
"- hosts: all gather_facts: false vars: config_file: \"{{ lookup('env', 'OS_CLIENT_CONFIG_FILE') }}\" nova_tenant_name: demo nova_image_name: \"cirros-0.3.2-x86_64-uec\" nova_instance_name: autobot nova_instance_state: 'present' nova_flavor_name: m1.nano nova_group: group_name: antarctica instance_name: deceptacon instance_count: 3 tasks: - debug: msg=\"{{ config_file }}\" - stat: path=\"{{ config_file }}\" register: st - include_vars: \"{{ config_file }}\" when: st.stat.exists and st.stat.isreg - name: \"Print out clouds variable\" debug: msg=\"{{ clouds|default('No clouds found') }}\" - name: \"Setting nova instance state to: {{ nova_instance_state }}\" local_action: module: nova_compute login_username: \"{{ clouds.devstack.auth.username }}\" login_password: \"{{ clouds.devstack.auth.password }}\"",
"- vsphere_guest: vcenter_hostname: \"{{ lookup('env', 'VMWARE_HOST') }}\" username: \"{{ lookup('env', 'VMWARE_USER') }}\" password: \"{{ lookup('env', 'VMWARE_PASSWORD') }}\" guest: newvm001 from_template: yes template_src: linuxTemplate cluster: MainCluster resource_pool: \"/Resources\" vm_extra_config: folder: MyFolder",
"curl -k -i -H 'Content-Type:application/json' -XPOST -d '{\"host_config_key\": \"redhat\"}' https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback/",
"./request_tower_configuration.sh -h Usage: ./request_tower_configuration.sh <options> Request server configuration from Ansible Tower. OPTIONS: -h Show this message -s Controller server (e.g. https://ac.example.com) (required) -k Allow insecure SSL connections and transfers -c Host config key (required) -t Job template ID (required) -e Extra variables",
"'{\"extra_vars\": {\"variable1\":\"value1\",\"variable2\":\"value2\",...}}'",
"root@localhost:~USD curl -f -H 'Content-Type: application/json' -XPOST -d '{\"host_config_key\": \"redhat\", \"extra_vars\": \"{\\\"foo\\\": \\\"bar\\\"}\"}' https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback",
"launch_to_orbit: true satellites: - sputnik - explorer - satcom",
"{ \"launch_to_orbit\": true, \"satellites\": [\"sputnik\", \"explorer\", \"satcom\"] }"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/controller-job-templates |
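The launch and relaunch behavior described above can also be exercised directly against the controller REST API. The following shell sketch is illustrative only: the hostname, OAuth token, job template ID (7), and job ID (1) are placeholders, and the job template must prompt for extra variables on launch for the supplied extra_vars to be applied.

TOKEN="<oauth_token>"
HOST="https://<CONTROLLER_SERVER_NAME>"

# Launch the job template; extra_vars supplied here are merged into the new job.
curl -sk -H "Authorization: Bearer ${TOKEN}" -H 'Content-Type: application/json' \
  -XPOST -d '{"extra_vars": {"launch_to_orbit": true}}' \
  "${HOST}/api/v2/job_templates/7/launch/"

# Relaunch an existing job; no extra_vars are passed because a relaunch reuses
# the extra_vars that were already calculated for the job being relaunched.
curl -sk -H "Authorization: Bearer ${TOKEN}" -XPOST "${HOST}/api/v2/jobs/1/relaunch/"

Comparing the extra_vars of the two resulting jobs illustrates why a relaunch of j1 does not pick up variables that were added to the job template after j1 was created.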
Chapter 3. Certificate profiles (Making Rules for Issuing Certificates) | Chapter 3. Certificate profiles (Making Rules for Issuing Certificates) Red Hat Certificate System provides a customizable framework to apply policies for incoming certificate requests and to control the input request types and output certificate types; these are called certificate profiles . Certificate profiles set the required information for certificate enrollment forms in the Certificate Manager end-entities page. This chapter describes how to configure certificate profiles. 3.1. About certificate profiles A certificate profile defines everything associated with issuing a particular type of certificate, including the authentication method, the authorization method, the default certificate content, constraints for the values of the content, and the contents of the input and output for the certificate profile. Enrollment and renewal requests are submitted to a certificate profile and are then subject to the defaults and constraints set in that certificate profile. These constraints are in place whether the request is submitted through the input form associated with the certificate profile or through other means. The certificate that is issued from a certificate profile request contains the content required by the defaults with the information required by the default parameters. The constraints provide rules for what content is allowed in the certificate. For details about using and customizing certificate profiles, see Section 3.2, "Setting up certificate profiles" . The Certificate System contains a set of default profiles. While the default profiles are created to satisfy most deployments, every deployment can add their own new certificate profiles or modify the existing profiles. Authentication. An authentication method can be specified in every certificate profile. Authorization. An authorization method can be specified in every certificate profile. Profile inputs. Profile inputs are parameters and values that are submitted to the CA when a certificate is requested. Profile inputs include public keys for the certificate request and the certificate subject name requested by the end entity for the certificate. Profile outputs. Profile outputs are parameters and values that specify the format in which to provide the certificate to the end entity. Profile outputs are CMC responses which contain a PKCS#7 certificate chain, when the request was successful. Certificate content. Each certificate defines content information, such as the name of the entity to which it is assigned (the subject name), its signing algorithm, and its validity period. What is included in a certificate is defined in the X.509 standard. With version 3 of the X.509 standard, certificates can also contain extensions. For more information about certificate extensions, see Section B.3, "Standard X.509 v3 certificate extension reference" . All of the information about a certificate profile is defined in the set entry of the profile policy in the profile's configuration file. When multiple certificates are expected to be requested at the same time, multiple set entries can be defined in the profile policy to satisfy the needs of each certificate. Each policy set consists of a number of policy rules and each policy rule describes a field in the certificate content. A policy rule can include the following parts: Profile defaults. These are predefined parameters and allowed values for information contained within the certificate.
Profile defaults include the validity period of the certificate, and what certificate extensions appear for each type of certificate issued. Profile constraints. Constraints set rules or policies for issuing certificates. Among other things, profile constraints include rules to require the certificate subject name to have at least one CN component, to set the validity of a certificate to a maximum of 360 days, to define the allowed grace period for renewal, or to require that the subjectaltname extension is always set to true . 3.1.1. The enrollment profile The parameters for each profile defining the inputs, outputs, and policy sets are listed in more detail in Profile configuration file parameters in the Planning, Installation and Deployment Guide (Common Criteria Edition) . A profile usually contains inputs, policy sets, and outputs, as illustrated in the caCMCUserCert profile in the following example. The first part of a certificate profile is the description. This shows the name, long description, whether it is enabled, and who enabled it. Note The missing auth.instance_id= entry in this profile means that with this profile, authentication is not needed to submit the enrollment request. However, manual approval by an authorized CA agent will be required before the certificate is issued. Next, the profile lists all of the required inputs for the profile: For the caCMCUserCert profile, this defines the certificate request type, which is CMC. Next, the profile must define the output, meaning the format of the final certificate. The only one available is certOutputImpl , which results in a CMC response being returned to the requestor in case of success. The last and largest block of configuration is the policy set for the profile. Policy sets list all of the settings that are applied to the final certificate, like its validity period, its renewal settings, and the actions the certificate can be used for. The policyset.list parameter identifies the block name of the policies that apply to one certificate; the policyset.userCertSet.list lists the individual policies to apply. For example, the sixth policy populates the Key Usage Extension automatically in the certificate, according to the configuration in the policy. It sets the defaults and requires the certificate to use those defaults by setting the constraints: 3.1.2. Certificate extensions: defaults and constraints An extension configures additional information to include in a certificate or rules about how the certificate can be used. These extensions can either be specified in the certificate request or taken from the profile default definition and then enforced by the constraints. A certificate extension is added or identified in a profile by adding the default which corresponds to the extension and sets default values if the certificate extension is not set in the request. For example, the Basic Constraints Extension identifies whether a certificate is a CA signing certificate, the maximum number of subordinate CAs that can be configured under the CA, and whether the extension is critical (required):
For example: NOTE To allow user supplied extensions to be embedded in the certificate requests and ignore the system-defined default in the profile, the profile needs to contain the User Supplied Extension Default, which is described in Section B.1.32, "User Supplied extension default" . 3.1.3. Inputs and outputs Inputs set information that must be submitted to receive a certificate. This can be requester information, a specific format of certificate request, or organizational information. In a Common Criteria environment, set the input.i1.class_id parameter in all enabled profiles to cmcCertReqInputImpl : The outputs configured in the profile define the format of the certificate that is issued. In a Common Criteria environment, set the output.o1.class_id parameter in all enabled profiles to certOutputImpl : In a Common Criteria-compliant Certificate System environment, users access profiles through the /ca/ee/ca/profileSubmitUserSignedCMCFull servlet that is accessed through the end-entities interface. 3.2. Setting up certificate profiles In Certificate System, you can add, delete, and modify enrollment profiles: Using the PKI command-line interface Editing the profile configuration files directly (this is recommended only at time of installation configuration; see Chapter 11 Configuring certificate profiles in the Planning, Installation and Deployment Guide (Common Criteria Edition) . This section provides information on the pki CLI method. 3.2.1. Managing certificate enrollment profiles using the pki command-line interface This section describes how to manage certificate profiles using the pki utility. For further details, see the pki-ca-profile(1) man page. Note Using the raw format is recommended. For details on each attribute and field of the profile, see Chapter 11 Configuring certificate profiles in the Planning, Installation and Deployment Guide (Common Criteria Edition) . 3.2.2. Enabling and disabling a certificate profile Before you can edit a certificate profile, you must disable it. After the modification is complete, you can re-enable the profile. Note Only CA agents can enable and disable certificate profiles. For example, to disable the caCMCECserverCert certificate profile: For example, to enable the caCMCECserverCert certificate profile: 3.2.2.1. Creating a certificate profile in raw format To create a new profile in raw format: Note In raw format, specify the new profile ID as follows: 3.2.2.2. Editing a certificate profile in raw format CA administrators can edit a certificate profile in raw format without manually downloading the configuration file. For example, to edit the caCMCECserverCert profile: This command automatically downloads the profile configuration in raw format and opens it in the VI editor. When you close the editor, the profile configuration is updated on the server. You do not need to restart the CA after editing a profile. Important Before you can edit a profile, disable the profile. For details, see Section 3.2.2, "Enabling and disabling a certificate profile" . Example 3.1. Editing a certificate profile in raw format For example, to edit the caCMCserverCert profile to accept multiple user-supplied extensions: Disable the profile as a CA agent: Edit the profile as a CA administrator: Download and open the profile in the VI editor: Update the configuration to accept the extensions. For details, see Example B.3, "Multiple user supplied extensions in CSR" . Enable the profile as a CA agent: 3.2.2.3. 
Deleting a certificate profile To delete a certificate profile: Important Before you can delete a profile, disable the profile. For details, see Section 3.2.2, "Enabling and disabling a certificate profile" . 3.2.3. Listing certificate enrollment profiles The following pre-defined certificate profiles are ready to use and set up in this environment when the Certificate System CA is installed. These certificate profiles have been designed for the most common types of certificates, and they provide common defaults, constraints, authentication methods, inputs, and outputs. For a list of supported profiles, see Section 8.3, "CMC authentication plugins" To list the available profiles on the command line, use the pki utility. For example: For further details, see the pki-ca-profile(1) man page. Additional information can also be found in Chapter 11 Configuring certificate profiles in the Planning, Installation and Deployment Guide (Common Criteria Edition) . 3.2.4. Displaying details of a certificate enrollment profile For example, to display a specific certificate profile, such as caECFullCMCUserSignedCert : For example, to display a specific certificate profile, such as caECFullCMCUserSignedCert , in raw format: For further details, see the pki-ca-profile(1) man page. 3.3. Defining key defaults in profiles When creating certificate profiles, the Key Default must be added before the Subject Key Identifier Default . Certificate System processes the key constraints in the Key Default before creating or applying the Subject Key Identifier Default, so if the key has not been processed yet, setting the key in the subject name fails. For example, an object-signing profile may define both defaults: In the policyset list, then, the Key Default ( p11 ) must be listed before the Subject Key Identifier Default ( p3 ). 3.4. Configuring profiles to enable renewal This section discusses how to set up profiles for certificate renewals. Renewing a certificate regenerates the certificate using the same public key as the original certificate. Renewing a certificate can be preferable to simply generating new keys and installing new certificates; for example, if a new CA signing certificate is created, all of the certificates which that CA issued and signed must be reissued. If the CA signing certificate is renewed, then all of the issued certificates are still valid. A renewed certificate is identical to the original, only with an updated validity period and expiration date. This makes renewing certificates a much simpler and cleaner option for handling the expiration of many kinds of certificates, especially CA signing certificates. For more information on how to renew certificates, see Section 5.4, "Renewing certificates" . 3.4.1. The renewal process There are two methods of renewing a certificate: Regenerating the certificate takes the original key, profile, and request of the certificate and recreates a new certificate with a new validity period and expiration date using the identical key. Re-keying a certificate submits a certificate request through the original profile with the same information, so that a new key pair is generated. A profile that allows renewal is often accompanied by the renewGracePeriodConstraint entry. For example: NOTE The Renew Grace Period Constraint should be set in the original enrollment profile. This defines the amount of time before and after the certificate's expiration date when the user is allowed to renew the certificate. 
There are only a few examples of these in the default profiles, and they are mostly not enabled by default. This entry is not required; however, if no grace period is set, it is only possible to renew a certificate on the date of its expiration. 3.4.2. Renewing using the same key A profile that allows the same key to be submitted for renewal has the allowSameKeyRenewal parameter set to true in the uniqueKeyConstraint entry. For example: 3.4.3. Renewing using a new key To renew a certificate with a new key, use the same profile with a new key. Certificate System uses the subjectDN from the user signing certificate that signs the request for the new certificate. 3.5. Setting the signing algorithms for certificates The CA's signing certificate can sign the certificates it issues with any public key algorithm supported by the CA. Red Hat Certificate System supports ECC and RSA. Both public key algorithms support different cipher suites, algorithms used to encrypt and decrypt data. Each certificate enrollment profile can define which cipher suite the CA should use to sign certificates processed through that profile. If no signing algorithm is set, then the profile uses the default signing algorithm set at installation (see Changing the signing algorithms ). 3.6. Managing CA-related profiles Certificate profiles and extensions must be used to set rules on how subordinate CAs can issue certificates. There are two parts to this: Managing the CA signing certificate Defining issuance rules 3.6.1. Setting restrictions on CA certificates When a subordinate CA is created, the root CA can impose limits or restrictions on the subordinate CA. For example, the root CA can dictate the maximum depth of valid certification paths (the number of subordinate CAs allowed to be chained below the new CA) by setting the pathLenConstraint field of the Basic Constraints extension in the CA signing certificate. A certificate chain generally consists of an entity certificate, zero or more intermediate CA certificates, and a root CA certificate. The root CA certificate is either self-signed or signed by an external trusted CA. Once issued, the root CA certificate is loaded into a certificate database as a trusted CA. An exchange of certificates takes place when performing a TLS handshake, when sending an S/MIME message, or when sending a signed object. As part of the handshake, the sender is expected to send the subject certificate and any intermediate CA certificates needed to link the subject certificate to the trusted root. For certificate chaining to work properly the certificates should have the following properties: CA certificates must have the Basic Constraints extension. CA certificates must have the keyCertSign bit set in the Key Usage extension. When the CAs generate new keys, they must add the Authority Key Identifier extension to all subject certificates. This extension helps distinguish the certificates from the older CA certificates. The CA certificates must contain the Subject Key Identifier extension. For more information on certificates and their extensions, see Internet X.509 Public Key Infrastructure - Certificate and Certificate Revocation List (CRL) Profile (RFC 5280) , available at RFC 5280 . These extensions can be configured through the certificate profile enrollment pages. By default, the CA contains the required and reasonable configuration settings, but it is possible to customize these settings. 
Note This procedure describes editing the CA certificate profile used by a CA to issue CA certificates to its subordinate CAs. The profile that is used when a CA instance is first configured is /var/lib/pki/instance_name/ca/conf/caCert.profile . This profile cannot be edited in pkiconsole (since it is only available before the instance is configured). It is possible to edit the policies for this profile in the template file before the CA is configured using a text editor. To modify the default in the CA signing certificate profile used by a CA: If the profile is currently enabled, it must be disabled before it can be edited. Open the agent services page, select Manage Certificate Profiles from the left navigation menu, select the profile, and click Disable profile . Open the CA Console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. In the left navigation tree of the Configuration tab, select Certificate Manager , then Certificate Profiles . Select caCACert, or the appropriate CA signing certificate profile, from the right window, and click Edit/View . In the Policies tab of the Certificate Profile Rule Editor , select and edit the Key Usage or Extended Key Usage Extension Default if it exists or add it to the profile. Select the Key Usage or Extended Key Usage Extension Constraint, as appropriate, for the default. Set the default values for the CA certificates. For more information, see Section B.1.13, "Key Usage extension default" and Section B.1.8, "Extended Key Usage extension default" . Set the constraint values for the CA certificates. There are no constraints to be set for a Key Usage extension; for an Extended Key Usage extension, set the appropriate OID constraints for the CA. For more information, see Section B.1.8, "Extended Key Usage extension default" . When the changes have been made to the profile, log into the agent services page again, and re-enable the certificate profile. For more information on modifying certificate profiles, see Section 3.2, "Setting up certificate profiles" . 3.6.2. Changing the restrictions for CAs on issuing certificates The restrictions on the certificates issued are set by default after the subsystem is configured. These include: Whether certificates can be issued with validity periods longer than the CA signing certificate. The default is to disallow this. The signing algorithm used to sign certificates. The serial number range the CA is able to use to issue certificates. Subordinate CAs have constraints for the validity periods, types of certificates, and the types of extensions which they can issue. It is possible for a subordinate CA to issue certificates that violate these constraints, but a client authenticating a certificate that violates those constraints will not accept that certificate. Check the constraints set on the CA signing certificate before changing the issuing rules for a subordinate CA. To change the certificate issuance rules: Open the Certificate System Console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. 
Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. Select the Certificate Manager item in the left navigation tree of the Configuration tab. Figure 3.1. The General Settings tab in non-subordinate CAs by default By default, in non-cloned CAs, the General Settings tab of the Certificate Manager menu item contains these options: Override validity nesting requirement. This checkbox sets whether the Certificate Manager can issue certificates with validity periods longer than the CA signing certificate validity period. If this checkbox is not selected and the CA receives a request with validity period longer than the CA signing certificate's validity period, it automatically truncates the validity period to end on the day the CA signing certificate expires. Certificate Serial Number. These fields display the serial number range for certificates issued by the Certificate Manager. The server assigns the serial number in the serial number field to the certificate it issues and the number in the Ending serial number to the last certificate it issues. The serial number range allows multiple CAs to be deployed and balances the number of certificates each CA issues. The combination of an issuer name and a serial number uniquely identifies a certificate. NOTE The serial number ranges with cloned CAs are fluid. All cloned CAs share a common configuration entry which defines the available range. When one CA starts running low on available numbers, it checks this configuration entry and claims the range. The entry is automatically updated, so that the CA gets a new range. The ranges are defined in begin*Number and end*Number attributes, with separate ranges defined for requests and certificate serial numbers. For example: Serial number management can be enabled for CAs which are not cloned. However, by default, serial number management is disabled unless a system is cloned, when it is automatically enabled. The serial number range cannot be updated manually through the console. The serial number ranges are read-only fields. Default Signing Algorithm. Specifies the signing algorithm the Certificate Manager uses to sign certificates. The options are SHA256withRSA , and SHA512withRSA , if the CA's signing key type is RSA. The signing algorithm specified in the certificate profile configuration overrides the algorithm set here. By default, in cloned CAs, the General Settings tab of the Certificate Manager menu item contains these options: Enable serial number management Enable random certificate serial numbers Select both check boxes. Figure 3.2. The General Settings tab in cloned CAs by default Click Save . 3.6.3. Using random certificate serial numbers Red Hat Certificate System contains a serial number range management for requests, certificates, and replica IDs. This allows the automation of cloning when installing Identity Management (IdM). There are these ways to reduce the likelihood of hash-based attacks: making part of the certificate serial number unpredictable to the attacker adding a randomly chosen component to the identity making the validity dates unpredictable to the attacker by skewing each one forwards or backwards The random certificate serial number assignment method adds a randomly chosen component to the identity. 
This method: works with cloning allows resolving conflicts is compatible with the current serial number management method is compatible with the current workflows for administrators, agents, and end entities fixes the existing bugs in sequential serial number management Note Administrators must enable random certificate serial numbers. Enabling random certificate serial numbers You can enable automatic serial number range management either from the command line or from the console UI. To enable automatic serial number management from the console UI: Tick the Enable serial number management option in the General Settings tab. Figure 3.3. The General Settings tab when random serial number assignment is enabled Tick the Enable random certificate serial numbers option. 3.7. Managing subject names and subject alternative names The subject name of a certificate is a distinguished name (DN) that contains identifying information about the entity to which the certificate is issued. This subject name can be built from standard LDAP directory components, such as common names and organizational units. These components are defined in X.500. In addition to - or even in place of - the subject name, the certificate can have a subject alternative name , which is a kind of extension set for the certificate that includes additional information that is not defined in X.500. The naming components for both subject names and subject alternative names can be customized. IMPORTANT If the subject name is empty, then the Subject Alternative Name extension must be present and marked critical. 3.7.1. Using the requester CN or UID in the subject name The cn or uid value from a certificate request can be used to build the subject name of the issued certificate. This section demonstrates a profile that requires the naming attribute (CN or UID) being specified in the Subject Name Constraint to be present in the certificate request. If the naming attribute is missing, the request is rejected. There are two parts to this configuration: The CN or UID format is set in the pattern configuration in the Subject Name Constraint. The format of the subject DN, including the CN or UID token and the specific suffix for the certificate, is set in the Subject Name Default. For example, to use the CN in the subject DN: In this example, if a request comes in with the CN of cn=John Smith , then the certificate will be issued with a subject DN of cn=John Smith,DC=example, DC=com . If the request comes in but it has a UID of uid=jsmith and no CN, then the request is rejected. The same configuration is used to pull the requester UID into the subject DN: The format for the pattern parameter is covered in Section B.2.11, "Subject Name constraint" and Section B.1.27, "Subject Name default" . 3.7.2. Inserting LDAP directory attribute values and other information into the subject alt name Information from an LDAP directory or that was submitted by the requester can be inserted into the subject alternative name of the certificate by using matching variables in the Subject Alt Name Extension Default configuration. This default sets the type (format) of information and then the matching pattern (variable) to use to retrieve the information. For example: This inserts the requester's email as the first CN component in the subject alt name. To use additional components, increment the Type_ , Pattern_ , and Enable_ values numerically, such as Type_1 . 
Configuring the subject alt name is detailed in Section B.1.23, "Subject Alternative Name extension default" , as well. To insert LDAP components into the subject alt name of the certificate: Inserting LDAP attribute values requires enabling the user directory authentication plugin, SharedSecret . Open the CA Console. Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. Select Authentication in the left navigation tree. In the Authentication Instance tab, click Add , and add an instance of the SharedSecret authentication plugin. Enter the following information: Save the new plugin instance. For information on setting a CMC shared token, see Section 8.4.2, "Setting a CMC Shared Secret" . The ldapStringAttributes parameter instructs the authentication plugin to read the value of the mail attribute from the user's LDAP entry and put that value in the certificate request. When the value is in the request, the certificate profile policy can be set to insert that value for an extension value. The format for the dnpattern parameter is covered in Section B.2.11, "Subject Name constraint" and Section B.1.27, "Subject Name default" . To enable the CA to insert the LDAP attribute value in the certificate extension, edit the profile's configuration file, and insert a policy set parameter for an extension. For example, to insert the mail attribute value in the Subject Alternative Name extension in the caFullCMCSharedTokenCert profile, change the following code: For more details about editing a profile, see Section 3.2.2.2, "Editing a certificate profile in raw format" . Restart the CA. For this example, certificates submitted through the caFullCMCSharedTokenCert profile enrollment form will have the Subject Alternative Name extension added with the value of the requester's mail LDAP attribute. For example: There are many attributes which can be automatically inserted into certificates by being set as a token ( $X$ ) in any of the Pattern_ parameters in the policy set. The common tokens are listed in Table 3.1, "Variables used to populate certificates" , and the default profiles contain examples for how these tokens are used. Table 3.1. Variables used to populate certificates (policy set token: description):
$request.auth_token.cn[0]$ : The LDAP common name ( cn ) attribute of the user who requested the certificate.
$request.auth_token.mail[0]$ : The value of the LDAP email ( mail ) attribute of the user who requested the certificate.
$request.auth_token.tokencertsubject$ : The certificate subject name.
$request.auth_token.uid$ : The LDAP user ID ( uid ) attribute of the user who requested the certificate.
$request.auth_token.userdn$ : The user DN of the user who requested the certificate.
$request.auth_token.userid$ : The value of the user ID attribute for the user who requested the certificate.
$request.uid$ : The value of the user ID attribute for the user who requested the certificate.
$request.requestor_email$ : The email address of the person who submitted the request.
$request.requestor_name$ : The person who submitted the request.
$request.upn$ : The Microsoft UPN. This has the format (UTF8String)1.3.6.1.4.1.311.20.2.3,$request.upn$ .
$server.source$ : Instructs the server to generate a version 4 UUID (random number) component in the subject name. This always has the format (IA5String)1.2.3.4,$server.source$ .
$request.auth_token.user$ : Used when the request was submitted by TPS. The TPS subsystem trusted manager who requested the certificate.
$request.subject$ : Used when the request was submitted by TPS. The subject name DN of the entity to which TPS has resolved and requested for. For example, cn=John.Smith.123456789,o=TMS Org
3.7.3. Using the CN attribute in the SAN extension Several client applications and libraries no longer support using the Common Name (CN) attribute of the Subject DN for domain name validation, which has been deprecated in RFC 2818 . Instead, these applications and libraries use the dNSName Subject Alternative Name (SAN) value in the certificate request. Certificate System copies the CN only if it matches the preferred name syntax according to RFC 1034 Section 3.5 and has more than one component. Additionally, existing SAN values are preserved. For example, the dNSName value based on the CN is appended to existing SANs. To configure Certificate System to automatically use the CN attribute in the SAN extension, edit the certificate profile used to issue the certificates. For example: Disable the profile: Edit the profile: Add the following configuration with a unique set number for the profile. For example: The example uses 12 as the set number. Append the new policy set number to the policyset.userCertSet.list parameter. For example: Save the profile. Enable the profile: Note All default server profiles contain the commonNameToSANDefaultImpl default. 3.7.4. Accepting SAN extensions from a CSR In certain environments, administrators want to allow specifying Subject Alternative Name (SAN) extensions in Certificate Signing Request (CSR). 3.7.4.1. Configuring a profile to retrieve SANs from a CSR To allow retrieving SANs from a CSR, use the User Extension Default. For details, see Section B.1.32, "User Supplied extension default" . Note A SAN extension can contain one or more SANs. To accept SANs from a CSR, add the following default and constraint to a profile, such as caCMCECserverCert : 3.7.4.2. Generating a CSR with SANs For example, to generate a CSR with two SANs using the certutil utility: After generating the CSR, follow the steps described in Section 5.3.1, "The CMC enrollment process" to complete the CMC enrollment. | [
"desc=This certificate profile is for enrolling user certificates by using the CMC certificate request with CMC Signature authentication. visible=true enable=true enableBy=admin name=Signed CMC-Authenticated User Certificate Enrollment",
"input.list=i1 input.i1.class_id=cmcCertReqInputImp",
"output.list=o1 output.o1.class_id=certOutputImpl",
"policyset.list=userCertSet policyset.userCertSet.list=1,10,2,3,4,5,6,7,8,9 policyset.userCertSet.6.constraint.class_id=keyUsageExtConstraintImpl policyset.userCertSet.6.constraint.name=Key Usage Extension Constraint policyset.userCertSet.6.constraint.params.keyUsageCritical=true policyset.userCertSet.6.constraint.params.keyUsageDigitalSignature=true policyset.userCertSet.6.constraint.params.keyUsageNonRepudiation=true policyset.userCertSet.6.constraint.params.keyUsageDataEncipherment=false policyset.userCertSet.6.constraint.params.keyUsageKeyEncipherment=true policyset.userCertSet.6.constraint.params.keyUsageKeyAgreement=false policyset.userCertSet.6.constraint.params.keyUsageKeyCertSign=false policyset.userCertSet.6.constraint.params.keyUsageCrlSign=false policyset.userCertSet.6.constraint.params.keyUsageEncipherOnly=false policyset.userCertSet.6.constraint.params.keyUsageDecipherOnly=false policyset.userCertSet.6.default.class_id=keyUsageExtDefaultImpl policyset.userCertSet.6.default.name=Key Usage Default policyset.userCertSet.6.default.params.keyUsageCritical=true policyset.userCertSet.6.default.params.keyUsageDigitalSignature=true policyset.userCertSet.6.default.params.keyUsageNonRepudiation=true policyset.userCertSet.6.default.params.keyUsageDataEncipherment=false policyset.userCertSet.6.default.params.keyUsageKeyEncipherment=true policyset.userCertSet.6.default.params.keyUsageKeyAgreement=false policyset.userCertSet.6.default.params.keyUsageKeyCertSign=false policyset.userCertSet.6.default.params.keyUsageCrlSign=false policyset.userCertSet.6.default.params.keyUsageEncipherOnly=false policyset.userCertSet.6.default.params.keyUsageDecipherOnly=false",
"policyset.caCertSet.5.default.name=Basic Constraints Extension Default policyset.caCertSet.5.default.params.basicConstraintsCritical=true policyset.caCertSet.5.default.params.basicConstraintsIsCA=true policyset.caCertSet.5.default.params.basicConstraintsPathLen=-1",
"policyset.caCertSet.5.constraint.class_id=basicConstraintsExtConstraintImpl policyset.caCertSet.5.constraint.name=Basic Constraint Extension Constraint policyset.caCertSet.5.constraint.params.basicConstraintsCritical=true policyset.caCertSet.5.constraint.params.basicConstraintsIsCA=true policyset.caCertSet.5.constraint.params.basicConstraintsMinPathLen=-1 policyset.caCertSet.5.constraint.params.basicConstraintsMaxPathLen=-1",
"input.i1.class_id=cmcCertReqInputImpl",
"output.o1.class_id=CertOutputImpl",
"pki -c password -n caagent ca-profile-disable caCMCECserverCert",
"pki -c password -n caagent ca-profile-enable caCMCECserverCert",
"pki -c password -n caadmin ca-profile-add profile_name.cfg --raw",
"profileId=profile_name",
"pki -c password -n caadmin ca-profile-edit caCMCECserverCert",
"pki -c password -n caagemt ca-profile-disable caCMCserverCert",
"pki -c password -n caadmin ca-profile-edit caCMCserverCert",
"pki -c password -n caagent ca-profile-enable caCMCserverCert",
"pki -c password -n caadmin ca-profile-del profile_name",
"pki -c password -n caadmin ca-profile-find ------------------- 59 entries matched ------------------- Profile ID: caCMCserverCert Name: Server Certificate Enrollment using CMC Description: This certificate profile is for enrolling server certificates using CMC. Profile ID: caCMCECserverCert Name: Server Certificate wth ECC keys Enrollment using CMC Description: This certificate profile is for enrolling server certificates with ECC keys using CMC. Profile ID: caCMCECsubsystemCert Name: Subsystem Certificate Enrollment with ECC keys using CMC Description: This certificate profile is for enrolling subsystem certificates with ECC keys using CMC. Profile ID: caCMCsubsystemCert Name: Subsystem Certificate Enrollment using CMC Description: This certificate profile is for enrolling subsystem certificates using CMC. ----------------------------------- Number of entries returned 20",
"pki -c password -n caadmin ca-profile-show caECFullCMCUserSignedCert ----------------------------------- Profile \"caECFullCMCUserSignedCert\" ----------------------------------- Profile ID: caECFullCMCUserSignedCert Name: User-Signed CMC-Authenticated User Certificate Enrollment Description: This certificate profile is for enrolling user certificates with EC keys by using the CMC certificate request with non-agent user CMC authentication. Name: Certificate Request Input Class: cmcCertReqInputImpl Attribute Name: cert_request Attribute Description: Certificate Request Attribute Syntax: cert_request Name: Certificate Output Class: certOutputImpl Attribute Name: pretty_cert Attribute Description: Certificate Pretty Print Attribute Syntax: pretty_print Attribute Name: b64_cert Attribute Description: Certificate Base-64 Encoded Attribute Syntax: pretty_print",
"pki -c password -n caadmin ca-profile-show caECFullCMCUserSignedCert --raw #Wed Jul 25 14:41:35 PDT 2018 auth.instance_id=CMCUserSignedAuth policyset.cmcUserCertSet.1.default.params.name= policyset.cmcUserCertSet.4.default.class_id=authorityKeyIdentifierExtDefaultImpl policyset.cmcUserCertSet.6.default.params.keyUsageKeyCertSign=false policyset.cmcUserCertSet.10.default.class_id=noDefaultImpl policyset.cmcUserCertSet.10.constraint.name=Renewal Grace Period Constraint output.o1.class_id=certOutputImpl",
"policyset.set1.p3.constraint.class_id=noConstraintImpl policyset.set1.p3.constraint.name=No Constraint policyset.set1.p3.default.class_id=subjectKeyIdentifierExtDefaultImpl policyset.set1.p3.default.name=Subject Key Identifier Default policyset.set1.p11.constraint.class_id=keyConstraintImpl policyset.set1.p11.constraint.name=Key Constraint policyset.set1.p11.constraint.params.keyType=RSA policyset.set1.p11.constraint.params.keyParameters=1024,2048,3072,4096 policyset.set1.p11.default.class_id=userKeyDefaultImpl policyset.set1.p11.default.name=Key Default",
"policyset.set1.list=p1,p2,p11,p3,p4,p5,p6,p7,p8,p9,p10",
"policyset.cmcUserCertSet.10.constraint.class_id=renewGracePeriodConstraintImpl policyset.cmcUserCertSet.10.constraint.name=Renewal Grace Period Constraint policyset.cmcUserCertSet.10.constraint.params.renewal.graceBefore=30 policyset.cmcUserCertSet.10.constraint.params.renewal.graceAfter=30 policyset.cmcUserCertSet.10.default.class_id=noDefaultImpl policyset.cmcUserCertSet.10.default.name=No Default",
"policyset.cmcUserCertSet.9.constraint.class_id=uniqueKeyConstraintImpl policyset.cmcUserCertSet.9.constraint.name=Unique Key Constraint policyset.cmcUserCertSet.9.constraint.params.allowSameKeyRenewal=true policyset.cmcUserCertSet.9.default.class_id=noDefaultImpl policyset.cmcUserCertSet.9.default.name=No Default",
"pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ca",
"pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ca",
"dbs.beginRequestNumber=1 dbs.beginSerialNumber=1 dbs.enableSerialManagement=true dbs.endRequestNumber=9980000 dbs.endSerialNumber=ffe0000 dbs.ldap=internaldb dbs.newSchemaEntryAdded=true dbs.replicaCloneTransferNumber=5",
"policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl policyset.serverCertSet.1.constraint.name=Subject Name Constraint policyset.serverCertSet.1.constraint.params. pattern=CN=[^,]+,.+ policyset.serverCertSet.1.constraint.params.accept=true policyset.serverCertSet.1.default.class_id=subjectNameDefaultImpl policyset.serverCertSet.1.default.name=Subject Name Default policyset.serverCertSet.1.default.params. name=CN=USDrequest.req_subject_name.cnUSD,DC=example, DC=com",
"policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl policyset.serverCertSet.1.constraint.name=Subject Name Constraint policyset.serverCertSet.1.constraint.params. pattern=UID=[^,]+,.+ policyset.serverCertSet.1.constraint.params.accept=true policyset.serverCertSet.1.default.class_id=subjectNameDefaultImpl policyset.serverCertSet.1.default.name=Subject Name Default policyset.serverCertSet.1.default.params. name=UID=USDrequest.req_subject_name.uidUSD,DC=example, DC=com",
"policyset.userCertSet.8.default.class_id=subjectAltNameExtDefaultImpl policyset.userCertSet.8.default.name=Subject Alt Name Constraint policyset.userCertSet.8.default.params.subjAltNameExtCritical=false policyset.userCertSet.8.default.params.subjAltExtType_0=RFC822Name policyset.userCertSet.8.default.params.subjAltExtPattern_0=USDrequest.requestor_emailUSD policyset.userCertSet.8.default.params.subjAltExtGNEnable_0=true",
"pkiconsole -d nssdb -n 'optional client cert nickname' https://server.example.com:8443/ca",
"Authentication InstanceID=SharedToken shrTokAttr=shrTok ldap.ldapconn.host=server.example.com ldap.ldapconn.port=636 ldap.ldapconn.secureConn=true ldap.ldapauth.bindDN=cn=Directory Manager password=password ldap.ldapauth.authtype=BasicAuth ldap.basedn=ou=People,dc=example,dc=org",
"policyset.setID.8.default.params. subjAltExtPattern_0=USDrequest.auth_token.mail[0]USD",
"systemctl restart pki-tomcatd-nuxwdog@instance_name.service",
"Identifier: Subject Alternative Name - 2.5.29.17 Critical: no Value: RFC822Name: [email protected]",
"pki -c password -d /administrator_nssdb_directory/ -p 8443 -n administrator_cert_nickname ca-profile-disable profile_name",
"pki -c password -d /administrator_nssdb_directory/ -p 8443 -n administrator_cert_nickname ca-profile-edit profile_name",
"policyset.serverCertSet.12.constraint.class_id=noConstraintImpl policyset.serverCertSet.12.constraint.name=No Constraint policyset.serverCertSet.12.default.class_id=commonNameToSANDefaultImpl policyset.serverCertSet.12.default.name=Copy Common Name to Subject",
"policyset.userCertSet.list=1,10,2,3,4,5,6,7,8,9,12",
"pki -c password -d /administrator_nssdb_directory/ -p 8443 -n administrator_nickname ca-profile-enable profile_name",
"prefix.constraint.class_id=noConstraintImpl prefix.constraint.name=No Constraint prefix.default.class_id=userExtensionDefaultImpl prefix.default.name=User supplied extension in CSR prefix.default.params.userExtOID=2.5.29.17",
"certutil -R -k ec -q nistp256 -d . -s \"cn=Example Multiple SANs\" --extSAN dns:www.example.com,dns:www.example.org -a -o request.csr.p10"
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/certificate_profiles |
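The profile management tasks in this chapter follow the same disable, edit, and enable cycle. The following shell sketch simply strings together the pki commands shown above; the profile name and the caagent and caadmin client certificate nicknames are illustrative placeholders taken from the examples in this chapter.

PROFILE=caCMCserverCert

# 1. A CA agent disables the profile so that it can be modified.
pki -c password -n caagent ca-profile-disable "${PROFILE}"

# 2. A CA administrator edits the raw profile configuration; the command opens
#    the configuration in the VI editor and uploads it when the editor is closed.
pki -c password -n caadmin ca-profile-edit "${PROFILE}"

# 3. Optionally review the stored configuration in raw format.
pki -c password -n caadmin ca-profile-show "${PROFILE}" --raw

# 4. A CA agent re-enables the profile; no CA restart is required after editing.
pki -c password -n caagent ca-profile-enable "${PROFILE}"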
Chapter 10. Troubleshooting | Chapter 10. Troubleshooting The OpenTelemetry Collector offers multiple ways to measure its health as well as investigate data ingestion issues. 10.1. Collecting diagnostic data from the command line When submitting a support case, it is helpful to include diagnostic information about your cluster to Red Hat Support. You can use the oc adm must-gather tool to gather diagnostic data for resources of various types, such as OpenTelemetryCollector , Instrumentation , and the created resources like Deployment , Pod , or ConfigMap . The oc adm must-gather tool creates a new pod that collects this data. Procedure From the directory where you want to save the collected data, run the oc adm must-gather command to collect the data: $ oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- \ /usr/bin/must-gather --operator-namespace <operator_namespace> 1 1 The default namespace where the Operator is installed is openshift-opentelemetry-operator . Verification Verify that the new directory is created and contains the collected data. 10.2. Getting the OpenTelemetry Collector logs You can get the logs for the OpenTelemetry Collector as follows. Procedure Set the relevant log level in the OpenTelemetryCollector custom resource (CR): config: service: telemetry: logs: level: debug 1 1 Collector's log level. Supported values include info , warn , error , or debug . Defaults to info . Use the oc logs command or the web console to retrieve the logs. 10.3. Exposing the metrics The OpenTelemetry Collector exposes metrics about the data volumes it has processed. The following metrics are for spans, although similar metrics are exposed for metrics and logs signals: otelcol_receiver_accepted_spans The number of spans successfully pushed into the pipeline. otelcol_receiver_refused_spans The number of spans that could not be pushed into the pipeline. otelcol_exporter_sent_spans The number of spans successfully sent to the destination. otelcol_exporter_enqueue_failed_spans The number of spans that failed to be added to the sending queue. The Operator creates a <cr_name>-collector-monitoring telemetry service that you can use to scrape the metrics endpoint. Procedure Enable the telemetry service by adding the following lines in the OpenTelemetryCollector custom resource (CR): # ... config: service: telemetry: metrics: address: ":8888" 1 # ... 1 The address at which the internal collector metrics are exposed. Defaults to :8888 . Retrieve the metrics by running the following command, which port-forwards to the Collector pod: $ oc port-forward <collector_pod> In the OpenTelemetryCollector CR, set the enableMetrics field to true to scrape internal metrics: apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: # ... mode: deployment observability: metrics: enableMetrics: true # ... Depending on the deployment mode of the OpenTelemetry Collector, the internal metrics are scraped by using PodMonitors or ServiceMonitors . Note Alternatively, if you do not set the enableMetrics field to true , you can access the metrics endpoint at http://localhost:8888/metrics . On the Observe page in the web console, enable User Workload Monitoring to visualize the scraped metrics. Note Not all processors expose the required metrics. In the web console, go to Observe Dashboards and select the OpenTelemetry Collector dashboard from the drop-down list to view it.
Tip You can filter the visualized data such as spans or metrics by the Collector instance, namespace, or OpenTelemetry components such as processors, receivers, or exporters. 10.4. Debug Exporter You can configure the Debug Exporter to export the collected data to the standard output. Procedure Configure the OpenTelemetryCollector custom resource as follows: config: exporters: debug: verbosity: detailed service: pipelines: traces: exporters: [debug] metrics: exporters: [debug] logs: exporters: [debug] Use the oc logs command or the web console to view the data that the Debug Exporter writes to the standard output. 10.5. Using the Network Observability Operator for troubleshooting You can debug the traffic between your observability components by visualizing it with the Network Observability Operator. Prerequisites You have installed the Network Observability Operator as explained in "Installing the Network Observability Operator". Procedure In the OpenShift Container Platform web console, go to Observe Network Traffic Topology . Select Namespace to filter the workloads by the namespace in which your OpenTelemetry Collector is deployed. Use the network traffic visuals to troubleshoot possible issues. See "Observing the network traffic from the Topology view" for more details. Additional resources Installing the Network Observability Operator Observing the network traffic from the Topology view 10.6. Troubleshooting the instrumentation To troubleshoot the instrumentation, look for any of the following issues: Issues with instrumentation injection into your workload Issues with data generation by the instrumentation libraries 10.6.1. Troubleshooting instrumentation injection into your workload To troubleshoot instrumentation injection, you can perform the following activities: Checking if the Instrumentation object was created Checking if the init-container started Checking if the resources were deployed in the correct order Searching for errors in the Operator logs Double-checking the pod annotations Procedure Run the following command to verify that the Instrumentation object was successfully created: $ oc get instrumentation -n <workload_project> 1 1 The namespace where the instrumentation was created. Run the following command to verify that the opentelemetry-auto-instrumentation init-container successfully started, which is a prerequisite for instrumentation injection into workloads: $ oc get events -n <workload_project> 1 1 The namespace where the instrumentation is injected for workloads. Example output ... Created container opentelemetry-auto-instrumentation ... Started container opentelemetry-auto-instrumentation Verify that the resources were deployed in the correct order for the auto-instrumentation to work correctly. The correct order is to deploy the Instrumentation custom resource (CR) before the application. For information about the Instrumentation CR, see the section "Configuring the instrumentation". Note When the pod starts, the Red Hat build of OpenTelemetry Operator checks the Instrumentation CR for annotations containing instructions for injecting auto-instrumentation. Generally, the Operator then adds an init-container to the application's pod that injects the auto-instrumentation and environment variables into the application's container. If the Instrumentation CR is not available to the Operator when the application is deployed, the Operator is unable to inject the auto-instrumentation. Fixing the order of deployment requires the following steps: Update the instrumentation settings.
Delete the instrumentation object. Redeploy the application. Run the following command to inspect the Operator logs for instrumentation errors: $ oc logs -l app.kubernetes.io/name=opentelemetry-operator --container manager -n openshift-opentelemetry-operator --follow Troubleshoot the pod annotations for the instrumentation of a specific programming language. See the required annotation fields and values in "Configuring the instrumentation". Verify that the application pods that you are instrumenting are labeled with the correct annotations and that the appropriate auto-instrumentation settings have been applied. Example Example command to get pod annotations for an instrumented Python application $ oc get pods -n <workload_project> -o jsonpath='{range .items[?(@.metadata.annotations["instrumentation.opentelemetry.io/inject-python"]=="true")]}{.metadata.name}{"\n"}{end}' Verify that the annotation applied to the instrumentation object is correct for the programming language that you are instrumenting. If there are multiple instrumentations in the same namespace, specify the name of the Instrumentation object in their annotations. Example If the Instrumentation object is in a different namespace, specify the namespace in the annotation. Example Verify that the OpenTelemetryCollector custom resource specifies the auto-instrumentation annotations under spec.template.metadata.annotations . If the auto-instrumentation annotations are in spec.metadata.annotations instead, move them into spec.template.metadata.annotations . 10.6.2. Troubleshooting telemetry data generation by the instrumentation libraries You can troubleshoot telemetry data generation by the instrumentation libraries by checking the endpoint, looking for errors in your application logs, and verifying that the Collector is receiving the telemetry data. Procedure Verify that the instrumentation is transmitting data to the correct endpoint: $ oc get instrumentation <instrumentation_name> -n <workload_project> -o jsonpath='{.spec.endpoint}' The default endpoint http://localhost:4317 for the Instrumentation object is only applicable to a Collector instance that is deployed as a sidecar in your application pod. If you are using an incorrect endpoint, correct it by editing the Instrumentation object and redeploying your application. Inspect your application logs for error messages that might indicate that the instrumentation is malfunctioning: $ oc logs <application_pod> -n <workload_project> If the application logs contain error messages that indicate that the instrumentation might be malfunctioning, install the OpenTelemetry SDK and libraries locally. Then run your application locally and troubleshoot for issues between the instrumentation libraries and your application without OpenShift Container Platform. Use the Debug Exporter to verify that the telemetry data is reaching the destination OpenTelemetry Collector instance. For more information, see "Debug Exporter".
"oc adm must-gather --image=ghcr.io/open-telemetry/opentelemetry-operator/must-gather -- /usr/bin/must-gather --operator-namespace <operator_namespace> 1",
"config: service: telemetry: logs: level: debug 1",
"config: service: telemetry: metrics: address: \":8888\" 1",
"oc port-forward <collector_pod>",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true",
"config: exporters: debug: verbosity: detailed service: pipelines: traces: exporters: [debug] metrics: exporters: [debug] logs: exporters: [debug]",
"oc get instrumentation -n <workload_project> 1",
"oc get events -n <workload_project> 1",
"... Created container opentelemetry-auto-instrumentation ... Started container opentelemetry-auto-instrumentation",
"oc logs -l app.kubernetes.io/name=opentelemetry-operator --container manager -n openshift-opentelemetry-operator --follow",
"instrumentation.opentelemetry.io/inject-python=\"true\"",
"oc get pods -n <workload_project> -o jsonpath='{range .items[?(@.metadata.annotations[\"instrumentation.opentelemetry.io/inject-python\"]==\"true\")]}{.metadata.name}{\"\\n\"}{end}'",
"instrumentation.opentelemetry.io/inject-nodejs: \"<instrumentation_object>\"",
"instrumentation.opentelemetry.io/inject-nodejs: \"<other_namespace>/<instrumentation_object>\"",
"oc get instrumentation <instrumentation_name> -n <workload_project> -o jsonpath='{.spec.endpoint}'",
"oc logs <application_pod> -n <workload_project>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/red_hat_build_of_opentelemetry/otel-troubleshoot |
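As a quick way to combine the metrics steps above, the following sketch forwards the Collector's internal telemetry port and filters the span-related counters; the pod name, the namespace, and the default :8888 telemetry address are assumptions based on the examples in this chapter.

# Forward the internal telemetry port of the Collector pod (placeholder names).
oc port-forward pod/<collector_pod> 8888:8888 -n <collector_namespace> &

# Inspect the span-related internal metrics to spot refused or failed data.
curl -s http://localhost:8888/metrics | \
  grep -E 'otelcol_(receiver_(accepted|refused)_spans|exporter_(sent|enqueue_failed)_spans)'

A growing otelcol_receiver_refused_spans or otelcol_exporter_enqueue_failed_spans value usually points at the receiver or exporter configuration as the first place to investigate.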
Chapter 1. Understanding API tiers | Chapter 1. Understanding API tiers Important This guidance does not cover layered OpenShift Container Platform offerings. API tiers for bare-metal configurations also apply to virtualized configurations except for any feature that directly interacts with hardware. Those features directly related to hardware have no application operating environment (AOE) compatibility level beyond that which is provided by the hardware vendor. For example, applications that rely on Graphics Processing Units (GPU) features are subject to the AOE compatibility provided by the GPU vendor driver. API tiers in a cloud environment for cloud specific integration points have no API or AOE compatibility level beyond that which is provided by the hosting cloud vendor. For example, APIs that exercise dynamic management of compute, ingress, or storage are dependent upon the underlying API capabilities exposed by the cloud platform. Where a cloud vendor modifies a prerequisite API, Red Hat will provide commercially reasonable efforts to maintain support for the API with the capability presently offered by the cloud infrastructure vendor. Red Hat requests that application developers validate that any behavior they depend on is explicitly defined in the formal API documentation to prevent introducing dependencies on unspecified implementation-specific behavior or dependencies on bugs in a particular implementation of an API. For example, new releases of an ingress router may not be compatible with older releases if an application uses an undocumented API or relies on undefined behavior. 1.1. API tiers All commercially supported APIs, components, and features are associated under one of the following support levels: API tier 1 APIs and application operating environments (AOEs) are stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. API tier 2 APIs and AOEs are stable within a major release for a minimum of 9 months or 3 minor releases from the announcement of deprecation, whichever is longer. API tier 3 This level applies to languages, tools, applications, and optional Operators included with OpenShift Container Platform through Operator Hub. Each component will specify a lifetime during which the API and AOE will be supported. Newer versions of language runtime specific components will attempt to be as API and AOE compatible from minor version to minor version as possible. Minor version to minor version compatibility is not guaranteed, however. Components and developer tools that receive continuous updates through the Operator Hub, referred to as Operators and operands, should be considered API tier 3. Developers should use caution and understand how these components may change with each minor release. Users are encouraged to consult the compatibility guidelines documented by the component. API tier 4 No compatibility is provided. API and AOE can change at any point. These capabilities should not be used by applications needing long-term support. It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for use by actors external to the Operator and are intended to be hidden. 
If any CRD is not meant for use by actors external to the Operator, the operators.operatorframework.io/internal-objects annotation in the Operators ClusterServiceVersion (CSV) should be specified to signal that the corresponding resource is internal use only and the CRD may be explicitly labeled as tier 4. 1.2. Mapping API tiers to API groups For each API tier defined by Red Hat, we provide a mapping table for specific API groups where the upstream communities are committed to maintain forward compatibility. Any API group that does not specify an explicit compatibility level and is not specifically discussed below is assigned API tier 3 by default except for v1alpha1 APIs which are assigned tier 4 by default. 1.2.1. Support for Kubernetes API groups API groups that end with the suffix *.k8s.io or have the form version.<name> with no suffix are governed by the Kubernetes deprecation policy and follow a general mapping between API version exposed and corresponding support tier unless otherwise specified. API version example API tier v1 Tier 1 v1beta1 Tier 2 v1alpha1 Tier 4 1.2.2. Support for OpenShift API groups API groups that end with the suffix *.openshift.io are governed by the OpenShift Container Platform deprecation policy and follow a general mapping between API version exposed and corresponding compatibility level unless otherwise specified. API version example API tier apps.openshift.io/v1 Tier 1 authorization.openshift.io/v1 Tier 1, some tier 1 deprecated build.openshift.io/v1 Tier 1, some tier 1 deprecated config.openshift.io/v1 Tier 1 image.openshift.io/v1 Tier 1 network.openshift.io/v1 Tier 1 network.operator.openshift.io/v1 Tier 1 oauth.openshift.io/v1 Tier 1 imagecontentsourcepolicy.operator.openshift.io/v1alpha1 Tier 1 project.openshift.io/v1 Tier 1 quota.openshift.io/v1 Tier 1 route.openshift.io/v1 Tier 1 quota.openshift.io/v1 Tier 1 security.openshift.io/v1 Tier 1 except for RangeAllocation (tier 4) and *Reviews (tier 2) template.openshift.io/v1 Tier 1 console.openshift.io/v1 Tier 2 1.2.3. Support for Monitoring API groups API groups that end with the suffix monitoring.coreos.com have the following mapping: API version example API tier v1 Tier 1 v1alpha1 Tier 1 v1beta1 Tier 1 1.2.4. Support for Operator Lifecycle Manager API groups Operator Lifecycle Manager (OLM) provides APIs that include API groups with the suffix operators.coreos.com . These APIs have the following mapping: API version example API tier v2 Tier 1 v1 Tier 1 v1alpha1 Tier 1 1.3. API deprecation policy OpenShift Container Platform is composed of many components sourced from many upstream communities. It is anticipated that the set of components, the associated API interfaces, and correlated features will evolve over time and might require formal deprecation in order to remove the capability. 1.3.1. Deprecating parts of the API OpenShift Container Platform is a distributed system where multiple components interact with a shared state managed by the cluster control plane through a set of structured APIs. Per Kubernetes conventions, each API presented by OpenShift Container Platform is associated with a group identifier and each API group is independently versioned. Each API group is managed in a distinct upstream community including Kubernetes, Metal3, Multus, Operator Framework, Open Cluster Management, OpenShift itself, and more. 
While each upstream community might define their own unique deprecation policy for a given API group and version, Red Hat normalizes the community specific policy to one of the compatibility levels defined prior based on our integration in and awareness of each upstream community to simplify end-user consumption and support. The deprecation policy and schedule for APIs vary by compatibility level. The deprecation policy covers all elements of the API including: REST resources, also known as API objects Fields of REST resources Annotations on REST resources, excluding version-specific qualifiers Enumerated or constant values Other than the most recent API version in each group, older API versions must be supported after their announced deprecation for a duration of no less than: API tier Duration Tier 1 Stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. Tier 2 9 months or 3 releases from the announcement of deprecation, whichever is longer. Tier 3 See the component-specific schedule. Tier 4 None. No compatibility is guaranteed. The following rules apply to all tier 1 APIs: API elements can only be removed by incrementing the version of the group. API objects must be able to round-trip between API versions without information loss, with the exception of whole REST resources that do not exist in some versions. In cases where equivalent fields do not exist between versions, data will be preserved in the form of annotations during conversion. API versions in a given group can not deprecate until a new API version at least as stable is released, except in cases where the entire API object is being removed. 1.3.2. Deprecating CLI elements Client-facing CLI commands are not versioned in the same way as the API, but are user-facing component systems. The two major ways a user interacts with a CLI are through a command or flag, which is referred to in this context as CLI elements. All CLI elements default to API tier 1 unless otherwise noted or the CLI depends on a lower tier API. Element API tier Generally available (GA) Flags and commands Tier 1 Technology Preview Flags and commands Tier 3 Developer Preview Flags and commands Tier 4 1.3.3. Deprecating an entire component The duration and schedule for deprecating an entire component maps directly to the duration associated with the highest API tier of an API exposed by that component. For example, a component that surfaced APIs with tier 1 and 2 could not be removed until the tier 1 deprecation schedule was met. API tier Duration Tier 1 Stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. Tier 2 9 months or 3 releases from the announcement of deprecation, whichever is longer. Tier 3 See the component-specific schedule. Tier 4 None. No compatibility is guaranteed. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/api_overview/understanding-api-support-tiers |
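For illustration only, the operators.operatorframework.io/internal-objects annotation mentioned earlier in this chapter might appear in a ClusterServiceVersion as in the following minimal sketch; the Operator name and the CRD plural listed in the annotation are hypothetical.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.0.0            # hypothetical Operator release name
  annotations:
    # JSON list of CRDs that are internal to the Operator and therefore treated as tier 4
    operators.operatorframework.io/internal-objects: '["internalconfigs.example.com"]'
spec:
  displayName: Example Operator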
1.2. What Are Software Collections? | 1.2. What Are Software Collections? With Software Collections, you can build and concurrently install multiple versions of the same software components on your system. Software Collections have no impact on the system versions of the packages installed by any of the conventional RPM package management utilities. Software Collections: Do not overwrite system files Software Collections are distributed as a set of several components, which provide their full functionality without overwriting system files. Are designed to avoid conflicts with system files Software Collections make use of a special file system hierarchy to avoid possible conflicts between a single Software Collection and the base system installation. Require no changes to the RPM package manager Software Collections require no changes to the RPM package manager present on the host system. Need only minor changes to the spec file To convert a conventional package to a single Software Collection, you only need to make minor changes to the package spec file. Allow you to build a conventional package and a Software Collection package with a single spec file With a single spec file, you can build both the conventional package and the Software Collection package. Uniquely name all included packages With Software Collection's namespace, all packages included in the Software Collection are uniquely named. Do not conflict with updated packages Software Collection's namespace ensures that updating packages on your system causes no conflicts. Can depend on other Software Collections Because one Software Collection can depend on another, you can define multiple levels of dependencies. | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-what_are_software_collections |
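As a rough sketch of the "minor changes to the spec file" mentioned above, the following fragment uses the usual Software Collections conditional macros; the package name mytool and the collection name are hypothetical, and the fragment is not a complete spec file.
%{?scl:%scl_package mytool}
%{!?scl:%global pkg_name %{name}}

Name:           %{?scl_prefix}mytool
Version:        1.0
Release:        1%{?dist}
Summary:        Example tool built conventionally or as a Software Collection
License:        MIT

# Building with rpmbuild -ba mytool.spec --define 'scl example1' produces the
# Software Collection package; omitting --define builds the conventional
# package from the same spec file.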
probe::nfs.fop.release | probe::nfs.fop.release Name probe::nfs.fop.release - NFS client release page operation Synopsis nfs.fop.release Values ino inode number dev device identifier mode file mode | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-fop-release |
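A minimal SystemTap sketch that uses this probe point follows; the script name and output format are illustrative only, and the host must have the matching kernel debuginfo packages installed for stap to run it.
# nfs_fop_release.stp -- print each NFS client release page operation
probe nfs.fop.release {
  printf("nfs.fop.release: dev=%d ino=%d mode=%d\n", dev, ino, mode)
}
Run it with stap nfs_fop_release.stp and trigger NFS activity to see output.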
Chapter 143. KafkaConnectorSpec schema reference | Chapter 143. KafkaConnectorSpec schema reference Used in: KafkaConnector Property Property type Description class string The Class for the Kafka Connector. tasksMax integer The maximum number of tasks for the Kafka Connector. autoRestart AutoRestart Automatic restart of connector and tasks configuration. config map The Kafka Connector configuration. The following properties cannot be set: name, connector.class, tasks.max. pause boolean The pause property has been deprecated. Deprecated in Streams for Apache Kafka 2.6, use state instead. Whether the connector should be paused. Defaults to false. state string (one of [running, paused, stopped]) The state the connector should be in. Defaults to running. listOffsets ListOffsets Configuration for listing offsets. alterOffsets AlterOffsets Configuration for altering offsets. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaConnectorSpec-reference |
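A hedged example of a KafkaConnector resource built from the fields above; the apiVersion, cluster label, connector class, and topic are assumptions for illustration and may differ in a given Streams for Apache Kafka installation.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster    # Kafka Connect cluster that runs the connector
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 2
  state: running
  config:
    file: /opt/kafka/LICENSE
    topic: my-topic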
Installing Red Hat Developer Hub on Microsoft Azure Kubernetes Service | Installing Red Hat Developer Hub on Microsoft Azure Kubernetes Service Red Hat Developer Hub 1.2 Red Hat Customer Content Services | [
"securityContext: fsGroup: 300",
"db-statefulset.yaml: | spec.template.spec deployment.yaml: | spec.template.spec",
"apply -f rhdh-operator-<VERSION>.yaml",
"-n <your_namespace> create secret docker-registry rhdh-pull-secret --docker-server=registry.redhat.io --docker-username=<redhat_user_name> --docker-password=<redhat_password> --docker-email=<email>",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: rhdh-ingress namespace: <your_namespace> spec: ingressClassName: webapprouting.kubernetes.azure.com rules: - http: paths: - path: / pathType: Prefix backend: service: name: backstage-<your-CR-name> port: name: http-backend",
"-n <your_namespace> apply -f rhdh-ingress.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: app-config-rhdh data: \"app-config-rhdh.yaml\": | app: title: Red Hat Developer Hub baseUrl: https://<app_address> backend: auth: keys: - secret: \"USD{BACKEND_SECRET}\" baseUrl: https://<app_address> cors: origin: https://<app_address>",
"apiVersion: v1 kind: Secret metadata: name: secrets-rhdh stringData: BACKEND_SECRET: \"xxx\"",
"apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: <your-rhdh-cr> spec: application: imagePullSecrets: - rhdh-pull-secret appConfig: configMaps: - name: \"app-config-rhdh\" extraEnvs: secrets: - name: \"secrets-rhdh\"",
"-n <your_namespace> apply -f rhdh.yaml",
"-n <your_namespace> delete -f rhdh.yaml",
"az aks approuting enable --resource-group <your_ResourceGroup> --name <your_ClusterName>",
"az extension add --upgrade -n aks-preview --allow-preview true",
"get svc nginx --namespace app-routing-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'",
"create namespace <your_namespace>",
"az login [--tenant=<optional_directory_name>]",
"az group create --name <resource_group_name> --location <location>",
"az account list-locations -o table",
"az aks create --resource-group <resource_group_name> --name <cluster_name> --enable-managed-identity --generate-ssh-keys",
"az aks get-credentials --resource-group <resource_group_name> --name <cluster_name>",
"helm repo add openshift-helm-charts https://charts.openshift.io/",
"DEPLOYMENT_NAME=<redhat-developer-hub> NAMESPACE=<rhdh> create namespace USD{NAMESPACE} config set-context --current --namespace=USD{NAMESPACE}",
"-n USDNAMESPACE create secret docker-registry rhdh-pull-secret --docker-server=registry.redhat.io --docker-username=<redhat_user_name> --docker-password=<redhat_password> --docker-email=<email>",
"global: host: <app_address> route: enabled: false upstream: ingress: enabled: true className: webapprouting.kubernetes.azure.com host: backstage: image: pullSecrets: - rhdh-pull-secret podSecurityContext: fsGroup: 3000 postgresql: image: pullSecrets: - rhdh-pull-secret primary: podSecurityContext: enabled: true fsGroup: 3000 volumePermissions: enabled: true",
"helm -n USDNAMESPACE install -f values.yaml USDDEPLOYMENT_NAME openshift-helm-charts/redhat-developer-hub --version 1.2.6",
"get deploy USDDEPLOYMENT_NAME -n USDNAMESPACE",
"PASSWORD=USD(kubectl get secret redhat-developer-hub-postgresql -o jsonpath=\"{.data.password}\" | base64 -d) CLUSTER_ROUTER_BASE=USD(kubectl get route console -n openshift-console -o=jsonpath='{.spec.host}' | sed 's/^[^.]*\\.//') helm upgrade USDDEPLOYMENT_NAME -i \"https://github.com/openshift-helm-charts/charts/releases/download/redhat-redhat-developer-hub-1.2.6/redhat-developer-hub-1.2.6.tgz\" --set global.clusterRouterBase=\"USDCLUSTER_ROUTER_BASE\" --set global.postgresql.auth.password=\"USDPASSWORD\"",
"echo \"https://USDDEPLOYMENT_NAME-USDNAMESPACE.USDCLUSTER_ROUTER_BASE\"",
"helm upgrade USDDEPLOYMENT_NAME -i https://github.com/openshift-helm-charts/charts/releases/download/redhat-redhat-developer-hub-1.2.6/redhat-developer-hub-1.2.6.tgz",
"helm -n USDNAMESPACE delete USDDEPLOYMENT_NAME"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html-single/installing_red_hat_developer_hub_on_microsoft_azure_kubernetes_service/index.xml |
Chapter 1. About CI/CD | Chapter 1. About CI/CD OpenShift Container Platform is an enterprise-ready Kubernetes platform for developers, which enables organizations to automate the application delivery process through DevOps practices, such as continuous integration (CI) and continuous delivery (CD). To meet your organizational needs, the OpenShift Container Platform provides the following CI/CD solutions: OpenShift Builds OpenShift Pipelines OpenShift GitOps Jenkins 1.1. OpenShift Builds OpenShift Builds provides you the following options to configure and run a build: Builds using Shipwright is an extensible build framework based on the Shipwright project. You can use it to build container images on an OpenShift Container Platform cluster. You can build container images from source code and Dockerfile by using image build tools, such as Source-to-Image (S2I) and Buildah. For more information, see builds for Red Hat OpenShift . Builds using BuildConfig objects is a declarative build process to create cloud-native apps. You can define the build process in a YAML file that you use to create a BuildConfig object. This definition includes attributes such as build triggers, input parameters, and source code. When deployed, the BuildConfig object builds a runnable image and pushes the image to a container image registry. With the BuildConfig object, you can create a Docker, Source-to-image (S2I), or custom build. For more information, see Understanding image builds . 1.2. OpenShift Pipelines OpenShift Pipelines provides a Kubernetes-native CI/CD framework to design and run each step of the CI/CD pipeline in its own container. It can scale independently to meet the on-demand pipelines with predictable outcomes. For more information, see Red Hat OpenShift Pipelines . 1.3. OpenShift GitOps OpenShift GitOps is an Operator that uses Argo CD as the declarative GitOps engine. It enables GitOps workflows across multicluster OpenShift and Kubernetes infrastructure. Using OpenShift GitOps, administrators can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles. For more information, see Red Hat OpenShift GitOps . 1.4. Jenkins Jenkins automates the process of building, testing, and deploying applications and projects. OpenShift Developer Tools provides a Jenkins image that integrates directly with the OpenShift Container Platform. Jenkins can be deployed on OpenShift by using the Samples Operator templates or certified Helm chart. For more information, see Configuring Jenkins images . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/cicd_overview/ci-cd-overview |
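To make the BuildConfig object described in section 1.1 above concrete, a minimal Source-to-Image BuildConfig could look like the following sketch; the application name, Git repository, and builder image stream are hypothetical.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: sample-app
spec:
  source:
    git:
      uri: https://github.com/example/sample-app.git   # hypothetical repository
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest          # assumed builder image stream tag
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: sample-app:latest
  triggers:
  - type: ConfigChange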
Chapter 3. ConsoleExternalLogLink [console.openshift.io/v1] | Chapter 3. ConsoleExternalLogLink [console.openshift.io/v1] Description ConsoleExternalLogLink is an extension for customizing OpenShift web console log links. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleExternalLogLinkSpec is the desired log link configuration. The log link will appear on the logs tab of the pod details page. 3.1.1. .spec Description ConsoleExternalLogLinkSpec is the desired log link configuration. The log link will appear on the logs tab of the pod details page. Type object Required hrefTemplate text Property Type Description hrefTemplate string hrefTemplate is an absolute secure URL (must use https) for the log link including variables to be replaced. Variables are specified in the URL with the format USD{variableName}, for instance, USD{containerName} and will be replaced with the corresponding values from the resource. Resource is a pod. Supported variables are: - USD{resourceName} - name of the resource which containes the logs - USD{resourceUID} - UID of the resource which contains the logs - e.g. 11111111-2222-3333-4444-555555555555 - USD{containerName} - name of the resource's container that contains the logs - USD{resourceNamespace} - namespace of the resource that contains the logs - USD{resourceNamespaceUID} - namespace UID of the resource that contains the logs - USD{podLabels} - JSON representation of labels matching the pod with the logs - e.g. {"key1":"value1","key2":"value2"} e.g., https://example.com/logs?resourceName=USD{resourceName}&containerName=USD{containerName}&resourceNamespace=USD{resourceNamespace}&podLabels=USD{podLabels} namespaceFilter string namespaceFilter is a regular expression used to restrict a log link to a matching set of namespaces (e.g., ^openshift- ). The string is converted into a regular expression using the JavaScript RegExp constructor. If not specified, links will be displayed for all the namespaces. text string text is the display text for the link 3.2. 
API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleexternalloglinks DELETE : delete collection of ConsoleExternalLogLink GET : list objects of kind ConsoleExternalLogLink POST : create a ConsoleExternalLogLink /apis/console.openshift.io/v1/consoleexternalloglinks/{name} DELETE : delete a ConsoleExternalLogLink GET : read the specified ConsoleExternalLogLink PATCH : partially update the specified ConsoleExternalLogLink PUT : replace the specified ConsoleExternalLogLink /apis/console.openshift.io/v1/consoleexternalloglinks/{name}/status GET : read status of the specified ConsoleExternalLogLink PATCH : partially update status of the specified ConsoleExternalLogLink PUT : replace status of the specified ConsoleExternalLogLink 3.2.1. /apis/console.openshift.io/v1/consoleexternalloglinks HTTP method DELETE Description delete collection of ConsoleExternalLogLink Table 3.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleExternalLogLink Table 3.2. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLinkList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleExternalLogLink Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.4. Body parameters Parameter Type Description body ConsoleExternalLogLink schema Table 3.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLink schema 201 - Created ConsoleExternalLogLink schema 202 - Accepted ConsoleExternalLogLink schema 401 - Unauthorized Empty 3.2.2. /apis/console.openshift.io/v1/consoleexternalloglinks/{name} Table 3.6. Global path parameters Parameter Type Description name string name of the ConsoleExternalLogLink HTTP method DELETE Description delete a ConsoleExternalLogLink Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.8. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleExternalLogLink Table 3.9. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleExternalLogLink Table 3.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleExternalLogLink Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body ConsoleExternalLogLink schema Table 3.14. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLink schema 201 - Created ConsoleExternalLogLink schema 401 - Unauthorized Empty 3.2.3. /apis/console.openshift.io/v1/consoleexternalloglinks/{name}/status Table 3.15. 
Global path parameters Parameter Type Description name string name of the ConsoleExternalLogLink HTTP method GET Description read status of the specified ConsoleExternalLogLink Table 3.16. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleExternalLogLink Table 3.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLink schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleExternalLogLink Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body ConsoleExternalLogLink schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleExternalLogLink schema 201 - Created ConsoleExternalLogLink schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/console_apis/consoleexternalloglink-console-openshift-io-v1 |
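A short sketch of a ConsoleExternalLogLink manifest assembled from the spec fields documented above; the name, display text, and target URL are placeholders.
apiVersion: console.openshift.io/v1
kind: ConsoleExternalLogLink
metadata:
  name: example-log-link
spec:
  text: Example Logs
  # hrefTemplate must be an https URL; the variables are substituted per pod
  hrefTemplate: https://logging.example.com/logs?resourceName=${resourceName}&containerName=${containerName}&resourceNamespace=${resourceNamespace}
  namespaceFilter: ^openshift-      # optional; restricts the link to matching namespaces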
Chapter 16. Creating assets | Chapter 16. Creating assets You can create business processes, rules, DRL files, and other assets in your Business Central projects. Note Migrating business processes is an irreversible process. Procedure In Business Central, go to Menu → Design → Projects and click the project name. For example, Evaluation . Click Add Asset and select the asset type. In the Create new asset_type window, add the required information and click Ok . Figure 16.1. Define Asset Note If you have not created a project, you can either add a project, use a sample project, or import an existing project. For more information, see Managing projects in Business Central . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/creating_assets_proc_managing-assets
Chapter 1. About Red Hat OpenShift GitOps | Chapter 1. About Red Hat OpenShift GitOps Red Hat OpenShift GitOps is an Operator that uses Argo CD as the declarative GitOps engine. It enables GitOps workflows across multicluster OpenShift and Kubernetes infrastructure. Using Red Hat OpenShift GitOps, administrators can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles. Red Hat OpenShift GitOps is based on the open source project Argo CD and provides a similar set of features to what the upstream offers, with additional automation, integration into Red Hat OpenShift Container Platform and the benefits of Red Hat's enterprise support, quality assurance and focus on enterprise security. Note Because Red Hat OpenShift GitOps releases on a different cadence from OpenShift Dedicated, the Red Hat OpenShift GitOps documentation is now available as a separate documentation set at Red Hat OpenShift GitOps . | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/gitops/about-redhat-openshift-gitops
22.5. Ethernet Parameters | 22.5. Ethernet Parameters Important Most modern Ethernet-based network interface cards (NICs), do not require module parameters to alter settings. Instead, they can be configured using ethtool or mii-tool . Only after these tools fail to work should module parameters be adjusted. Module paramaters can be viewed using the modinfo command. Note For information about using these tools, consult the man pages for ethtool , mii-tool , and modinfo . Table 22.2. Ethernet Module Parameters Hardware Module Parameters 3Com EtherLink PCI III/XL Vortex (3c590, 3c592, 3c595, 3c597) Boomerang (3c900, 3c905, 3c595) 3c59x.ko debug - 3c59x debug level (0-6) options - 3c59x: Bits 0-3: media type, bit 4: bus mastering, bit 9: full duplex global_options - 3c59x: same as options, but applies to all NICs if options is unset full_duplex - 3c59x full duplex setting(s) (1) global_full_duplex - 3c59x: same as full_duplex, but applies to all NICs if full_duplex is unset hw_checksums - 3c59x Hardware checksum checking by adapter(s) (0-1) flow_ctrl - 3c59x 802.3x flow control usage (PAUSE only) (0-1) enable_wol - 3c59x: Turn on Wake-on-LAN for adapter(s) (0-1) global_enable_wol - 3c59x: same as enable_wol, but applies to all NICs if enable_wol is unset rx_copybreak - 3c59x copy breakpoint for copy-only-tiny-frames max_interrupt_work - 3c59x maximum events handled per interrupt compaq_ioaddr - 3c59x PCI I/O base address (Compaq BIOS problem workaround) compaq_irq - 3c59x PCI IRQ number (Compaq BIOS problem workaround) compaq_device_id - 3c59x PCI device ID (Compaq BIOS problem workaround) watchdog - 3c59x transmit timeout in milliseconds global_use_mmio - 3c59x: same as use_mmio, but applies to all NICs if options is unset use_mmio - 3c59x: use memory-mapped PCI I/O resource (0-1) RTL8139, SMC EZ Card Fast Ethernet, RealTek cards using RTL8129, or RTL8139 Fast Ethernet chipsets 8139too.ko Broadcom 4400 10/100 PCI ethernet driver b44.ko b44_debug - B44 bitmapped debugging message enable value Broadcom NetXtreme II BCM5706/5708 Driver bnx2.ko disable_msi - Disable Message Signaled Interrupt (MSI) Intel Ether Express/100 driver e100.ko debug - Debug level (0=none,...,16=all) eeprom_bad_csum_allow - Allow bad eeprom checksums Intel EtherExpress/1000 Gigabit e1000.ko TxDescriptors - Number of transmit descriptors RxDescriptors - Number of receive descriptors Speed - Speed setting Duplex - Duplex setting AutoNeg - Advertised auto-negotiation setting FlowControl - Flow Control setting XsumRX - Disable or enable Receive Checksum offload TxIntDelay - Transmit Interrupt Delay TxAbsIntDelay - Transmit Absolute Interrupt Delay RxIntDelay - Receive Interrupt Delay RxAbsIntDelay - Receive Absolute Interrupt Delay InterruptThrottleRate - Interrupt Throttling Rate SmartPowerDownEnable - Enable PHY smart power down KumeranLockLoss - Enable Kumeran lock loss workaround Myricom 10G driver (10GbE) myri10ge.ko myri10ge_fw_name - Firmware image name myri10ge_ecrc_enable - Enable Extended CRC on PCI-E myri10ge_max_intr_slots - Interrupt queue slots myri10ge_small_bytes - Threshold of small packets myri10ge_msi - Enable Message Signalled Interrupts myri10ge_intr_coal_delay - Interrupt coalescing delay myri10ge_flow_control - Pause parameter myri10ge_deassert_wait - Wait when deasserting legacy interrupts myri10ge_force_firmware - Force firmware to assume aligned completions myri10ge_skb_cross_4k - Can a small skb cross a 4KB boundary? 
myri10ge_initial_mtu - Initial MTU myri10ge_napi_weight - Set NAPI weight myri10ge_watchdog_timeout - Set watchdog timeout myri10ge_max_irq_loops - Set stuck legacy IRQ detection threshold NatSemi DP83815 Fast Ethernet natsemi.ko mtu - DP8381x MTU (all boards) debug - DP8381x default debug level rx_copybreak - DP8381x copy breakpoint for copy-only-tiny-frames options - DP8381x: Bits 0-3: media type, bit 17: full duplex full_duplex - DP8381x full duplex setting(s) (1) AMD PCnet32 and AMD PCnetPCI pcnet32.ko PCnet32 and PCnetPCI pcnet32.ko debug - pcnet32 debug level max_interrupt_work - pcnet32 maximum events handled per interrupt rx_copybreak - pcnet32 copy breakpoint for copy-only-tiny-frames tx_start_pt - pcnet32 transmit start point (0-3) pcnet32vlb - pcnet32 Vesa local bus (VLB) support (0/1) options - pcnet32 initial option setting(s) (0-15) full_duplex - pcnet32 full duplex setting(s) (1) homepna - pcnet32 mode for 79C978 cards (1 for HomePNA, 0 for Ethernet, default Ethernet RealTek RTL-8169 Gigabit Ethernet driver r8169.ko media - force phy operation. Deprecated by ethtool (8). rx_copybreak - Copy breakpoint for copy-only-tiny-frames use_dac - Enable PCI DAC. Unsafe on 32 bit PCI slot. debug - Debug verbosity level (0=none, ..., 16=all) Neterion Xframe 10GbE Server Adapter s2io.ko SIS 900/701G PCI Fast Ethernet sis900.ko multicast_filter_limit - SiS 900/7016 maximum number of filtered multicast addresses max_interrupt_work - SiS 900/7016 maximum events handled per interrupt sis900_debug - SiS 900/7016 bitmapped debugging message level Adaptec Starfire Ethernet driver starfire.ko max_interrupt_work - Maximum events handled per interrupt mtu - MTU (all boards) debug - Debug level (0-6) rx_copybreak - Copy breakpoint for copy-only-tiny-frames intr_latency - Maximum interrupt latency, in microseconds small_frames - Maximum size of receive frames that bypass interrupt latency (0,64,128,256,512) options - Deprecated: Bits 0-3: media type, bit 17: full duplex full_duplex - Deprecated: Forced full-duplex setting (0/1) enable_hw_cksum - Enable/disable hardware cksum support (0/1) Broadcom Tigon3 tg3.ko tg3_debug - Tigon3 bitmapped debugging message enable value ThunderLAN PCI tlan.ko aui - ThunderLAN use AUI port(s) (0-1) duplex - ThunderLAN duplex setting(s) (0-default, 1-half, 2-full) speed - ThunderLAN port speen setting(s) (0,10,100) debug - ThunderLAN debug mask bbuf - ThunderLAN use big buffer (0-1) Digital 21x4x Tulip PCI Ethernet cards SMC EtherPower 10 PCI(8432T/8432BT) SMC EtherPower 10/100 PCI(9332DST) DEC EtherWorks 100/10 PCI(DE500-XA) DEC EtherWorks 10 PCI(DE450) DEC QSILVER's, Znyx 312 etherarray Allied Telesis LA100PCI-T Danpex EN-9400, Cogent EM110 tulip.ko io io_port VIA Rhine PCI Fast Ethernet cards with either the VIA VT86c100A Rhine-II PCI or 3043 Rhine-I D-Link DFE-930-TX PCI 10/100 via-rhine.ko max_interrupt_work - VIA Rhine maximum events handled per interrupt debug - VIA Rhine debug level (0-7) rx_copybreak - VIA Rhine copy breakpoint for copy-only-tiny-frames avoid_D3 - Avoid power state D3 (work-around for broken BIOSes) 22.5.1. Using Multiple Ethernet Cards It is possible to use multiple Ethernet cards on a single machine. For each card there must be an alias and, possibly, options lines for each card in /etc/modprobe.conf . For additional information about using multiple Ethernet cards, refer to the Linux Ethernet-HOWTO online at http://www.redhat.com/mirrors/LDP/HOWTO/Ethernet-HOWTO.html . 
| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-modules-ethernet |
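A small /etc/modprobe.conf sketch for the multiple-card case described in section 22.5.1; the interface-to-driver mapping and the option values are examples only and depend on the installed hardware.
alias eth0 e1000
alias eth1 e1000
alias eth2 tg3
# per-adapter options for the e1000 driver (values are illustrative)
options e1000 Speed=100,100 Duplex=2,2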
Appendix H. Ceph debugging and logging configuration options | Appendix H. Ceph debugging and logging configuration options Logging and debugging settings are not required in a Ceph configuration file, but you can override default settings as needed. The options take a single item that is assumed to be the default for all daemons regardless of channel. For example, specifying "info" is interpreted as "default=info". However, options can also take key/value pairs. For example, "default=daemon audit=local0" is interpreted as "default all to 'daemon', override 'audit' with 'local0'." log_file Description The location of the logging file for the cluster. Type String Required No Default /var/log/ceph/USDcluster-USDname.log mon_cluster_log_file Description The location of the monitor cluster's log file. Type String Required No Default /var/log/ceph/USDcluster.log log_max_new Description The maximum number of new log files. Type Integer Required No Default 1000 log_max_recent Description The maximum number of recent events to include in a log file. Type Integer Required No Default 10000 log_flush_on_exit Description Determines if Ceph flushes the log files after exit. Type Boolean Required No Default true mon_cluster_log_file_level Description The level of file logging for the monitor cluster. Valid settings include "debug", "info", "sec", "warn", and "error". Type String Default "info" log_to_stderr Description Determines if logging messages appear in stderr . Type Boolean Required No Default true err_to_stderr Description Determines if error messages appear in stderr . Type Boolean Required No Default true log_to_syslog Description Determines if logging messages appear in syslog . Type Boolean Required No Default false err_to_syslog Description Determines if error messages appear in syslog . Type Boolean Required No Default false clog_to_syslog Description Determines if clog messages will be sent to syslog . Type Boolean Required No Default false mon_cluster_log_to_syslog Description Determines if the cluster log will be output to syslog . Type Boolean Required No Default false mon_cluster_log_to_syslog_level Description The level of syslog logging for the monitor cluster. Valid settings include "debug", "info", "sec", "warn", and "error". Type String Default "info" mon_cluster_log_to_syslog_facility Description The facility generating the syslog output. This is usually set to "daemon" for the Ceph daemons. Type String Default "daemon" clog_to_monitors Description Determines if clog messages will be sent to monitors. Type Boolean Required No Default true mon_cluster_log_to_graylog Description Determines if the cluster will output log messages to graylog. Type String Default "false" mon_cluster_log_to_graylog_host Description The IP address of the graylog host. If the graylog host is different from the monitor host, override this setting with the appropriate IP address. Type String Default "127.0.0.1" mon_cluster_log_to_graylog_port Description Graylog logs will be sent to this port. Ensure the port is open for receiving data. Type String Default "12201" osd_preserve_trimmed_log Description Preserves trimmed logs after trimming. Type Boolean Required No Default false osd_tmapput_sets_uses_tmap Description Uses tmap . For debug only. Type Boolean Required No Default false osd_min_pg_log_entries Description The minimum number of log entries for placement groups. Type 32-bit Unsigned Integer Required No Default 1000 osd_op_log_threshold Description How many op log messages to show up in one pass. 
Type Integer Required No Default 5 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/configuration_guide/ceph-debugging-and-logging-configuration-options_conf |
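A hedged ceph.conf fragment showing how a few of the options above might be overridden; the values are illustrative, not tuning recommendations.
[global]
        log_file = /var/log/ceph/$cluster-$name.log
        log_to_syslog = true
        err_to_syslog = true
        mon_cluster_log_file_level = warn

[osd]
        osd_min_pg_log_entries = 3000
        osd_op_log_threshold = 10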
Chapter 6. New Packages | Chapter 6. New Packages 6.1. RHEA-2015:1420 - new packages: clufter New clufter packages are now available for Red Hat Enterprise Linux 6. The clufter packages contain a tool for transforming and analyzing cluster configuration formats. Notably, clufter can be used to assist with migration from an older stack configuration to a newer one that leverages Pacemaker. The packages can be used either as a separate command-line tool or as a Python library. This enhancement update adds the clufter packages to Red Hat Enterprise Linux 6. (BZ# 1182358 ) All users who require clufter are advised to install these new packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/ch06 |
Viewing and managing system inventory with FedRAMP | Viewing and managing system inventory with FedRAMP Red Hat Insights 1-latest Using workspaces to organize system inventory and manage User Access to groups of systems Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/viewing_and_managing_system_inventory_with_fedramp/index |
25.7.2. Exporting Messages to a Database | 25.7.2. Exporting Messages to a Database Processing of log data can be faster and more convenient when performed in a database rather than with text files. Based on the type of DBMS used, choose from various output modules such as ommysql , ompgsql , omoracle , or ommongodb . As an alternative, use the generic omlibdbi output module that relies on the libdbi library. The omlibdbi module supports database systems Firebird/Interbase, MS SQL, Sybase, SQLite, Ingres, Oracle, mSQL, MySQL, and PostgreSQL. Example 25.16. Exporting Rsyslog Messages to a Database To store the rsyslog messages in a MySQL database, add the following into /etc/rsyslog.conf : First, the output module is loaded, then the communication port is specified. Additional information, such as name of the server and the database, and authentication data, is specified on the last line of the above example. | [
"USDModLoad ommysql USDActionOmmysqlServerPort 1234 *.* :ommysql:database-server,database-name,database-userid,database-password"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-exporting_messages_to_a_database |
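For comparison, the same MySQL output can be written in rsyslog's newer RainerScript syntax, which is not available in the legacy rsyslog version this guide targets; the server, database, and credential values are placeholders and assume the rsyslog-mysql package is installed.
module(load="ommysql")
*.* action(type="ommysql"
           server="database-server"
           db="database-name"
           uid="database-userid"
           pwd="database-password")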
Chapter 1. Release notes for the Red Hat OpenShift distributed tracing platform | Chapter 1. Release notes for the Red Hat OpenShift distributed tracing platform 1.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis You can use the Red Hat OpenShift distributed tracing platform (Tempo) in combination with the Red Hat build of OpenTelemetry . Note Only supported features are documented. Undocumented features are currently unsupported. If you need assistance with a feature, contact Red Hat's support. 1.2. Release notes for Red Hat OpenShift distributed tracing platform 3.5 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.2.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) 3.5 is provided through the Tempo Operator 0.15.3 . Note The Red Hat OpenShift distributed tracing platform (Tempo) 3.5 is based on the open source Grafana Tempo 2.7.1. 1.2.1.1. New features and enhancements This update introduces the following enhancements: With this update, you can configure the Tempo backend services to report the internal tracing data by using the OpenTelemetry Protocol (OTLP). With this update, the traces.span.metrics namespace becomes the default metrics namespace on which the Jaeger query retrieves the Prometheus metrics. The purpose of this change is to provide compatibility with the OpenTelemetry Collector version 0.109.0 and later where this namespace is the default. Customers who are still using an earlier OpenTelemetry Collector version can configure this namespace by adding the following field and value: spec.template.queryFrontend.jaegerQuery.monitorTab.redMetricsNamespace: "" . 1.2.1.2. Bug fixes This update introduces the following bug fix: Before this update, the Tempo Operator failed when the TempoStack custom resource had the spec.storage.tls.enabled field set to true and used an Amazon S3 object store with the Security Token Service (STS) authentication. With this update, such a TempoStack custom resource configuration does not cause the Tempo Operator to fail. 1.2.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is the last release of the Red Hat OpenShift distributed tracing platform (Jaeger) that Red Hat plans to support. In the Red Hat OpenShift distributed tracing platform 3.5, Jaeger and support for Elasticsearch remain deprecated. Warning Support for the Red Hat OpenShift distributed tracing platform (Jaeger) ends on November 3, 2025. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog on November 3, 2025. You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. 
For more information, see the following resources: Migrating (Red Hat build of OpenTelemetry documentation) Installing (Red Hat build of OpenTelemetry documentation) Installing (distributed tracing platform (Tempo) documentation) Jaeger Deprecation and Removal in OpenShift (Red Hat Knowledgebase solution) Note The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is based on the open source Jaeger release 1.65.0. The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is provided through the Red Hat OpenShift distributed tracing platform Operator 1.65.0 . Important Jaeger does not use FIPS validated cryptographic modules. 1.2.2.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. Additional resources Jaeger Deprecation and Removal in OpenShift (Red Hat Knowledgebase) Migrating (Red Hat build of OpenTelemetry documentation) Installing (Red Hat build of OpenTelemetry documentation) Installing (Distributed tracing platform (Tempo) documentation) 1.2.2.2. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.3. Release notes for Red Hat OpenShift distributed tracing platform 3.4 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.3.1. CVEs This release fixes the following CVEs: CVE-2024-21536 CVE-2024-43796 CVE-2024-43799 CVE-2024-43800 CVE-2024-45296 CVE-2024-45590 CVE-2024-45811 CVE-2024-45812 CVE-2024-47068 Cross-site Scripting (XSS) in serialize-javascript 1.3.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) 3.4 is provided through the Tempo Operator 0.14.1 . Note The Red Hat OpenShift distributed tracing platform (Tempo) 3.4 is based on the open source Grafana Tempo 2.6.1. 1.3.2.1. New features and enhancements This update introduces the following enhancements: The monitor tab in the Jaeger UI for TempoStack instances uses a new default metrics namespace: traces.span.metrics . Before this update, the Jaeger UI used an empty namespace. The new traces.span.metrics namespace default is also used by the OpenTelemetry Collector 0.113.0. You can set the empty value for the metrics namespace by using the following field in the TempoStack custom resouce: spec.template.queryFrontend.monitorTab.redMetricsNamespace: "" . Warning This is a breaking change. If you are using both the Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat build of OpenTelemetry, you must upgrade to the Red Hat build of OpenTelemetry 3.4 before upgrading to the Red Hat OpenShift distributed tracing platform (Tempo) 3.4. New and optional spec.timeout field in the TempoStack and TempoMonolithic custom resource definitions for configuring one timeout value for all components. The timeout value is set to 30 seconds, 30s , by default. Warning This is a breaking change. 1.3.2.2. Bug fixes This update introduces the following bug fixes: Before this update, the distributed tracing platform (Tempo) failed on the IBM Z ( s390x ) architecture. With this update, the distributed tracing platform (Tempo) is available for the IBM Z ( s390x ) architecture. 
( TRACING-3545 ) Before this update, the distributed tracing platform (Tempo) failed on clusters with non-private networks. With this update, you can deploy the distributed tracing platform (Tempo) on clusters with non-private networks. ( TRACING-4507 ) Before this update, the Jaeger UI might fail due to reaching a trace quantity limit, resulting in the 504 Gateway Timeout error in the tempo-query logs. After this update, the issue is resolved by introducing two optional fields in the tempostack or tempomonolithic custom resource: New spec.timeout field for configuring the timeout. New spec.template.queryFrontend.jaegerQuery.findTracesConcurrentRequests field for improving the query performance of the Jaeger UI. Tip One querier can handle up to 20 concurrent queries by default. Increasing the number of concurrent queries further is achieved by scaling up the querier instances. 1.3.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) 3.4 is provided through the Red Hat OpenShift distributed tracing platform Operator 1.62.0 . Note The Red Hat OpenShift distributed tracing platform (Jaeger) 3.4 is based on the open source Jaeger release 1.62.0. Important Jaeger does not use FIPS validated cryptographic modules. 1.3.3.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.4 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.3.3.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.4, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. For more information, see Migrating in the Red Hat build of OpenTelemetry documentation, Installing in the Red Hat build of OpenTelemetry documentation, and Installing in the distributed tracing platform (Tempo) documentation. Additional resources Jaeger Deprecation and Removal in OpenShift (Red Hat Knowledgebase) Migrating (Red Hat build of OpenTelemetry documentation) Installing (Red Hat build of OpenTelemetry documentation) Installing (Distributed tracing platform (Tempo) documentation) 1.3.3.3. Bug fixes This update introduces the following bug fix: Before this update, the Jaeger UI could fail with the 502 - Bad Gateway Timeout error. After this update, you can configure timeout in ingress annotations. ( TRACING-4238 ) 1.3.3.4. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.4. 
Release notes for Red Hat OpenShift distributed tracing platform 3.3.1 The Red Hat OpenShift distributed tracing platform 3.3.1 is a maintenance release with no changes because the Red Hat OpenShift distributed tracing platform is bundled with the Red Hat build of OpenTelemetry that is released with a bug fix. This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.4.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. The Red Hat OpenShift distributed tracing platform (Tempo) 3.3.1 is based on the open source Grafana Tempo 2.5.0. 1.4.1.1. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.4.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3.1 is based on the open source Jaeger release 1.57.0. Important Jaeger does not use FIPS validated cryptographic modules. 1.4.2.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3.1 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.4.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.3.1, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. Users must migrate to the Tempo Operator and the Red Hat build of OpenTelemetry for distributed tracing collection and storage. 1.4.2.3. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.5. Release notes for Red Hat OpenShift distributed tracing platform 3.3 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.5.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. The Red Hat OpenShift distributed tracing platform (Tempo) 3.3 is based on the open source Grafana Tempo 2.5.0. 1.5.1.1. New features and enhancements This update introduces the following enhancements: Support for securing the Jaeger UI and Jaeger APIs with the OpenShift OAuth Proxy. ( TRACING-4108 ) Support for using the service serving certificates, which are generated by OpenShift Container Platform, on ingestion APIs when multitenancy is disabled. ( TRACING-3954 ) Support for ingesting by using the OTLP/HTTP protocol when multitenancy is enabled. ( TRACING-4171 ) Support for the AWS S3 Secure Token authentication. 
( TRACING-4176 ) Support for automatically reloading certificates. ( TRACING-4185 ) Support for configuring the duration for which service names are available for querying. ( TRACING-4214 ) 1.5.1.2. Bug fixes This update introduces the following bug fixes: Before this update, storage certificate names did not support dots. With this update, storage certificate name can contain dots. ( TRACING-4348 ) Before this update, some users had to select a certificate when accessing the gateway route. With this update, there is no prompt to select a certificate. ( TRACING-4431 ) Before this update, the gateway component was not scalable. With this update, the gateway component is scalable. ( TRACING-4497 ) Before this update the Jaeger UI might fail with the 504 Gateway Time-out error when accessed via a route. With this update, users can specify route annotations for increasing timeout, such as haproxy.router.openshift.io/timeout: 3m , when querying large data sets. ( TRACING-4511 ) 1.5.1.3. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.5.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3 is based on the open source Jaeger release 1.57.0. Important Jaeger does not use FIPS validated cryptographic modules. 1.5.2.1. Support for the OpenShift Elasticsearch Operator The Red Hat OpenShift distributed tracing platform (Jaeger) 3.3 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.5.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.3, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. Users must migrate to the Tempo Operator and the Red Hat build of OpenTelemetry for distributed tracing collection and storage. 1.5.2.3. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.6. Release notes for Red Hat OpenShift distributed tracing platform 3.2.2 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.6.1. CVEs This release fixes the following CVEs: CVE-2023-2953 CVE-2024-28182 1.6.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.6.2.1. Bug fixes This update introduces the following bug fix: Before this update, secrets were perpetually generated on OpenShift Container Platform 4.16 because the operator tried to reconcile a new openshift.io/internal-registry-pull-secret-ref annotation for service accounts, causing a loop. With this update, the operator ignores this new annotation. 
( TRACING-4434 ) 1.6.2.2. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.6.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.6.3.1. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.7. Release notes for Red Hat OpenShift distributed tracing platform 3.2.1 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.7.1. CVEs This release fixes CVE-2024-25062 . 1.7.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.7.2.1. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.7.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.7.3.1. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.8. Release notes for Red Hat OpenShift distributed tracing platform 3.2 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.8.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.8.1.1. Technology Preview features This update introduces the following Technology Preview feature: Support for the Tempo monolithic deployment. Important The Tempo monolithic deployment is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.8.1.2. New features and enhancements This update introduces the following enhancements: Red Hat OpenShift distributed tracing platform (Tempo) 3.2 is based on the open source Grafana Tempo 2.4.1. Support for overriding resources per component. 1.8.1.3. Bug fixes This update introduces the following bug fixes: Before this update, the Jaeger UI only displayed services that sent traces in the last 15 minutes. 
With this update, the availability of the service and operation names can be configured by using the following field: spec.template.queryFrontend.jaegerQuery.servicesQueryDuration . ( TRACING-3139 ) Before this update, the query-frontend pod might get stopped when out-of-memory (OOM) as a result of searching a large trace. With this update, resource limits can be set to prevent this issue. ( TRACING-4009 ) 1.8.1.4. Known issues There is currently a known issue: Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.8.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.8.2.1. Support for OpenShift Elasticsearch Operator Red Hat OpenShift distributed tracing platform (Jaeger) 3.2 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.8.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.2, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. The Tempo Operator and the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. Users must adopt the OpenTelemetry and Tempo distributed tracing stack because it is the stack to be enhanced going forward. In the Red Hat OpenShift distributed tracing platform 3.2, the Jaeger agent is deprecated and planned to be removed in the following release. Red Hat will provide bug fixes and support for the Jaeger agent during the current release lifecycle, but the Jaeger agent will no longer receive enhancements and will be removed. The OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry is the preferred Operator for injecting the trace collector agent. 1.8.2.3. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Jaeger): Red Hat OpenShift distributed tracing platform (Jaeger) 3.2 is based on the open source Jaeger release 1.57.0. 1.8.2.4. Known issues There is currently a known issue: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.9. Release notes for Red Hat OpenShift distributed tracing platform 3.1.1 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.9.1. CVEs This release fixes CVE-2023-39326 . 1.9.2. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.9.2.1. Known issues There are currently known issues: Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. 
( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.9.3. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.9.3.1. Support for OpenShift Elasticsearch Operator Red Hat OpenShift distributed tracing platform (Jaeger) 3.1.1 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.9.3.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.1.1, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. In the Red Hat OpenShift distributed tracing platform 3.1.1, Tempo provided by the Tempo Operator and the OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. The OpenTelemetry and Tempo distributed tracing stack is to be adopted by all users because this will be the stack that will be enhanced going forward. 1.9.3.3. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.10. Release notes for Red Hat OpenShift distributed tracing platform 3.1 This release of the Red Hat OpenShift distributed tracing platform includes the Red Hat OpenShift distributed tracing platform (Tempo) and the deprecated Red Hat OpenShift distributed tracing platform (Jaeger). 1.10.1. Red Hat OpenShift distributed tracing platform (Tempo) The Red Hat OpenShift distributed tracing platform (Tempo) is provided through the Tempo Operator. 1.10.1.1. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Tempo): Red Hat OpenShift distributed tracing platform (Tempo) 3.1 is based on the open source Grafana Tempo 2.3.1. Support for cluster-wide proxy environments. Support for TraceQL to Gateway component. 1.10.1.2. Bug fixes This update introduces the following bug fixes for the distributed tracing platform (Tempo): Before this update, when a TempoStack instance was created with the monitorTab enabled in OpenShift Container Platform 4.15, the required tempo-redmetrics-cluster-monitoring-view ClusterRoleBinding was not created. This update resolves the issue by fixing the Operator RBAC for the monitor tab when the Operator is deployed in an arbitrary namespace. ( TRACING-3786 ) Before this update, when a TempoStack instance was created on an OpenShift Container Platform cluster with only an IPv6 networking stack, the compactor and ingestor pods ran in the CrashLoopBackOff state, resulting in multiple errors. This update provides support for IPv6 clusters.( TRACING-3226 ) 1.10.1.3. Known issues There are currently known issues: Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. 
( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.10.2. Red Hat OpenShift distributed tracing platform (Jaeger) The Red Hat OpenShift distributed tracing platform (Jaeger) is provided through the Red Hat OpenShift distributed tracing platform Operator. Important Jaeger does not use FIPS validated cryptographic modules. 1.10.2.1. Support for OpenShift Elasticsearch Operator Red Hat OpenShift distributed tracing platform (Jaeger) 3.1 is supported for use with the OpenShift Elasticsearch Operator 5.6, 5.7, and 5.8. 1.10.2.2. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.1, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. In the Red Hat OpenShift distributed tracing platform 3.1, Tempo provided by the Tempo Operator and the OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. The OpenTelemetry and Tempo distributed tracing stack is to be adopted by all users because this will be the stack that will be enhanced going forward. 1.10.2.3. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Jaeger): Red Hat OpenShift distributed tracing platform (Jaeger) 3.1 is based on the open source Jaeger release 1.53.0. 1.10.2.4. Bug fixes This update introduces the following bug fix for the distributed tracing platform (Jaeger): Before this update, the connection target URL for the jaeger-agent container in the jager-query pod was overwritten with another namespace URL in OpenShift Container Platform 4.13. This was caused by a bug in the sidecar injection code in the jaeger-operator , causing nondeterministic jaeger-agent injection. With this update, the Operator prioritizes the Jaeger instance from the same namespace as the target deployment. ( TRACING-3722 ) 1.10.2.5. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.11. Release notes for Red Hat OpenShift distributed tracing platform 3.0 1.11.1. Component versions in the Red Hat OpenShift distributed tracing platform 3.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.51.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.3.0 1.11.2. Red Hat OpenShift distributed tracing platform (Jaeger) 1.11.2.1. Deprecated functionality In the Red Hat OpenShift distributed tracing platform 3.0, Jaeger and support for Elasticsearch are deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. In the Red Hat OpenShift distributed tracing platform 3.0, Tempo provided by the Tempo Operator and the OpenTelemetry Collector provided by the Red Hat build of OpenTelemetry are the preferred Operators for distributed tracing collection and storage. 
The OpenTelemetry and Tempo distributed tracing stack is to be adopted by all users because this will be the stack that will be enhanced going forward. 1.11.2.2. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Jaeger): Support for the ARM architecture. Support for cluster-wide proxy environments. 1.11.2.3. Bug fixes This update introduces the following bug fix for the distributed tracing platform (Jaeger): Before this update, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator used images other than those listed in relatedImages . This caused the ImagePullBackOff error in disconnected network environments when launching the jaeger pod because the oc adm catalog mirror command mirrors images specified in relatedImages . This update provides support for disconnected environments when using the oc adm catalog mirror CLI command. ( TRACING-3546 ) 1.11.2.4. Known issues There are currently known issues: Currently, Apache Spark is not supported. Currently, the streaming deployment via AMQ/Kafka is not supported on the IBM Z and IBM Power architectures. 1.11.3. Red Hat OpenShift distributed tracing platform (Tempo) 1.11.3.1. New features and enhancements This update introduces the following enhancements for the distributed tracing platform (Tempo): Support for the ARM architecture. Support for span request count, duration, and error count (RED) metrics. The metrics can be visualized in the Jaeger console deployed as part of Tempo or in the web console in the Observe menu. 1.11.3.2. Bug fixes This update introduces the following bug fixes for the distributed tracing platform (Tempo): Before this update, the TempoStack CRD was not accepting a custom CA certificate despite the option to choose CA certificates. This update fixes support for the custom TLS CA option for connecting to object storage. ( TRACING-3462 ) Before this update, when mirroring the Red Hat OpenShift distributed tracing platform Operator images to a mirror registry for use in a disconnected cluster, the related Operator images for tempo , tempo-gateway , opa-openshift , and tempo-query were not mirrored. This update fixes support for disconnected environments when using the oc adm catalog mirror CLI command. ( TRACING-3523 ) Before this update, the query frontend service of the Red Hat OpenShift distributed tracing platform was using internal mTLS when gateway was not deployed. This caused endpoint failure errors. This update fixes mTLS when Gateway is not deployed. ( TRACING-3510 ) 1.11.3.3. Known issues There are currently known issues: Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) 1.12. Release notes for Red Hat OpenShift distributed tracing platform 2.9.2 1.12.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.9.2 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.12.2. CVEs This release fixes CVE-2023-46234 . 1.12.3. Red Hat OpenShift distributed tracing platform (Jaeger) 1.12.3.1. Known issues There are currently known issues: Apache Spark is not supported. 
The streaming deployment via AMQ/Kafka is unsupported on the IBM Z and IBM Power architectures. 1.12.4. Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.12.4.1. Known issues There are currently known issues: Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. Disable the mTLS in the Operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.13. Release notes for Red Hat OpenShift distributed tracing platform 2.9.1 1.13.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.9.1 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.13.2. 
CVEs This release fixes CVE-2023-44487 . 1.13.3. Red Hat OpenShift distributed tracing platform (Jaeger) 1.13.3.1. Known issues There are currently known issues: Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on the IBM Z and IBM Power architectures. 1.13.4. Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.13.4.1. Known issues There are currently known issues: Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. Disable the mTLS in the Operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.14. Release notes for Red Hat OpenShift distributed tracing platform 2.9 1.14.1. 
Component versions in the Red Hat OpenShift distributed tracing platform 2.9 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.47.0 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 2.1.1 1.14.2. Red Hat OpenShift distributed tracing platform (Jaeger) 1.14.2.1. Bug fixes Before this update, connection was refused due to a missing gRPC port on the jaeger-query deployment. This issue resulted in transport: Error while dialing: dial tcp :16685: connect: connection refused error message. With this update, the Jaeger Query gRPC port (16685) is successfully exposed on the Jaeger Query service. ( TRACING-3322 ) Before this update, the wrong port was exposed for jaeger-production-query , resulting in refused connection. With this update, the issue is fixed by exposing the Jaeger Query gRPC port (16685) on the Jaeger Query deployment. ( TRACING-2968 ) Before this update, when deploying Service Mesh on single-node OpenShift clusters in disconnected environments, the Jaeger pod frequently went into the Pending state. With this update, the issue is fixed. ( TRACING-3312 ) Before this update, the Jaeger Operator pod restarted with the default memory value due to the reason: OOMKilled error message. With this update, this issue is fixed by removing the resource limits. ( TRACING-3173 ) 1.14.2.2. Known issues There are currently known issues: Apache Spark is not supported. The streaming deployment via AMQ/Kafka is unsupported on the IBM Z and IBM Power architectures. 1.14.3. Red Hat OpenShift distributed tracing platform (Tempo) Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.14.3.1. New features and enhancements This release introduces the following enhancements for the distributed tracing platform (Tempo): Support the operator maturity Level IV, Deep Insights, which enables upgrading, monitoring, and alerting of the TempoStack instances and the Tempo Operator. Add Ingress and Route configuration for the Gateway. Support the managed and unmanaged states in the TempoStack custom resource. Expose the following additional ingestion protocols in the Distributor service: Jaeger Thrift binary, Jaeger Thrift compact, Jaeger gRPC, and Zipkin. When the Gateway is enabled, only the OpenTelemetry protocol (OTLP) gRPC is enabled. Expose the Jaeger Query gRPC endpoint on the Query Frontend service. Support multitenancy without Gateway authentication and authorization. 1.14.3.2. Bug fixes Before this update, the Tempo Operator was not compatible with disconnected environments. With this update, the Tempo Operator supports disconnected environments. ( TRACING-3145 ) Before this update, the Tempo Operator with TLS failed to start on OpenShift Container Platform. With this update, the mTLS communication is enabled between Tempo components, the Operand starts successfully, and the Jaeger UI is accessible. 
( TRACING-3091 ) Before this update, the resource limits from the Tempo Operator caused error messages such as reason: OOMKilled . With this update, the resource limits for the Tempo Operator are removed to avoid such errors. ( TRACING-3204 ) 1.14.3.3. Known issues There are currently known issues: Currently, the custom TLS CA option is not implemented for connecting to object storage. ( TRACING-3462 ) Currently, when used with the Tempo Operator, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. ( TRACING-3139 ) Currently, the distributed tracing platform (Tempo) fails on the IBM Z ( s390x ) architecture. ( TRACING-3545 ) Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. ( TRACING-3510 ) Workaround Disable mTLS as follows: Open the Tempo Operator ConfigMap for editing by running the following command: USD oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1 1 The project where the Tempo Operator is installed. Disable the mTLS in the Operator configuration by updating the YAML file: data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false Restart the Tempo Operator pod by running the following command: USD oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator Missing images for running the Tempo Operator in restricted environments. The Red Hat OpenShift distributed tracing platform (Tempo) CSV is missing references to the operand images. ( TRACING-3523 ) Workaround Add the Tempo Operator related images in the mirroring tool to mirror the images to the registry: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e 1.15. Release notes for Red Hat OpenShift distributed tracing platform 2.8 1.15.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.8 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.42 Red Hat OpenShift distributed tracing platform (Tempo) Tempo 0.1.0 1.15.2. Technology Preview features This release introduces support for the Red Hat OpenShift distributed tracing platform (Tempo) as a Technology Preview feature for Red Hat OpenShift distributed tracing platform. Important The Red Hat OpenShift distributed tracing platform (Tempo) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The feature uses version 0.1.0 of the Red Hat OpenShift distributed tracing platform (Tempo) and version 2.0.1 of the upstream distributed tracing platform (Tempo) components. You can use the distributed tracing platform (Tempo) to replace Jaeger so that you can use S3-compatible storage instead of ElasticSearch. Most users who use the distributed tracing platform (Tempo) instead of Jaeger will not notice any difference in functionality because the distributed tracing platform (Tempo) supports the same ingestion and query protocols as Jaeger and uses the same user interface. If you enable this Technology Preview feature, note the following limitations of the current implementation: The distributed tracing platform (Tempo) currently does not support disconnected installations. ( TRACING-3145 ) When you use the Jaeger user interface (UI) with the distributed tracing platform (Tempo), the Jaeger UI lists only services that have sent traces within the last 15 minutes. For services that have not sent traces within the last 15 minutes, those traces are still stored even though they are not visible in the Jaeger UI. ( TRACING-3139 ) Expanded support for the Tempo Operator is planned for future releases of the Red Hat OpenShift distributed tracing platform. Possible additional features might include support for TLS authentication, multitenancy, and multiple clusters. For more information about the Tempo Operator, see the Tempo community documentation . 1.15.3. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.16. Release notes for Red Hat OpenShift distributed tracing platform 2.7 1.16.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.7 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.39 1.16.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.17. Release notes for Red Hat OpenShift distributed tracing platform 2.6 1.17.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.6 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.38 1.17.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.18. Release notes for Red Hat OpenShift distributed tracing platform 2.5 1.18.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.5 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.36 1.18.2. New features and enhancements This release introduces support for ingesting OpenTelemetry protocol (OTLP) to the Red Hat OpenShift distributed tracing platform (Jaeger) Operator. The Operator now automatically enables the OTLP ports: Port 4317 for the OTLP gRPC protocol. Port 4318 for the OTLP HTTP protocol. This release also adds support for collecting Kubernetes resource attributes to the Red Hat build of OpenTelemetry Operator. 1.18.3. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.19. Release notes for Red Hat OpenShift distributed tracing platform 2.4 1.19.1. 
Component versions in the Red Hat OpenShift distributed tracing platform 2.4 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.34.1 1.19.2. New features and enhancements This release adds support for auto-provisioning certificates using the OpenShift Elasticsearch Operator. Certificates are self-provisioned by using the Red Hat OpenShift distributed tracing platform (Jaeger) Operator to call the OpenShift Elasticsearch Operator during installation. Important When upgrading to the Red Hat OpenShift distributed tracing platform 2.4, the Operator recreates the Elasticsearch instance, which might take five to ten minutes. Distributed tracing will be down and unavailable for that period. 1.19.3. Technology Preview features Creating the Elasticsearch instance and certificates first and then configuring the distributed tracing platform (Jaeger) to use the certificate is a Technology Preview for this release. 1.19.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.20. Release notes for Red Hat OpenShift distributed tracing platform 2.3 1.20.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.3.1 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.30.2 1.20.2. Component versions in the Red Hat OpenShift distributed tracing platform 2.3.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.30.1 1.20.3. New features and enhancements With this release, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator is now installed to the openshift-distributed-tracing namespace by default. Before this update, the default installation had been in the openshift-operators namespace. 1.20.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.21. Release notes for Red Hat OpenShift distributed tracing platform 2.2 1.21.1. Technology Preview features The unsupported OpenTelemetry Collector components included in the 2.1 release are removed. 1.21.2. Bug fixes This release of the Red Hat OpenShift distributed tracing platform addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.22. Release notes for Red Hat OpenShift distributed tracing platform 2.1 1.22.1. Component versions in the Red Hat OpenShift distributed tracing platform 2.1 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.29.1 1.22.2. Technology Preview features This release introduces a breaking change to how certificates are configured in the OpenTelemetry custom resource file. With this update, the ca_file moves under tls in the custom resource, as shown in the following examples. CA file configuration for OpenTelemetry version 0.33 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" CA file configuration for OpenTelemetry version 0.41.1 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" 1.22.3. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.23. Release notes for Red Hat OpenShift distributed tracing platform 2.0 1.23.1. 
Component versions in the Red Hat OpenShift distributed tracing platform 2.0 Operator Component Version Red Hat OpenShift distributed tracing platform (Jaeger) Jaeger 1.28.0 1.23.2. New features and enhancements This release introduces the following new features and enhancements: Rebrands Red Hat OpenShift Jaeger as the Red Hat OpenShift distributed tracing platform. Updates Red Hat OpenShift distributed tracing platform (Jaeger) Operator to Jaeger 1.28. Going forward, the Red Hat OpenShift distributed tracing platform will only support the stable Operator channel. Channels for individual releases are no longer supported. Adds support for OpenTelemetry protocol (OTLP) to the Query service. Introduces a new distributed tracing icon that appears in the OperatorHub. Includes rolling updates to the documentation to support the name change and new features. 1.23.3. Technology Preview features This release adds the Red Hat build of OpenTelemetry as a Technology Preview , which you install using the Red Hat build of OpenTelemetry Operator. Red Hat build of OpenTelemetry is based on the OpenTelemetry APIs and instrumentation. The Red Hat build of OpenTelemetry includes the OpenTelemetry Operator and Collector. You can use the Collector to receive traces in the OpenTelemetry or Jaeger protocol and send the trace data to the Red Hat OpenShift distributed tracing platform. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling. 1.23.4. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.24. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.25. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | [
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\""
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/distributed_tracing/distr-tracing-rn |
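The release notes above reference several TempoStack fields ( spec.timeout , spec.template.queryFrontend.jaegerQuery.findTracesConcurrentRequests , and spec.template.queryFrontend.jaegerQuery.servicesQueryDuration ). As a hedged sketch only, a custom resource that combines them might look roughly like the following; the API version, instance name, namespace, and values are illustrative assumptions rather than values taken from the release notes:

apiVersion: tempo.grafana.com/v1alpha1   # assumed API group/version for the Tempo Operator CRD
kind: TempoStack
metadata:
  name: sample                 # hypothetical instance name
  namespace: observability     # hypothetical namespace
spec:
  timeout: 3m                  # optional query timeout introduced in 3.4
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true
        findTracesConcurrentRequests: 2   # illustrative value for Jaeger UI query concurrency
        servicesQueryDuration: 72h        # illustrative duration for which service names stay queryable

If higher query concurrency is required, scaling up the querier instances, as noted in the tip above, is the complementary step.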
Chapter 1. New features and enhancements | Chapter 1. New features and enhancements Red Hat JBoss Core Services 2.4.57 does not include any new features or enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_release_notes/new_features_and_enhancements |
6.5. Installing Red Hat Directory Server | 6.5. Installing Red Hat Directory Server Certificate System uses Red Hat Directory Server to store system certificates and user data. You can install both Directory Server and Certificate System on the same or any other host in the network. Important FIPS mode must be enabled on the RHEL host before you install Directory Server. To ensure FIPS mode is enabled: If the returned value is 1 , FIPS mode is enabled. 6.5.1. Preparing a Directory Server Instance for Certificate System Perform the following steps to install Red Hat Directory Server: Make sure you have attached a subscription that provides Directory Server to the host. Enable the Directory Server repository: Install the Directory Server and the openldap-clients packages: Set up a Directory Server instance. Generate a DS configuration file; for example, /tmp/ds-setup.inf : Customize the DS configuration file as follows: Create the instance using the dscreate command with the setup configuration file: For a detailed procedure, see the Red Hat Directory Server Installation Guide . 6.5.2. Preparing for Configuring Certificate System In Section 7.3, "Understanding the pkispawn Utility" , if you chose to set up TLS between Certificate System and Directory Server, use the following parameters in the configuration file you pass to the pkispawn utility when installing Certificate System: Note We need to first create a basic TLS server authentication connection. At the end, during post-installation, we will return and make the connection require a client authentication certificate to be presented to Directory Server. At that time, once client authentication is set up, the pki_ds_password would no longer be relevant. The value of the pki_ds_database parameter is a name used by the pkispawn utility to create the corresponding subsystem database on the Directory Server instance. The value of the pki_ds_hostname parameter depends on the install location of the Directory Server instance. This depends on the values used in Section 6.5.1, "Preparing a Directory Server Instance for Certificate System" . When you set pki_ds_secure_connection=True , the following parameters must be set: pki_ds_secure_connection_ca_pem_file : Sets the fully-qualified path including the file name of the file which contains an exported copy of the Directory Server's CA certificate. This file must exist prior to pkispawn being able to utilize it. pki_ds_ldaps_port : Sets the value of the secure LDAPS port Directory Server is listening to. The default is 636 . | [
"sysctl crypto.fips_enabled",
"subscription-manager repos --enable=dirsrv-11.7-for-rhel-8-x86_64-rpms",
"dnf module install redhat-ds",
"dnf install openldap-clients",
"dscreate create-template /tmp/ds-setup.inf",
"sed -i -e \"s/;instance_name = .*/instance_name = localhost/g\" -e \"s/;root_password = .*/root_password = Secret.123/g\" -e \"s/;suffix = .*/suffix = dc=example,dc=com/g\" -e \"s/;create_suffix_entry = .*/create_suffix_entry = True/g\" -e \"s/;self_sign_cert = .*/self_sign_cert = False/g\" /tmp/ds-setup.inf",
"dscreate from-file /tmp/ds-setup.inf",
"pki_ds_database= back_end_database_name pki_ds_hostname= host_name pki_ds_secure_connection=True pki_ds_secure_connection_ca_pem_file= path_to_CA_or_self-signed_certificate pki_ds_password= password pki_ds_ldaps_port= port pki_ds_bind_dn=cn=Directory Manager"
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/installing_rhds |
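As a hedged sketch only: the pki_ds_* parameters described in the section above are collected in the configuration file that you pass to the pkispawn utility. The fragment below shows how they might fit together; the [CA] section name, host name, certificate path, and password are illustrative assumptions, not values from this guide:

[CA]
# All values below are placeholders for illustration.
pki_ds_hostname=ds.example.com
pki_ds_ldaps_port=636
pki_ds_secure_connection=True
pki_ds_secure_connection_ca_pem_file=/root/ds-ca-cert.pem
pki_ds_database=ca
pki_ds_bind_dn=cn=Directory Manager
pki_ds_password=Secret.123

As described above, once certificate-based client authentication to Directory Server is configured during post-installation, the pki_ds_password value is no longer relevant.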
2.4. Scalar Functions | 2.4. Scalar Functions 2.4.1. Scalar Functions JBoss Data Virtualization provides an extensive set of built-in scalar functions. See Section 2.1, "SQL Support" and Section 3.1, "Supported Types" . In addition, JBoss Data Virtualization provides the capability for user defined functions or UDFs. See Red Hat JBoss Development Guide: Server Development for adding UDFs. Once added, UDFs may be called like any other function. 2.4.2. Numeric Functions Numeric functions return numeric values (integer, long, float, double, biginteger, bigdecimal). They generally take numeric values as inputs, though some take strings. Table 2.1. Numeric Functions Function Definition Data Type Constraint + - * / Standard numeric operators x in {integer, long, float, double, biginteger, bigdecimal}, return type is same as x Note The precision and scale of non-bigdecimal arithmetic function functions results matches that of Java. The results of bigdecimal operations match Java, except for division, which uses a preferred scale of max(16, dividend.scale + divisor.precision + 1), which then has trailing zeros removed by setting the scale to max(dividend.scale, normalized scale). ABS(x) Absolute value of x See standard numeric operators above ACOS(x) Arc cosine of x x in {double, bigdecimal}, return type is double ASIN(x) Arc sine of x x in {double, bigdecimal}, return type is double ATAN(x) Arc tangent of x x in {double, bigdecimal}, return type is double ATAN2(x,y) Arc tangent of x and y x, y in {double, bigdecimal}, return type is double CEILING(x) Ceiling of x x in {double, float}, return type is double COS(x) Cosine of x x in {double, bigdecimal}, return type is double COT(x) Cotangent of x x in {double, bigdecimal}, return type is double DEGREES(x) Convert x degrees to radians x in {double, bigdecimal}, return type is double EXP(x) e^x x in {double, float}, return type is double FLOOR(x) Floor of x x in {double, float}, return type is double FORMATBIGDECIMAL(x, y) Formats x using format y x is bigdecimal, y is string, returns string FORMATBIGINTEGER(x, y) Formats x using format y x is biginteger, y is string, returns string FORMATDOUBLE(x, y) Formats x using format y x is double, y is string, returns string FORMATFLOAT(x, y) Formats x using format y x is float, y is string, returns string FORMATINTEGER(x, y) Formats x using format y x is integer, y is string, returns string FORMATLONG(x, y) Formats x using format y x is long, y is string, returns string LOG(x) Natural log of x (base e) x in {double, float}, return type is double LOG10(x) Log of x (base 10) x in {double, float}, return type is double MOD(x, y) Modulus (remainder of x / y) x in {integer, long, float, double, biginteger, bigdecimal}, return type is same as x PARSEBIGDECIMAL(x, y) Parses x using format y x, y are strings, returns bigdecimal PARSEBIGINTEGER(x, y) Parses x using format y x, y are strings, returns biginteger PARSEDOUBLE(x, y) Parses x using format y x, y are strings, returns double PARSEFLOAT(x, y) Parses x using format y x, y are strings, returns float PARSEINTEGER(x, y) Parses x using format y x, y are strings, returns integer PARSELONG(x, y) Parses x using format y x, y are strings, returns long PI() Value of Pi return is double POWER(x,y) x to the y power x in {double, bigdecimal, biginteger}, return is the same type as x RADIANS(x) Convert x radians to degrees x in {double, bigdecimal}, return type is double RAND() Returns a random number, using generator established so far in the query or initializing with 
system clock if necessary. Returns double. RAND(x) Returns a random number, using new generator seeded with x. x is integer, returns double. ROUND(x,y) Round x to y places; negative values of y indicate places to the left of the decimal point x in {integer, float, double, bigdecimal} y is integer, return is same type as x SIGN(x) 1 if x > 0, 0 if x = 0, -1 if x < 0 x in {integer, long, float, double, biginteger, bigdecimal}, return type is integer SIN(x) Sine value of x x in {double, bigdecimal}, return type is double SQRT(x) Square root of x x in {long, double, bigdecimal}, return type is double TAN(x) Tangent of x x in {double, bigdecimal}, return type is double BITAND(x, y) Bitwise AND of x and y x, y in {integer}, return type is integer BITOR(x, y) Bitwise OR of x and y x, y in {integer}, return type is integer BITXOR(x, y) Bitwise XOR of x and y x, y in {integer}, return type is integer BITNOT(x) Bitwise NOT of x x in {integer}, return type is integer 2.4.3. Parsing Numeric Data Types from Strings JBoss Data Virtualization provides a set of functions to parse formatted strings as various numeric data types: parseDouble - parses a string as a double parseFloat - parses a string as a float parseLong - parses a string as a long parseInteger - parses a string as an integer For each function, you have to provide the formatting of the string. The formatting follows the convention established by the java.text.DecimalFormat class. See examples below. Input String Function Call to Format String Output Value Output Data Type 'USD25.30' parseDouble(cost, 'USD#,##0.00;(USD#,##0.00)') 25.3 double '25%' parseFloat(percent, '#,##0%') 25 float '2,534.1' parseFloat(total, '#,##0.###;-#,##0.###') 2534.1 float '1.234E3' parseLong(amt, '0.###E0') 1234 long '1,234,567' parseInteger(total, '#,##0;-#,##0') 1234567 integer Note See http://download.oracle.com/javase/6/docs/api/java/text/DecimalFormat.html for more information. 2.4.4. Formatting Numeric Data Types as Strings JBoss Data Virtualization provides a set of functions to convert numeric data types into formatted strings: formatDouble - formats a double as a string formatFloat - formats a float as a string formatLong - formats a long as a string formatInteger - formats an integer as a string For each function, you have to provide the formatting of the string. The formatting follows the convention established by the java.text.DecimalFormat class. See examples below. Input Value Input Data Type Function Call to Format String Output String 25.3 double formatDouble(cost, 'USD#,##0.00;(USD#,##0.00)') 'USD25.30' 25 float formatFloat(percent, '#,##0%') '25%' 2534.1 float formatFloat(total, '#,##0.###;-#,##0.###') '2,534.1' 1234 long formatLong(amt, '0.###E0') '1.234E3' 1234567 integer formatInteger(total, '#,##0;-#,##0') '1,234,567' Note See http://download.oracle.com/javase/6/docs/api/java/text/DecimalFormat.html for more information. 2.4.5. String Functions String functions generally take strings as inputs and return strings as outputs. Unless specified, all of the arguments and return types in the following table are strings and all indexes are one-based. The zero index is considered to be before the start of the string. Important Non-ASCII range characters or integers used by ASCII(x) , CHR(x) , and CHAR(x) may produce different results or exceptions depending on where the function is evaluated (JBoss Data Virtualization vs. source). JBoss Data Virtualization uses Java default int to char and char to int conversions, which operates over UTF16 values. 
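Before turning to the string functions in Table 2.2 below, the following minimal query sketch shows how the parsing and formatting functions described above might be combined; the orders table and its columns are hypothetical, and the patterns reuse formats already shown in the example tables:

SELECT
    parseDouble(total_text, '#,##0.###;-#,##0.###') AS total_value,    -- '2,534.1' is parsed as 2534.1
    formatLong(amount, '0.###E0') AS amount_scientific,                -- 1234 is formatted as '1.234E3'
    formatInteger(item_count, '#,##0;-#,##0') AS item_count_label      -- 1234567 is formatted as '1,234,567'
FROM orders

The same java.text.DecimalFormat conventions apply to both the parse and the format functions, so a pattern verified with one of them can generally be reused with the others.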
Table 2.2. String Functions Function Definition DataType Constraint x || y Concatenation operator x,y in {string}, return type is string ASCII(x) Provide ASCII value of the left most character in x. The empty string will return null. return type is integer CHR(x) CHAR(x) Provide the character for ASCII value x x in {integer} CONCAT(x, y) Concatenates x and y with ANSI semantics. If x and/or y is null, returns null. x, y in {string} CONCAT2(x, y) Concatenates x and y with non-ANSI null semantics. If x and y is null, returns null. If only x or y is null, returns the other value. x, y in {string} ENDSWITH(x, y) Checks if y ends with x. If only x or y is null, returns null. x, y in {string}, returns boolean INITCAP(x) Make first letter of each word in string x capital and all others lowercase x in {string} INSERT(str1, start, length, str2) Insert string2 into string1 str1 in {string}, start in {integer}, length in {integer}, str2 in {string} LCASE(x) Lowercase of x x in {string} LEFT(x, y) Get left y characters of x x in {string}, y in {integer}, return string LENGTH(x) Length of x return type is integer LOCATE(x, y) Find position of x in y starting at beginning of y x in {string}, y in {string}, return integer LOCATE(x, y, z) Find position of x in y starting at z x in {string}, y in {string}, z in {integer}, return integer LPAD(x, y) Pad input string x with spaces on the left to the length of y x in {string}, y in {integer}, return string LPAD(x, y, z) Pad input string x on the left to the length of y using character z x in {string}, y in {string}, z in {character}, return string LTRIM(x) Left trim x of blank characters x in {string}, return string QUERYSTRING(path [, expr [AS name] ...]) Returns a properly encoded query string appended to the given path. Null valued expressions are omitted, and a null path is treated as ''. Names are optional for column reference expressions. e.g. QUERYSTRING('path', 'value' as "&x", ' & ' as y, null as z) returns 'path?%26x=value&y=%20%26%20' path, expr in {string}. name is an identifier REPEAT(str1,instances) Repeat string1 a specified number of times str1 in {string}, instances in {integer} return string RIGHT(x, y) Get right y characters of x x in {string}, y in {integer}, return string RPAD(input string x, pad length y) Pad input string x with spaces on the right to the length of y x in {string}, y in {integer}, return string RPAD(x, y, z) Pad input string x on the right to the length of y using character z x in {string}, y in {string}, z in {character}, return string RTRIM(x) Right trim x of blank characters x is string, return string SPACE(x) Repeats space x times x in {integer} SUBSTRING(x, y) SUBSTRING(x FROM y) Get substring from x, from position y to the end of x y in {integer} SUBSTRING(x, y, z) SUBSTRING(x FROM y FOR z) Get substring from x from position y with length z y, z in {integer} TO_CHARS(x, encoding) Return a CLOB from the BLOB with the given encoding. BASE64, HEX, and the built-in Java Charset names are valid values for the encoding. Note For charsets, unmappable chars will be replaced with the charset default character. Binary formats, such as BASE64, will error in their conversion to bytes if an unrecognizable character is encountered. x is a BLOB, encoding is a string, and returns a CLOB TO_BYTES(x, encoding) Return a BLOB from the CLOB with the given encoding. BASE64, HEX, and the builtin Java Charset names are valid values for the encoding. 
x in a CLOB, encoding is a string, and returns a BLOB TRANSLATE(x, y, z) Translate string x by replacing each character in y with the character in z at the same position. Note that the second arg (y) and the third arg (z) must be the same length. If they are not equal, Red Hat JBoss data Virtualization throws this exception: 'TEIID30404 Source and destination character lists must be the same length.' x in {string} TRIM([[LEADING|TRAILING|BOTH] [x] FROM] y) Trim character x from the leading, trailing, or both ends of string y. If LEADING/TRAILING/BOTH is not specified, BOTH is used by default. If no trim character x is specified, a blank space ' ' is used for x by default. x in {character}, y in {string} UCASE(x) Uppercase of x x in {string} UNESCAPE(x) Unescaped version of x. Possible escape sequences are \b - backspace, \t - tab, \n - line feed, \f - form feed, \r - carriage return. \uXXXX, where X is a hex value, can be used to specify any unicode character. \XXX, where X is an octal digit, can be used to specify an octal byte value. If any other character appears after an escape character, that character will appear in the output and the escape character will be ignored. x in {string} 2.4.5.1. Replacement Functions Use REPLACE to replace all occurrences of a given string with another: This will replace all occurrences of y with z in x. (x, y, z are strings and the return value is a string.) REGEXP_REPLACE replaces one or all occurrences of a given pattern with another string: This will replace one or more occurrences of pattern with sub in str. All arguments are strings and the return value is a string. The pattern parameter is expected to be a valid Java regular expression. The flags argument can be any concatenation of any of the valid flags with the following meanings: Table 2.3. Flags Flag Name Meaning g global Replace all occurrences, not just the first. m multiline Match over multiple lines. i case insensitive Match without case sensitivity. Here is how you return xxbye Wxx using the global and case insensitive options: 2.4.6. Date/Time Functions Date and time functions return or operate on dates, times, or timestamps. Parse and format Date/Time functions use the convention established within the java.text.SimpleDateFormat class to define the formats you can use with these functions. Table 2.4. Date and Time Functions Function Definition Datatype Constraint CURDATE() Return current date returns date CURTIME() Return current time returns time NOW() Return current timestamp (date and time) returns timestamp DAYNAME(x) Return name of day in the default locale x in {date, timestamp}, returns string DAYOFMONTH(x) Return day of month x in {date, timestamp}, returns integer DAYOFWEEK(x) Return day of week (Sunday=1, Saturday=7) x in {date, timestamp}, returns integer DAYOFYEAR(x) Return day number x in {date, timestamp}, returns integer EXTRACT(YEAR|MONTH|DAY|HOUR|MINUTE|SECOND FROM x) Return the given field value from the date value x. Produces the same result as the associated YEAR, MONTH, DAYOFMONTH, HOUR, MINUTE, SECOND functions. The SQL specification also allows for TIMEZONE_HOUR and TIMEZONE_MINUTE as extraction targets. In JBoss Data Virtualization, all date values are in the timezone of the server. 
x in {date, time, timestamp}, returns integer FORMATDATE(x, y) Format date x using format y x is date, y is string, returns string FORMATTIME(x, y) Format time x using format y x is time, y is string, returns string FORMATTIMESTAMP(x, y) Format timestamp x using format y x is timestamp, y is string, returns string FROM_UNIXTIME (unix_timestamp) Return the Unix timestamp (in seconds) as a Timestamp value Unix timestamp (in seconds) HOUR(x) Return hour (in military 24-hour format) x in {time, timestamp}, returns integer MINUTE(x) Return minute x in {time, timestamp}, returns integer MODIFYTIMEZONE (timestamp, startTimeZone, endTimeZone) Returns a timestamp based upon the incoming timestamp adjusted for the differential between the start and end time zones. i.e. if the server is in GMT-6, then modifytimezone({ts '2006-01-10 04:00:00.0'},'GMT-7', 'GMT-8') will return the timestamp {ts '2006-01-10 05:00:00.0'} as read in GMT-6. The value has been adjusted 1 hour ahead to compensate for the difference between GMT-7 and GMT-8. startTimeZone and endTimeZone are strings, returns a timestamp MODIFYTIMEZONE (timestamp, endTimeZone) Return a timestamp in the same manner as modifytimezone(timestamp, startTimeZone, endTimeZone), but will assume that the startTimeZone is the same as the server process. Timestamp is a timestamp; endTimeZone is a string, returns a timestamp MONTH(x) Return month x in {date, timestamp}, returns integer MONTHNAME(x) Return name of month in the default locale x in {date, timestamp}, returns string PARSEDATE(x, y) Parse date from x using format y x, y in {string}, returns date PARSETIME(x, y) Parse time from x using format y x, y in {string}, returns time PARSETIMESTAMP(x,y) Parse timestamp from x using format y x, y in {string}, returns timestamp QUARTER(x) Return quarter x in {date, timestamp}, returns integer SECOND(x) Return seconds x in {time, timestamp}, returns integer TIMESTAMPCREATE(date, time) Create a timestamp from a date and time date in {date}, time in {time}, returns timestamp TIMESTAMPADD(interval, count, timestamp) Add a specified interval (hour, day of week, month) to the timestamp, where intervals can be: SQL_TSI_FRAC_SECOND - fractional seconds (billionths of a second) SQL_TSI_SECOND - seconds SQL_TSI_MINUTE - minutes SQL_TSI_HOUR - hours SQL_TSI_DAY - days SQL_TSI_WEEK - weeks where Sunday is the first day SQL_TSI_MONTH - months SQL_TSI_QUARTER - quarters (3 months), where the first quarter is months 1-3 SQL_TSI_YEAR - years Note The full interval amount based upon calendar fields will be added. For example adding 1 QUARTER will move the timestamp up by three full months and not just to the start of the calendar quarter. The interval constant may be specified either as a string literal or a constant value. Interval in {string}, count in {integer}, timestamp in {date, time, timestamp} TIMESTAMPDIFF(interval, startTime, endTime) Calculates the date part intervals crossed between the two timestamps. interval is one of the same keywords as those used for TIMESTAMPADD. If (endTime > startTime), a positive number will be returned. If (endTime < startTime), a negative number will be returned. The date part difference is counted regardless of how close the timestamps are. For example, '2000-01-02 00:00:00.0' is still considered 1 hour ahead of '2000-01-01 23:59:59.999999'. Note TIMESTAMPDIFF typically returns an integer, however JBoss Data Virtualization returns a long. 
You will encounter an exception if you expect a value out of the integer range from a pushed down TIMESTAMPDIFF. Note The implementation of TIMESTAMPDIFF in versions returned values based upon the number of whole canonical interval approximations (365 days in a year, 91 days in a quarter, 30 days in a month, etc.) crossed. For example the difference in months between 2013-03-24 and 2013-04-01 was 0, but based upon the date parts crossed is 1. See the System Properties section in Red Hat JBoss Data Virtualization Administration and Configuration Guide for backwards compatibility. Interval in {string}; startTime, endTime in {timestamp}, returns a long. WEEK(x) Return week in year (1-53). see also System Properties for customization. x in {date, timestamp}, returns integer YEAR(x) Returns four-digit year. x in {date, timestamp}, returns integer UNIX_TIMESTAMP (unix_timestamp) Returns the long Unix timestamp (in seconds). unix_timestamp String in the default format of yyyy/mm/dd hh:mm:ss 2.4.7. Parsing Date Data Types from Strings JBoss Data Virtualization does not implicitly convert strings that contain dates presented in different formats, such as '19970101' and '31/1/1996' to date-related data types. You can, however, use the following functions to explicitly convert strings with a different format to the appropriate data type: parseDate parseTime parseTimestamp For each function, you have to provide the formatting of the string. The formatting follows the convention established by the java.text.SimpleDateFormat class. See examples below. Table 2.5. Functions to Parse Dates String Function Call To Parse String '19970101' parseDate(myDateString, 'yyyyMMdd') '31/1/1996' parseDate(myDateString, 'dd''/''MM''/''yyyy') '22:08:56 CST' parseTime (myTime, 'HH:mm:ss z') '03.24.2003 at 06:14:32' parseTimestamp(myTimestamp, 'MM.dd.yyyy ''at'' hh:mm:ss') Note Formatted strings will be based on your default Java locale. 2.4.8. Specifying Time Zones Time zones can be specified in several formats. Common abbreviations such as EST for "Eastern Standard Time" are allowed but discouraged, as they can be ambiguous. Unambiguous time zones are defined in the form continent or ocean/largest city. For example, America/New_York, America/Buenos_Aires, or Europe/London. Additionally, you can specify a custom time zone by GMT offset: GMT[+/-]HH:MM. For example: GMT-05:00 2.4.9. Type Conversion Functions Within your queries, you can convert between data types using the CONVERT or CAST keyword. Also see Section 3.2, "Type Conversions" . Table 2.6. Type Conversion Functions Function Definition CONVERT(x, type) Convert x to type, where type is a JBoss Data Virtualization Base Type CAST(x AS type) Convert x to type, where type is a JBoss Data Virtualization Base Type These functions are identical other than syntax; CAST is the standard SQL syntax, CONVERT is the standard JDBC/ODBC syntax. 2.4.10. Choice Functions Choice functions provide a way to select from two values based on some characteristic of one of the values. Table 2.7. 
Type Conversion Functions Function Definition Data Type Constraint COALESCE(x,y+) Returns the first non-null parameter x and all y's can be any compatible types IFNULL(x,y) If x is null, return y; else return x x, y, and the return type must be the same type but can be any type NVL(x,y) If x is null, return y; else return x x, y, and the return type must be the same type but can be any type NULLIF(param1, param2) Equivalent to case when (param1 = param2) then null else param1 param1 and param2 must be compatible comparable types Note IFNULL and NVL are aliases of each other. They are the same function. 2.4.11. Decode Functions Decode functions allow you to have JBoss Data Virtualization examine the contents of a column in a result set and alter, or decode, the value so that your application can better use the results. Table 2.8. Decode Functions Function Definition Data Type Constraint DECODESTRING(x, y [, z]) Decode column x using value pairs in y (with optional delimiter, z) and return the decoded column as a set of strings. Warning Deprecated. Use a CASE expression instead. All string DECODEINTEGER(x, y [, z]) Decode column x using value pairs in y (with optional delimiter z) and return the decoded column as a set of integers. Warning Deprecated. Use a CASE expression instead. All string parameters, return integer Within each function call, you include the following arguments: x is the input value for the decode operation. This will generally be a column name. y is the literal string that contains a delimited set of input values and output values. z is an optional parameter on these methods that allows you to specify what delimiter the string specified in y uses. For example, your application might query a table called PARTS that contains a column called IS_IN_STOCK which contains a Boolean value that you need to change into an integer for your application to process. In this case, you can use the DECODEINTEGER function to change the Boolean values to integers: SELECT DECODEINTEGER(IS_IN_STOCK, 'false, 0, true, 1') FROM PartsSupplier.PARTS; When JBoss Data Virtualization encounters the value false in the result set, it replaces the value with 0. If, instead of using integers, your application requires string values, you can use the DECODESTRING function to return the string values you need: SELECT DECODESTRING(IS_IN_STOCK, 'false, no, true, yes, null') FROM PartsSupplier.PARTS; In addition to two input/output value pairs, this sample query provides a value to use if the column does not contain any of the preceding input values. If the row in the IS_IN_STOCK column does not contain true or false, JBoss Data Virtualization inserts a null into the result set. When you use these DECODE functions, you can provide as many input/output value pairs as you would like within the string. By default, JBoss Data Virtualization expects a comma delimiter, but you can add a third parameter to the function call to specify a different delimiter: SELECT DECODESTRING(IS_IN_STOCK, 'false:no:true:yes:null',':') FROM PartsSupplier.PARTS; You can use keyword null in the DECODE string as either an input value or an output value to represent a null value. However, if you need to use the literal string null as an input or output value (which means the word null appears in the column and not a null value) you can put the word in quotes: "null". 
SELECT DECODESTRING( IS_IN_STOCK, 'null,no,"null",no,nil,no,false,no,true,yes' ) FROM PartsSupplier.PARTS; If the DECODE function does not find a matching output value in the column and you have not specified a default value, the DECODE function will return the original value JBoss Data Virtualization found in that column. 2.4.12. Lookup Function The Lookup function provides a way to speed up access to values in a lookup table (also known as a code table or reference table). The Lookup function caches all key and return column pairs specified in the function for the given table. Subsequent lookups against the same table using the same key and return columns will use the cached values. This caching accelerates response time to queries that use the lookup tables. In the following example, based on the lookup table, codeTable , the following function will find the row where keyColumn has the value, keyValue , and return the associated returnColumn value (or null if no matching key is found). codeTable must be a string literal that is the fully qualified name of the target table. returnColumn and keyColumn must also be string literals and match corresponding column names in codeTable . keyValue can be any expression that must match the datatype of the keyColumn . The return data type matches that of returnColumn . Consider the following example in which the ISOCountryCodes table is used to translate country names to ISO codes: CountryName represents a key column and CountryCode represents the ISO code of the country. A query to this lookup table would provide a CountryName , in this case 'UnitedStates', and expect a CountryCode in response. Note JBoss Data Virtualization unloads these cached lookup tables when you stop and restart JBoss Data Virtualization. Thus, it is best not to use this function for data that is subject to updates or specific to a session or user (including row based security and column masking effects). It is best used for data that does not change over time. See the Red Hat JBoss Data Virtualization Administration and Configuration Guide for more on the caching aspects of the lookup function. Important The key column must contain unique values. If the column contains duplicate values, an exception will be thrown. 2.4.13. System Functions System functions provide access to information in JBoss Data Virtualization from within a query. Function Definition Data Type Constraint COMMANDPAYLOAD([key]) If the key parameter is provided, the command payload object is cast to a java.util.Properties object and the corresponding property value for the key is returned. If the key is not specified, the return value is the command payload toString value. The command payload is set by the TeiidStatement.setPayload method on the Data Virtualization JDBC API extensions on a per-query basis. key in {string}, return value is string ENV(key) Retrieve a system environment property. Note The only key specific to the current session is 'sessionid'. The preferred mechanism for getting the session id is with the session_id() function. Note To prevent untrusted access to system properties, this function is not enabled by default. The ENV function may be enabled via the allowEnvFunction property. key in {string}, return value is string SESSION_ID() Retrieve the string form of the current session id. return value is string USER() Retrieve the name of the user executing the query. return value is string CURRENT_DATABASE() Retrieve the catalog name of the database which, for the VDB, is the VDB name. 
return value is string TEIID_SESSION_GET(name) Retrieve the session variable. A null name will return a null value. Typically you will use the a get wrapped in a CAST to convert to the desired type. name in {string}, return value is object TEIID_SESSION_SET(name, value) Set the session variable. The value for the key or null will be returned. A set has no effect on the current transaction and is not affected by commit/rollback. name in {string}, value in {object}, return value is object. NODE_ID() This retrieves the node id. This is typically the system property value for jboss.node.name which is not set for Red Hat JDV embedded. The returned value is a string. 2.4.14. XML Functions XML functions allow you to work with XML data. The examples provided for the XML functions use this table structure: The table structure is populated with this example data: Table 2.9. Sample Data CustomerID CustomerName ContactName Address City PostalCode Country 87 Wartian Herkku Pirkko Koskitalo Torikatu 38 Oulu 90110 Finland 88 Wellington Importadora Paula Parente Rua do Mercado, 12 Resende 08737-363 Brazil 89 White Clover Markets Karl Jablonski 305 - 14th Ave. S. Suite 3B Seattle 98128 USA XMLCAST Cast to or from XML: The expression or type must be XML. The returned value will be a type. This is the same functionality as XMLTABLE uses to convert values to the desired runtime type, with the exception that array type targets are not supported with XMLCAST. XMLCOMMENT This returns an XML comment. The comment is a string. The returned value is XML. XMLCONCAT This returns XML with the concatenation of the given XML types. If a value is null, it will be ignored. If all values are null, null is returned. This is how you concatenate two or more XML fragments: The content is XML. The returned value is XML. XMLELEMENT Returns an XML element with the given name and content. If the content value is of a type other than XML, it will be escaped when added to the parent element. Null content values are ignored. Whitespace in XML or the string values of the content is preserved, but no whitespace is added between content values. XMLNAMESPACES is used to provide namespace information. NO DEFAULT is equivalent to defining the default namespace to the null URI - xmlns="" . Only one DEFAULT or NO DEFAULT namespace item may be specified. The namespace prefixes xmlns and xml are reserved. If an attribute name is not supplied, the expression must be a column reference, in which case the attribute name will be the column name. Null attribute values are ignored. For example, with an xml_value of <doc/>, returns name and prefix are identifiers. uri is a string literal. content can be any type. Return value is XML. The return value is valid for use in places where a document is expected. XMLFOREST Returns an concatenation of XML elements for each content item. See XMLELEMENT for the definition of NSP. If a name is not supplied for a content item, the expression must be a column reference, in which case the element name will be a partially escaped version of the column name. name is an identifier. content can be any type. Return value is XML. You can use XMLFORREST to simplify the declaration of multiple XMLELEMENTS, XMLFOREST function allows you to process multiple columns at once: XMLAGG XMLAGG is an aggregate function, that takes a collection of XML elements and returns an aggregated XML document. In the XMLElement example, each row in the Customer table generates a row of XML if there are multiple rows matching the criteria. 
This will be valid XML, but it will not be well formed, because it lacks the root element. Use XMLAGG to correct that: XMLPARSE Returns an XML type representation of the string value expression. If DOCUMENT is specified, then the expression must have a single root element and may or may not contain an XML declaration. If WELLFORMED is specified then validation is skipped; this is especially useful for CLOB and BLOB known to already be valid. expr in {string, clob, blob and varbinary}. Return value is XML. Will return a SQLXML with contents: XMLPI Returns an XML processing instruction. name is an identifier. content is a string. Return value is XML. XMLQUERY Returns the XML result from evaluating the given xquery . See XMLELEMENT for the definition of NSP. Namespaces may also be directly declared in the XQuery prolog. The optional PASSING clause is used to provide the context item, which does not have a name, and named global variable values. If the XQuery uses a context item and none is provided, then an exception will be raised. Only one context item may be specified and should be an XML type. All non-context non-XML passing values will be converted to an appropriate XML type. The ON EMPTY clause is used to specify the result when the evaluated sequence is empty. EMPTY ON EMPTY, the default, returns an empty XML result. NULL ON EMPTY returns a null result. xquery in string. Return value is XML. Note XMLQUERY is part of the SQL/XML 2006 specification. See also XMLTABLE. XMLEXISTS Returns true if a non-empty sequence would be returned by evaluating the given xquery. Namespaces may also be directly declared in the xquery prolog. The optional PASSING clause is used to provide the context item, which does not have a name, and named global variable values. If the xquery uses a context item and none is provided, then an exception will be raised. Only one context item may be specified and should be an XML type. All non-context non-XML passing values will be converted to an appropriate XML type. Null/Unknown will be returned if the context item evaluates to null. xquery in string. Return value is boolean. XMLEXISTS is part of the SQL/XML 2006 specification. XMLSERIALIZE Returns a character type representation of the XML expression. datatype may be character (string, varchar, clob) or binary (blob, varbinary). CONTENT is the default. If DOCUMENT is specified and the XML is not a valid document or fragment, then an exception is raised. Return value matches data type. If no data type is specified, then CLOB will be assumed. The encoding enc is specified as an identifier. A character serialization may not specify an encoding. The version ver is specified as a string literal. If a particular XMLDECLARATION is not specified, then the result will have a declaration only if performing a non UTF-8/UTF-16 or non version 1.0 document serialization or the underlying XML has a declaration. If CONTENT is being serialized, then the declaration will be omitted if the value is not a document or element. The following example produces a BLOB of XML in UTF-16 including the appropriate byte order mark of FE FF and XML declaration: XMLTEXT This returns XML text. The text is a string and the returned value is XML.
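As a combined, hypothetical illustration of XMLPARSE and XMLSERIALIZE (a sketch only; the '<a>b</a>' literal is invented for this example), the following query parses a string into an XML value and immediately serializes it back to a string, and would be expected to return the original markup '<a>b</a>':
SELECT XMLSERIALIZE(DOCUMENT XMLPARSE(DOCUMENT '<a>b</a>') AS STRING) -- round trip: string to XML and back to string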
XSLTRANSFORM Applies an XSL stylesheet to the given document. doc and xsl in {string, clob, xml}. Return value is a CLOB. If either argument is null, the result is null. XPATHVALUE Applies the XPATH expression to the document and returns a string value for the first matching result. For more control over the results and XQuery, use the XMLQUERY function. Matching a non-text node will still produce a string result, which includes all descendant text nodes. doc and xpath in {string, clob, xml}. Return value is a string. When the input document utilizes namespaces, it is sometimes necessary to specify XPATH that ignores namespaces. For example, given the following XML, the following function results in 'Hello World'. 2.4.15. JSON Functions JSON functions provide functionality for working with JSON (JavaScript Object Notation) data. JSONTOXML Returns an XML document from JSON. The appropriate UTF encoding (8, 16LE. 16BE, 32LE, 32BE) will be detected for JSON BLOBS. If another encoding is used, see the TO_CHARS function (see Section 2.4.5, "String Functions" ). rootElementName is a string, json is in {clob, blob}. Return value is XML. The result is always a well-formed XML document. The mapping to XML uses the following rules: The current element name is initially the rootElementName , and becomes the object value name as the JSON structure is traversed. All element names must be valid XML 1.1 names. Invalid names are fully escaped according to the SQLXML specification. Each object or primitive value will be enclosed in an element with the current name. Unless an array value is the root, it will not be enclosed in an additional element. Null values will be represented by an empty element with the attribute xsi:nil="true" Boolean and numerical value elements will have the attribute xsi:type set to boolean and decimal respectively. Example 2.1. Sample JSON to XML for jsonToXml('person', x) JSON: XML: <?xml version="1.0" ?><person><firstName>John</firstName><children>Randy</children><children>Judy</children></person> Example 2.2. Sample JSON to XML for jsonToXml('person', x) with a root array. JSON: XML (Notice there is an extra "person" wrapping element to keep the XML well-formed): <?xml version="1.0" ?><person><person><firstName>George</firstName></person><person><firstName>Jerry</firstName></person></person> JSON: Example 2.3. Sample JSON to XML for jsonToXml('root', x) with an invalid name. XML: Example 2.4. Sample JSON to XML for jsonToXml('root', x) with an invalid name. JSONARRAY Returns a JSON array. value is any object convertable to a JSON value (see Section 2.4.16, "Conversion to JSON" ). Return value is a CLOB marked as being valid JSON. Null values will be included in the result as null literals. For example: returns JSONOBJECT Returns a JSON object. value is any object convertable to a JSON value (see Section 2.4.16, "Conversion to JSON" ). Return value is a clob marked as being valid JSON. Null values will be included in the result as null literals. If a name is not supplied and the expression is a column reference, the column name will be used otherwise exprN will be used where N is the 1-based index of the value in the JSONARRAY expression. For example: returns JSONPARSE Validates and returns a JSON result. value is blob with an appropriate JSON binary encoding (UTF-8, UTF-16, or UTF-32) or clob. wellformed is a boolean indicating that validation should be skipped. Return value is a CLOB marked as being valid JSON. A null for either input will return null. 
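Because the JSON construction functions return values that are already marked as valid JSON, they can be nested without double escaping. The following hypothetical query (a sketch; the key names and values are invented) embeds an array inside an object and would be expected to return {"library":"example","sizes":[1,2,3]}:
SELECT JSONOBJECT('example' AS library, JSONARRAY(1, 2, 3) AS sizes) -- the inner JSONARRAY result is appended directly into the object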
JSONARRAY_AGG This creates a JSON array result as a Clob, including null values. This is similar to JSONARRAY but aggregates its contents into a single array. You can also wrap the array: 2.4.16. Conversion to JSON A straightforward specification compliant conversion is used for converting values into their appropriate JSON document form. null values are included as the null literal. values parsed as JSON or returned from a JSON construction function (JSONPARSE, JSONARRAY, JSONARRAY_AGG) will be directly appended into a JSON result. boolean values are included as true/false literals. numeric values are included as their default string conversion - in some circumstances, if not-a-number or +-infinity results are allowed, invalid JSON may be obtained. string values are included in their escaped/quoted form. binary values are not implicitly convertible to JSON values and require a specific conversion prior to inclusion in JSON. all other values will be included as their string conversion in the appropriate escaped/quoted form. 2.4.17. Spatial Functions Spatial functions provide functionality for working with geospatial data. Red Hat JBoss Data Virtualization relies on the JTS Topology Suite to provide partial support for the OpenGIS Simple Features Specification For SQL Revision 1.1. Most Geometry support is limited to two dimensions due to the WKB and WKT formats. Important Geometry support is still evolving. There may be minor differences between Data Virtualization and pushdown results that will need to be further refined. Conversion Functions ST_GeomFromText Returns a geometry from a Clob in WKT format. text is a clob, srid is an optional integer. Return value is a geometry. ST_GeomFromWKB/ST_GeomFromBinary Returns a geometry from a blob in WKB format. bin is a blob, srid is an optional integer. Return value is a geometry. ST_GeomFromGeoJSON Returns a geometry from a Clob in GeoJSON format. text is a clob, srid is an optional integer. Return value is a geometry. ST_GeomFromGML Returns a geometry from a Clob in GML2 format. text is a clob, srid is an optional integer. Return value is a geometry. ST_AsText The geom is a geometry. Return value is clob in WKT format. ST_AsBinary The geom is a geometry. Return value is a blob in WKB format. ST_GeomFromEWKB Returns a geometry from a blob in EWKB format. The bin is a blob. Return value is a geometry. Only two dimensions are supported. ST_AsGeoJSON The geom is a geometry. Return value is a clob with the GeoJSON value. ST_AsGML The geom is a geometry. Return value is a clob with the GML2 value. ST_AsEWKT The geom is a geometry. Return value is a clob with the EWKT value. The EWKT value is the WKT value with the SRID prefix. ST_AsKML The geom is a geometry. Return value is a clob with the KML value. The KML value is effectively a simplified GML value and projected into SRID 4326. Operators && Returns true if the bounding boxes of geom1 and geom2 intersect. geom1 and geom2 are geometries. The returned value is a boolean. Relationship Functions ST_CONTAINS Returns true if geom1 contains geom2. geom1, geom2 are geometries. Return value is a boolean. ST_CROSSES Returns true if the geometries cross. The geom1 and geom2 are geometries. Return value is a boolean. ST_DISJOINT Returns true if the geometries are disjoint. The geom1 and geom2 are geometries. Return value is a boolean. ST_DISTANCE Returns the distance between two geometries. The geom1 and geom2 are geometries. Return value is a double.
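As a short illustration of the relationship functions (a hypothetical query; the WKT coordinates are invented and the default SRID is assumed), the following test of whether a 10x10 square contains the point (5 5) would be expected to return true:
SELECT ST_CONTAINS(ST_GeomFromText('POLYGON ((0 0, 0 10, 10 10, 10 0, 0 0))'), ST_GeomFromText('POINT (5 5)')) -- both geometries are built with ST_GeomFromText described above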
ST_EQUALS Returns true if the two geometries are spatially equal - the points and order may differ, but neither geometry lies outside of the other. The geom1 and geom2 are geometries. Return value is a boolean. ST_INTERSECTS Returns true if the geometries intersect. The geom1 and geom2 are geometries. Return value is a boolean. ST_OVERLAPS Returns true if the geometries overlap. The geom1 and geom2 are geometries. Return value is a boolean. ST_TOUCHES Returns true if the geometries touch. The geom1 and geom2 are geometries. Return value is a boolean. ST_DWithin Returns true if the geometries are within a given distance of one another. geom1 and geom2 are geometries. dist is a double. The returned value is a boolean. ST_OrderingEquals Returns true if geom1 and geom2 have the same structure and the same ordering of points. geom1 and geom2 are geometries. The returned value is a boolean. ST_Relate Test or return the intersection of geom1 and geom2. The geom1 and geom2 are geometries. Pattern is a nine character DE-9IM pattern string. The returned value is a boolean. ST_Within Returns true if geom1 is completely inside geom2. The geom1 and geom2 are geometries. The returned value is a boolean. Attributes and Tests ST_Area Returns the area of geom. The geom is a geometry. Return value is a double. ST_CoordDim Returns the coordinate dimensions of geom. The geom is a geometry. The returned value is an integer between 0 and 3. ST_Dimension This returns the dimension of geom. The geom is a geometry. The returned value is an integer between 0 and 3. ST_EndPoint This returns the endpoint of the LineString geom. Returns null if geom is not a LineString. The geom is a geometry. The returned value is a geometry. ST_ExteriorRing Returns the exterior ring or shell LineString of the Polygon geom. Returns null if geom is not a Polygon. The geom is a geometry. The returned value is a geometry. ST_GeometryN Returns the nth geometry at the given 1-based index in geom. Returns null if a geometry at the given index does not exist. Non collection types return themselves at the first index. The geom is a geometry. The index is an integer. The returned value is a geometry. ST_GeometryType Returns the type name of geom as ST_name, where the name will be LineString, Polygon, Point and so forth. The geom is a geometry. The returned value is a string. ST_HasArc Tests if the geometry has a circular string. Will currently only report false as curved geometry types are not supported. The geom is a geometry. The returned value is a geometry. ST_InteriorRingN Returns the nth interior ring LinearString geometry at the given 1-based index in geom. Returns null if a geometry at the given index does not exist or if geom is not a Polygon. The geom is a geometry. The index is an integer. The returned value is a geometry. ST_IsClosed Returns true if LineString geom is closed. Returns false if geom is not a LineString The geom is a geometry. The index is an integer. The returned value is a boolean. ST_IsEmpty Returns true if the set of points is empty. The geom is a geometry. The returned value is a boolean. ST_IsRing Returns true if the LineString geom is a ring. Returns false if geom is not a LineString. The geom is a geometry. The returned value is a boolean. ST_IsSimple Returns true if the geom is simple. The geom is a geometry. The returned value is a boolean. ST_IsValid Returns true if the geom is valid. The geom is a geometry. The returned value is a boolean. ST_Length Returns the length of a (Multi)LineString otherwise 0. 
The geom is a geometry. The returned value is a double. ST_NumGeometries Returns the number of geometries in the geom. Will return 1 if it is not a geometry collection. The geom is a geometry. The returned value is an integer. ST_NumInteriorRings Returns the number of interior rings in the Polygon geom. Returns null if geom is not a Polygon. The geom is a geometry. The returned value is an integer. ST_NunPoints Returns the number of points in a geom. The geom is a geometry. The returned value is an integer. ST_PointOnSurface Returns a point that is guaranteed to be on the surface of the geom. The geom is a geometry. The returned value is a point geometry. ST_Perimeter Returns the perimeter of the (Multi)Polygon geom. It ill return 0 if the geom is not a (multi)polygon. The geom is a geometry. The returned value is a double. ST_PointN Returns the nth Point at the given 1-based index in geom. Returns null if a point at the given index does not exist or if the geom is not a LineString. The geom is a geometry. The index is an integer. The returned value is a geometry. ST_SRID Returns the SRID for the geometry. The geom is a geometry. Return value is an integer. A 0 value rather than null will be returned for an unknown SRID on a non-null geometry. ST_SetSRID Set the SRID for the given geometry. The geom is a geometry. The srid is an integer. The returned value is a geometry. Only the SRID metadata for the geometry is modified. ST_StartPoint Returns the start Point of the LineString geom. Returns null if geom is not a LineString. The geom is a geometry. The returned value is a geometry. ST_X Returns the X ordinate value, or null if the point is empty. It throws an exception if the geometry is not a point. The geom is a geometry. The returned value is a double. ST_Y Returns the Y ordinate value, or null if the point is empty. It throws an exception if the geometry is not a point. The geom is a geometry. The returned value is a double. ST_Z Returns the Z ordinate value, or null if the point is empty. It throws an exception if the geometry is not a point. It will typically return null as three dimensions are not fully supported. The geom is a geometry. The returned value is a double. Miscellaneous Functions ST_Boundary Computes the boundary of the given geometry. The geom is a geometry. The returned value is a geometry. ST_Buffer Computes the geometry that has points within the given distance of a geom. The geom is a geometry. The distance is a double. The returned value is a geometry. ST_Centroid Computes the geometric center point of a geom. The geom is a geometry. The returned value is a geometry. ST_ConvexHull Return the smallest convex polygon that contains all of the points in a geom. The geom is a geometry. The returned value is a geometry. ST_Difference Computes the closure of the set of the points contained in geom1 that are not in geom2. The geom1 and geom2 are the geometry. The returned value is a geometry. ST_Envelope Computes the 2D bounding box of the given geometry. The geom is a geometry. The returned value is a geometry. ST_Force_2D Removes the z coordinate value if it is present. The geom is a geometry. The returned value is a geometry. ST_Intersection Computes the point set intersection of the points contained in geom1 and geom2. The geom1 and geom2 are the geometry. The returned value is a geometry. ST_Simplify Simplifies a geometry using the Douglas-Peucker algorithm, but may oversimplify to an invalid or empty geometry. The geom is a geometry. distanceTolerance is a double. 
The returned value is a geometry. ST_SimplifyPreserveTopology Simplifies a Geometry using the Douglas-Peucker algorithm. This always returns a valid geometry. The geom is a geometry. distanceTolerance is a double. The returned value is a geometry. ST_SnapToGrid Snaps all of the points in the geometry to a grid of a given size. The geom is a geometry. Size is a double. The returned value is a geometry. ST_SymDifference Return the part of geom1 that does not intersect with geom2 and vice versa. The geom1 and geom2 are the geometry. The returned value is a geometry. ST_Transform Transforms the geometry value from one coordinate system to another. The geom is a geometry. srid is an integer. Return value is a geometry. The srid value and the srid of the geometry value must exist in the SPATIAL_REF_SYS view. ST_Union Returns a geometry that represents the point set containing all of geom1 and geom2. The geom1 and geom2 are the geometry. The returned value is a geometry. Aggregate Functions ST_Extent Computes the 2D bounding box around all of the geometric values. All values should have the same srid. The geom is a geometry. The returned value is a geometry. Construction Functions ST_Point Returns the point for the given coordinates. The x and y are doubles. The returned value is a point geometry. ST_Polygon Returns the polygon for the given shell and srid. The geom is a linear ring geometry and the srid is an integer. The returned value is a polygon geometry. 2.4.18. Security Functions Security functions provide the ability to interact with the security system. HASROLE Whether the current caller has the JBoss Data Virtualization data role roleName . roleName must be a string, the return type is boolean. The two argument form is provided for backwards compatibility. roleType is a string and must be 'data'. Role names are case-sensitive and only match JBoss Data Virtualization data roles (see Section 7.1, "Data Roles" ). JAAS roles/groups names are not valid for this function, unless there is corresponding data role with the same name. 2.4.19. Miscellaneous Functions array_get Returns the object value at a given array index. array is the object type, index must be an integer, and the return type is object. One-based indexing is used. The actual array value must be a java.sql.Array or Java array type. An exception will be thrown if the array value is the wrong type of the index is out of bounds. array_length Returns the length for a given array. array is the object type, and the return type is integer. The actual array value must be a java.sql.Array or Java array type. An exception will be thrown if the array value is the wrong type. uuid Returns a universally unique identifier. The return type is string. Generates a type 4 (pseudo randomly generated) UUID using a cryptographically strong random number generator. The format is XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX where each X is a hex digit. 2.4.20. Nondeterministic Function Handling JBoss Data Virtualization categorizes functions by varying degrees of determinism. When a function is evaluated and to what extent the result can be cached are based upon its determinism level. Deterministic - the function will always return the same result for the given inputs. Deterministic functions are evaluated by the engine as soon as all input values are known, which may occur as soon as the rewrite phase. Some functions, such as the lookup function, are not truly deterministic, but is treated as such for performance. 
All functions not categorized below are considered deterministic. User Deterministic - the function will return the same result for the given inputs for the same user. This includes the hasRole and user functions. User deterministic functions are evaluated by the engine as soon as all input values are known, which may occur as soon as the rewrite phase. If a user deterministic function is evaluated during the creation of a prepared processing plan, then the resulting plan will be cached only for the user. Session Deterministic - the function will return the same result for the given inputs under the same user session. This category includes the env function. Session deterministic functions are evaluated by the engine as soon as all input values are known, which may occur as soon as the rewrite phase. If a session deterministic function is evaluated during the creation of a prepared processing plan, then the resulting plan will be cached only for the user's session. Command Deterministic - the result of function evaluation is only deterministic within the scope of the user command. This category include the curdate , curtime , now , and commandpayload functions. Command deterministic functions are delayed in evaluation until processing to ensure that even prepared plans utilizing these functions will be executed with relevant values. Command deterministic function evaluation will occur prior to pushdown; however, multiple occurrences of the same command deterministic time function are not guaranteed to evaluate to the same value. Nondeterministic - the result of function evaluation is fully nondeterministic. This category includes the rand function and UDFs marked as nondeterministic. Nondeterministic functions are delayed in evaluation until processing with a preference for pushdown. If the function is not pushed down, then it may be evaluated for every row in its execution context (for example, if the function is used in the select clause). | [
"REPLACE(x, y, z)",
"REGEXP_REPLACE(str, pattern, sub [, flags])",
"regexp_replace('Goodbye World', '[g-o].', 'x', 'gi')",
"SELECT DECODEINTEGER(IS_IN_STOCK, 'false, 0, true, 1') FROM PartsSupplier.PARTS;",
"SELECT DECODESTRING(IS_IN_STOCK, 'false, no, true, yes, null') FROM PartsSupplier.PARTS;",
"SELECT DECODESTRING(IS_IN_STOCK, 'false:no:true:yes:null',':') FROM PartsSupplier.PARTS;",
"SELECT DECODESTRING( IS_IN_STOCK, 'null,no,\"null\",no,nil,no,false,no,true,yes' ) FROM PartsSupplier.PARTS;",
"LOOKUP(codeTable, returnColumn, keyColumn, keyValue)",
"lookup('ISOCountryCodes', 'CountryCode', 'CountryName', 'UnitedStates')",
"TABLE Customer ( CustomerId integer PRIMARY KEY, CustomerName varchar(25), ContactName varchar(25) Address varchar(50), City varchar(25), PostalCode varchar(25), Country varchar(25), );",
"XMLCAST(expression AS type)",
"XMLCOMMENT(comment)",
"XMLCONCAT(content [, content]*)",
"SELECT XMLCONCAT( XMLELEMENT(\"name\", CustomerName), XMLPARSE(CONTENT ' <a> b </a>' WELLFORMED) ) FROM Customer c WHERE c.CustomerID = 87; ========================================================== <name> Wartian Herkku </name> <a> b </a>",
"XMLELEMENT([NAME] name [, <NSP>] [, <ATTR>][, content]*) ATTR:=XMLATTRIBUTES(exp [AS name] [, exp [AS name]]*) NSP:=XMLNAMESPACES((uri AS prefix | DEFAULT uri | NO DEFAULT))+",
"XMLELEMENT(NAME \"elem\", 1, '<2/>', xml_value)",
"<elem>1<2/><doc/><elem/>",
"SELECT XMLELEMENT(\"name\", CustomerName) FROM Customer c WHERE c.CustomerID = 87; ========================================================== <name>Wartian Herkku</name> \"Multiple Columns\" SELECT XMLELEMENT(\"customer\", XMLELEMENT(\"name\", c.CustomerName), XMLELEMENT(\"contact\", c.ContactName)) FROM Customer c WHERE c.CustomerID = 87; ========================================================== <customer><name>Wartian Herkku</name><contact>Pirkko Koskitalo</contact></customer> \"Columns as Attributes\" SELECT XMLELEMENT(\"customer\", XMLELEMENT(\"name\", c.CustomerName, XMLATTRIBUTES( \"contact\" as c.ContactName, \"id\" as c.CustomerID ) ) ) FROM Customer c WHERE c.CustomerID = 87; ========================================================== <customer> <name contact=\"Pirkko Koskitalo\" id=\"87\">Wartian Herkku</name> </customer>",
"XMLFOREST(content [AS name] [, <NSP>] [, content [AS name]]*)",
"SELECT XMLELEMENT(\"customer\", XMLFOREST( c.CustomerName AS \"name\", c.ContactName AS \"contact\" )) FROM Customer c WHERE c.CustomerID = 87; ========================================================== <customer><name>Wartian Herkku</name><contact>Pirkko Koskitalo</contact></customer> XMLAGG XMLAGG is an aggregate function, that takes a collection of XML elements and returns an aggregated XML document. XMLAGG(xml) From above example in XMLElement, each row in the Customer table table will generate row of XML if there are multiple rows matching the criteria. That will generate a valid XML, but it will not be well formed, because it lacks the root element. XMLAGG can used to correct that \"Example\" SELECT XMLELEMENT(\"customers\", XMLAGG( XMLELEMENT(\"customer\", XMLFOREST( c.CustomerName AS \"name\", c.ContactName AS \"contact\" ))) FROM Customer c ========================================================== <customers> <customer><name>Wartian Herkku</name><contact>Pirkko Koskitalo</contact></customer> <customer><name>Wellington Importadora</name><contact>Paula Parente</contact></customer> <customer><name>White Clover Markets</name><contact>Karl Jablonski</contact></customer> </customers>",
"XMLAGG(xml)",
"SELECT XMLELEMENT(\"customers\", XMLAGG( XMLELEMENT(\"customer\", XMLFOREST( c.CustomerName AS \"name\", c.ContactName AS \"contact\" ))) FROM Customer c ========================================================== <customers> <customer><name>Wartian Herkku</name><contact>Pirkko Koskitalo</contact></customer> <customer><name>Wellington Importadora</name><contact>Paula Parente</contact></customer> <customer><name>White Clover Markets</name><contact>Karl Jablonski</contact></customer> </customers>",
"XMLPARSE((DOCUMENT|CONTENT) expr [WELLFORMED])",
"SELECT XMLPARSE(CONTENT '<customer><name>Wartian Herkku</name><contact>Pirkko Koskitalo</contact></customer>' WELLFORMED);",
"<customer><name>Wartian Herkku</name><contact>Pirkko Koskitalo</contact></customer>",
"XMLPI([NAME] name [, content])",
"XMLQUERY([<NSP>] xquery [<PASSING>] [(NULL|EMPTY) ON EMPTY]] PASSING:=PASSING exp [AS name] [, exp [AS name]]*",
"XMLEXISTS([<NSP>] xquery [<PASSING>]] PASSING:=PASSING exp [AS name] [, exp [AS name]]*",
"XMLSERIALIZE([(DOCUMENT|CONTENT)] xml [AS datatype] [ENCODING enc] [VERSION ver] [(INCLUDING|EXCLUDING) XMLDECLARATION])",
"XMLSERIALIZE(DOCUMENT value AS BLOB ENCODING \"UTF-16\" INCLUDING XMLDECLARATION)",
"XMLTEXT(text)",
"XSLTRANSFORM(doc, xsl)",
"XPATHVALUE(doc, xpath)",
"<?xml version=\"1.0\" ?> <ns1:return xmlns:ns1=\"http://com.test.ws/exampleWebService\">Hello<x> World</x></return>",
"xpathValue(value, '/*[local-name()=\"return\"])",
"JSONTOXML(rootElementName, json)",
"{ \"firstName\" : \"John\" , \"children\" : [ \"Randy\", \"Judy\" ] }",
"<?xml version=\"1.0\" ?><person><firstName>John</firstName><children>Randy</children><children>Judy</children></person>",
"[{ \"firstName\" : \"George\" }, { \"firstName\" : \"Jerry\" }]",
"<?xml version=\"1.0\" ?><person><person><firstName>George</firstName></person><person><firstName>Jerry</firstName></person></person>",
"{\"/invalid\" : \"abc\" }",
"<?xml version=\"1.0\" ?> <root> <_u002F_invalid>abc</_u002F_invalid> </root>",
"JSONARRAY(value...)",
"jsonArray('a\"b', 1, null, false, {d'2010-11-21'})",
"[\"a\\\"b\",1,null,false,\"2010-11-21\"]",
"JSONARRAY(value [as name] ...)",
"jsonObject('a\"b' as val, 1, null as \"null\")",
"{\"val\":\"a\\\"b\",\"expr2\":1,\"null\":null}",
"JSONPARSE(value, wellformed)",
"jsonParse('\"a\"')",
"SELECT JSONARRAY_AGG(JSONOBJECT(CustomerId, CustomerName)) FROM Customer c WHERE c.CustomerID >= 88; ========================================================== [{\"CustomerId\":88, \"CustomerName\":\"Wellington Importadora\"}, {\"CustomerId\":89, \"CustomerName\":\"White Clover Markets\"}]",
"SELECT JSONOBJECT(JSONARRAY_AGG(JSONOBJECT(CustomerId as id, CustomerName as name)) as Customer) FROM Customer c WHERE c.CustomerID >= 88; ========================================================== {\"Customer\":[{\"id\":89,\"name\":\"Wellington Importadora\"},{\"id\":100,\"name\":\"White Clover Markets\"}]}",
"ST_GeomFromText(text [, srid])",
"ST_GeomFromWKB(bin [, srid])",
"ST_GeomFromGeoJson(text [, srid])",
"ST_GeomFromGML(text [, srid])",
"ST_GeomAsText(geom)",
"ST_GeomAsBinary(geom)",
"ST_GeomFromEWKB(bin)",
"ST_GeomAsGeoJSON(geom)",
"ST_GeomAsGML(geom)",
"ST_AsEWKT(geom)",
"ST_AsKML(geom)",
"geom1 && geom2",
"ST_CONTAINS(geom1, geom2)",
"ST_CROSSES(geom1, geom2)",
"ST_DISJOINT(geom1, geom2)",
"ST_DISTANCE(geom1, geom2)",
"ST_EQUALS(geom1, geom2)",
"ST_INTERSECT(geom1, geom2)",
"ST_OVERLAPS(geom1, geom2)",
"ST_TOUCHES(geom1, geom2)",
"ST_DWithin(geom1, geom2, dist)",
"ST_OrderingEquals(geom1, geom2)",
"ST_Relate(geom1, geom2, pattern)",
"ST_Within(geom1, geom2)",
"ST_Area(geom)",
"ST_CoordDim(geom)",
"ST_Dimension(geom)",
"ST_EndPoint(geom)",
"ST_ExteriorRing(geom)",
"ST_GeometryN(geom, index)",
"ST_GeometryType(geom)",
"ST_HasArc(geom)",
"ST_InteriorRingN(geom, index)",
"ST_IsClosed(geom)",
"ST_IsEmpty(geom)",
"ST_IsRing(geom)",
"ST_IsSimple(geom)",
"ST_IsValid(geom)",
"ST_Length(geom)",
"ST_NumGeometries(geom)",
"ST_NumInteriorRings(geom)",
"ST_NunPoints(geom)",
"ST_PointOnSurface(geom)",
"ST_Perimeter(geom)",
"ST_PointN(geom, index)",
"ST_SRID(geom)",
"ST_SetSRID(geom, srid)",
"ST_StartPoint(geom)",
"ST_X(geom)",
"ST_Y(geom)",
"ST_Z(geom)",
"ST_Boundary(geom)",
"ST_Buffer(geom, distance)",
"ST_Centroid(geom)",
"ST_ConvexHull(geom)",
"ST_Difference(geom1, geom2)",
"ST_Envelope(geom)",
"ST_Force_2D(geom)",
"ST_Intersection(geom1, geom2)",
"ST_Simplify(geom, distanceTolerance)",
"ST_SimplifyPreserveTopology(geom, distanceTolerance)",
"ST_SnapToGrid(geom, size)",
"ST_SymDifference(geom1, geom2)",
"ST_Tranform(geom, srid)",
"ST_Union(geom1, geom2)",
"ST_Extent(geom)",
"ST_Point(x, y)",
"ST_Polygon(geom, srid)",
"hasRole([roleType,] roleName)",
"array_get(array, index)",
"array_length(array)",
"uuid()"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-Scalar_Functions |
Chapter 3. Getting started with OpenShift Virtualization | Chapter 3. Getting started with OpenShift Virtualization You can explore the features and functionalities of OpenShift Virtualization by installing and configuring a basic environment. Note Cluster configuration procedures require cluster-admin privileges. 3.1. Planning and installing OpenShift Virtualization Plan and install OpenShift Virtualization on an OpenShift Container Platform cluster: Plan your bare metal cluster for OpenShift Virtualization . Prepare your cluster for OpenShift Virtualization . Install the OpenShift Virtualization Operator . Install the virtctl command line interface (CLI) tool . Planning and installation resources Using a CSI-enabled storage provider . Configuring local storage for virtual machines . Installing the Kubernetes NMState Operator . Specifying nodes for virtual machines . Virtctl commands . 3.2. Creating and managing virtual machines Create virtual machines (VMs) by using the web console: Quick create a VM . Customize a template to create a VM . Connect to the VMs: Connect to the serial console or VNC console of a VM by using the web console. Connect to a VM by using SSH . Connect to a Windows VM by using RDP . Manage the VMs: Stop, start, pause, and restart a VM by using the web console . Manage a VM, expose a port, or connect to the serial console by using the virtctl CLI tool . 3.3. Next steps Connect the VMs to secondary networks: Connect a VM to a Linux bridge network . Connect a VM to an SR-IOV network . Note VMs are connected to the pod network by default. You must configure a secondary network, such as Linux bridge or SR-IOV, and then add the network to the VM configuration. Monitor resources, details, status, and top consumers by using the web console . View high-level information about VM workloads by using the web console . View OpenShift Virtualization logs by using the CLI . Automate Windows VM deployments with sysprep . Live migrate VMs . Back up and restore VMs . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/virtualization/virt-getting-started |
Release notes for Red Hat build of OpenJDK 11.0.18 | Release notes for Red Hat build of OpenJDK 11.0.18 Red Hat build of OpenJDK 11 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.18/index |
4.8. Data Journaling | 4.8. Data Journaling Ordinarily, GFS writes only metadata to its journal. File contents are subsequently written to disk by the kernel's periodic sync that flushes file-system buffers. An fsync() call on a file causes the file's data to be written to disk immediately. The call returns when the disk reports that all data is safely written. Data journaling can result in a reduced fsync() time, especially for small files, because the file data is written to the journal in addition to the metadata. An fsync() returns as soon as the data is written to the journal, which can be substantially faster than the time it takes to write the file data to the main file system. Applications that rely on fsync() to sync file data may see improved performance by using data journaling. Data journaling can be enabled automatically for any GFS files created in a flagged directory (and all its subdirectories). Existing files with zero length can also have data journaling turned on or off. Using the gfs_tool command, data journaling is enabled on a directory (and all its subdirectories) or on a zero-length file by setting the inherit_jdata or jdata attribute flags to the directory or file, respectively. The directory and file attribute flags can also be cleared. Usage Setting and Clearing the inherit_jdata Flag Setting and Clearing the jdata Flag Directory Specifies the directory where the flag is set or cleared. File Specifies the zero-length file where the flag is set or cleared. Examples This example shows setting the inherit_jdata flag on a directory. All files created in the directory or any of its subdirectories will have the jdata flag assigned automatically. Any data written to the files will be journaled. This example shows setting the jdata flag on a file. The file must be zero size. Any data written to the file will be journaled. | [
"gfs_tool setflag inherit_jdata Directory gfs_tool clearflag inherit_jdata Directory",
"gfs_tool setflag jdata File gfs_tool clearflag jdata File",
"gfs_tool setflag inherit_jdata /gfs1/data/",
"gfs_tool setflag jdata /gfs1/datafile"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s1-manage-data-journal |
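The flags set in the examples above can be reversed with the clearflag subcommand shown in the usage; a brief sketch reusing the same paths:
gfs_tool clearflag inherit_jdata /gfs1/data/    # new files created in the directory no longer inherit data journaling
gfs_tool clearflag jdata /gfs1/datafile         # turns data journaling off again (only valid while the file is still zero length)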
Chapter 14. Open Container Initiative support | Chapter 14. Open Container Initiative support Container registries were originally designed to support container images in the Docker image format. To promote the use of additional runtimes apart from Docker, the Open Container Initiative (OCI) was created to provide a standardization surrounding container runtimes and image formats. Most container registries support the OCI standardization as it is based on the Docker image manifest V2, Schema 2 format. In addition to container images, a variety of artifacts have emerged that support not just individual applications, but also the Kubernetes platform as a whole. These range from Open Policy Agent (OPA) policies for security and governance to Helm charts and Operators that aid in application deployment. Quay.io is a private container registry that not only stores container images, but also supports an entire ecosystem of tooling to aid in the management of containers. Quay.io strives to be as compatible as possible with the OCI 1.1 Image and Distribution specifications , and supports common media types like Helm charts (as long as they are pushed with a version of Helm that supports OCI) and a variety of arbitrary media types within the manifest or layer components of container images. Support for OCI media types differs from previous iterations of Quay.io, when the registry was more strict about accepted media types. Because Quay.io now works with a wider array of media types, including those that were previously outside the scope of its support, it is now more versatile, accommodating not only standard container image formats but also emerging or unconventional types. In addition to its expanded support for novel media types, Quay.io ensures compatibility with Docker images, including V2_2 and V2_1 formats. This compatibility with Docker V2_2 and V2_1 images demonstrates Quay.io's commitment to providing a seamless experience for Docker users. Moreover, Quay.io continues to extend its support for Docker V1 pulls, catering to users who might still rely on this earlier version of Docker images. Support for OCI artifacts is enabled by default. The following examples show you how to use some media types, which can be used as examples for using other OCI media types. 14.1. Helm and OCI prerequisites Helm simplifies how applications are packaged and deployed. Helm uses a packaging format called Charts which contain the Kubernetes resources representing an application. Quay.io supports Helm charts so long as they are a version supported by OCI. Use the following procedures to pre-configure your system to use Helm and other OCI media types. The most recent version of Helm can be downloaded from the Helm releases page. 14.2. Using Helm charts Use the following example to download and push an etherpad chart from the Red Hat Community of Practice (CoP) repository. Prerequisites You have logged into Quay.io. 
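Because OCI support depends on the Helm client version, it can be worth confirming the client before starting the procedure below; a small sketch (Helm 3.8.0 and later enable OCI registry support by default, while older 3.x clients typically require the experimental flag):
helm version --short             # confirm the installed client version
export HELM_EXPERIMENTAL_OCI=1   # generally only needed for Helm clients older than 3.8.0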
Procedure Add a chart repository by entering the following command: USD helm repo add redhat-cop https://redhat-cop.github.io/helm-charts Enter the following command to update the information of available charts locally from the chart repository: USD helm repo update Enter the following command to pull a chart from a repository: USD helm pull redhat-cop/etherpad --version=0.0.4 --untar Enter the following command to package the chart into a chart archive: USD helm package ./etherpad Example output Successfully packaged chart and saved it to: /home/user/linux-amd64/etherpad-0.0.4.tgz Log in to Quay.io using helm registry login : USD helm registry login quay.io Push the chart to your repository using the helm push command: helm push etherpad-0.0.4.tgz oci://quay.io/<organization_name>/helm Example output: Pushed: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:a6667ff2a0e2bd7aa4813db9ac854b5124ff1c458d170b70c2d2375325f2451b Ensure that the push worked by deleting the local copy, and then pulling the chart from the repository: USD rm -rf etherpad-0.0.4.tgz USD helm pull oci://quay.io/<organization_name>/helm/etherpad --version 0.0.4 Example output: Pulled: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:4f627399685880daf30cf77b6026dc129034d68c7676c7e07020b70cf7130902 14.3. Cosign OCI support Cosign is a tool that can be used to sign and verify container images. It uses the ECDSA-P256 signature algorithm and Red Hat's Simple Signing payload format to create public keys that are stored in PKIX files. Private keys are stored as encrypted PEM files. Cosign currently supports the following: Hardware and KMS Signing Bring-your-own PKI OIDC PKI Built-in binary transparency and timestamping service Use the following procedure to directly install Cosign. Prerequisites You have installed Go version 1.16 or later. Procedure Enter the following go command to directly install Cosign: USD go install github.com/sigstore/cosign/cmd/[email protected] Example output go: downloading github.com/sigstore/cosign v1.0.0 go: downloading github.com/peterbourgon/ff/v3 v3.1.0 Generate a key-value pair for Cosign by entering the following command: USD cosign generate-key-pair Example output Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub Sign the key-value pair by entering the following command: USD cosign sign -key cosign.key quay.io/user1/busybox:test Example output Enter password for private key: Pushing signature to: quay-server.example.com/user1/busybox:sha256-ff13b8f6f289b92ec2913fa57c5dd0a874c3a7f8f149aabee50e3d01546473e3.sig If you experience the error: signing quay-server.example.com/user1/busybox:test: getting remote image: GET https://quay-server.example.com/v2/user1/busybox/manifests/test : UNAUTHORIZED: access to the requested resource is not authorized; map[] error, which occurs because Cosign relies on ~./docker/config.json for authorization, you might need to execute the following command: USD podman login --authfile ~/.docker/config.json quay.io Example output Username: Password: Login Succeeded! Enter the following command to see the updated authorization configuration: USD cat ~/.docker/config.json { "auths": { "quay-server.example.com": { "auth": "cXVheWFkbWluOnBhc3N3b3Jk" } } 14.4. Installing and using Cosign Use the following procedure to directly install Cosign. Prerequisites You have installed Go version 1.16 or later. 
You have set FEATURE_GENERAL_OCI_SUPPORT to true in your config.yaml file. Procedure Enter the following go command to directly install Cosign: USD go install github.com/sigstore/cosign/cmd/[email protected] Example output go: downloading github.com/sigstore/cosign v1.0.0 go: downloading github.com/peterbourgon/ff/v3 v3.1.0 Generate a key-value pair for Cosign by entering the following command: USD cosign generate-key-pair Example output Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub Sign the key-value pair by entering the following command: USD cosign sign -key cosign.key quay.io/user1/busybox:test Example output Enter password for private key: Pushing signature to: quay-server.example.com/user1/busybox:sha256-ff13b8f6f289b92ec2913fa57c5dd0a874c3a7f8f149aabee50e3d01546473e3.sig If you experience the error: signing quay-server.example.com/user1/busybox:test: getting remote image: GET https://quay-server.example.com/v2/user1/busybox/manifests/test : UNAUTHORIZED: access to the requested resource is not authorized; map[] error, which occurs because Cosign relies on ~./docker/config.json for authorization, you might need to execute the following command: USD podman login --authfile ~/.docker/config.json quay.io Example output Username: Password: Login Succeeded! Enter the following command to see the updated authorization configuration: USD cat ~/.docker/config.json { "auths": { "quay-server.example.com": { "auth": "cXVheWFkbWluOnBhc3N3b3Jk" } } | [
"helm repo add redhat-cop https://redhat-cop.github.io/helm-charts",
"helm repo update",
"helm pull redhat-cop/etherpad --version=0.0.4 --untar",
"helm package ./etherpad",
"Successfully packaged chart and saved it to: /home/user/linux-amd64/etherpad-0.0.4.tgz",
"helm registry login quay.io",
"helm push etherpad-0.0.4.tgz oci://quay.io/<organization_name>/helm",
"Pushed: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:a6667ff2a0e2bd7aa4813db9ac854b5124ff1c458d170b70c2d2375325f2451b",
"rm -rf etherpad-0.0.4.tgz",
"helm pull oci://quay.io/<organization_name>/helm/etherpad --version 0.0.4",
"Pulled: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:4f627399685880daf30cf77b6026dc129034d68c7676c7e07020b70cf7130902",
"go install github.com/sigstore/cosign/cmd/[email protected]",
"go: downloading github.com/sigstore/cosign v1.0.0 go: downloading github.com/peterbourgon/ff/v3 v3.1.0",
"cosign generate-key-pair",
"Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub",
"cosign sign -key cosign.key quay.io/user1/busybox:test",
"Enter password for private key: Pushing signature to: quay-server.example.com/user1/busybox:sha256-ff13b8f6f289b92ec2913fa57c5dd0a874c3a7f8f149aabee50e3d01546473e3.sig",
"podman login --authfile ~/.docker/config.json quay.io",
"Username: Password: Login Succeeded!",
"cat ~/.docker/config.json { \"auths\": { \"quay-server.example.com\": { \"auth\": \"cXVheWFkbWluOnBhc3N3b3Jk\" } }",
"go install github.com/sigstore/cosign/cmd/[email protected]",
"go: downloading github.com/sigstore/cosign v1.0.0 go: downloading github.com/peterbourgon/ff/v3 v3.1.0",
"cosign generate-key-pair",
"Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub",
"cosign sign -key cosign.key quay.io/user1/busybox:test",
"Enter password for private key: Pushing signature to: quay-server.example.com/user1/busybox:sha256-ff13b8f6f289b92ec2913fa57c5dd0a874c3a7f8f149aabee50e3d01546473e3.sig",
"podman login --authfile ~/.docker/config.json quay.io",
"Username: Password: Login Succeeded!",
"cat ~/.docker/config.json { \"auths\": { \"quay-server.example.com\": { \"auth\": \"cXVheWFkbWluOnBhc3N3b3Jk\" } }"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3/html/about_quay_io/oci-intro |
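As a follow-up to the signing steps in this chapter, a signature created with cosign sign can be checked against the public key produced by cosign generate-key-pair; a minimal sketch using the same image reference as above:
cosign verify -key cosign.pub quay.io/user1/busybox:test   # prints the verified signature payload on success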
Chapter 2. Eclipse Temurin features | Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes included in the latest OpenJDK 8 release of Eclipse Temurin, see OpenJDK 8u362 Released . New features and enhancements Review the following release notes to understand new features and feature enhancements included with the Eclipse Temurin 8.0.362 release: Improved CORBA communications By default, the CORBA implementation in OpenJDK 8.0.362 refuses to deserialize any objects that do not contain the IOR: prefix. If you want to revert to the previous behavior, you can set the new com.sun.CORBA.ORBAllowDeserializeObject property to true . See JDK-8285021 (JDK Bug System) . Enhanced BMP bounds By default, OpenJDK 8.0.362 disables loading a linked International Color Consortium (ICC) profile in a BMP image. You can enable this functionality by setting the new sun.imageio.bmp.enabledLinkedProfiles property to true . This property replaces the old sun.imageio.plugins.bmp.disableLinkedProfiles property. See JDK-8295687 (JDK Bug System) . Improved banking of sounds Previously, the SoundbankReader implementation, com.sun.media.sound.JARSoundbankReader , downloaded a JAR soundbank from a URL. For OpenJDK 8.0.362, this behavior is now disabled by default. To re-enable the behavior, set the new system property jdk.sound.jarsoundbank to true . See JDK-8293742 (JDK Bug System) . OpenJDK support for Microsoft Windows 11 OpenJDK 8.0.362 can now recognize the Microsoft Windows 11 operating system, and can set the os.name property to Windows 11 . See JDK-8274840 (JDK Bug System). SHA-1 Signed JARs With the OpenJDK 8.0.362 release, JARs signed with SHA-1 algorithms are restricted by default and treated as if they were unsigned. These restrictions apply to the following algorithms: Algorithms used to digest, sign, and optionally timestamp the JAR. Signature and digest algorithms of the certificates in the certificate chain of the code signer and the Timestamp Authority, and any Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) responses that are used to verify if those certificates have been revoked. Additionally, the restrictions apply to signed Java Cryptography Extension (JCE) providers. To reduce the compatibility risk for JARs that have been previously timestamped, the restriction does not apply to any JAR signed with SHA-1 algorithms and timestamped prior to January 01, 2019 . This exception might be removed in a future OpenJDK release. To determine if your JAR file is impacted by the restriction, you can issue the following command in your CLI: From the output of the command, search for instances of SHA1 , SHA-1 , or disabled . Additionally, search for any warning messages that indicate that the JAR will be treated as unsigned. For example: Consider replacing or re-signing any JARs affected by the new restrictions with stronger algorithms. If your JAR file is impacted by this restriction, you can remove the algorithm and re-sign the file with a stronger algorithm, such as SHA-256 . If you want to remove the restriction on SHA-1 signed JARs for OpenJDK 8.0.362, and you accept the security risks, you can complete the following actions: Modify the java.security configuration file. Alternatively, you can preserve this file and instead create another file with the required configurations. 
Remove the SHA1 usage SignedJAR & denyAfter 2019-01-01 entry from the jdk.certpath.disabledAlgorithms security property. Remove the SHA1 denyAfter 2019-01-01 entry from the jdk.jar.disabledAlgorithms security property. Note The value of jdk.certpath.disabledAlgorithms in the java.security file might be overridden by the system security policy on RHEL 8 and 9. The values used by the system security policy can be seen in the file /etc/crypto-policies/back-ends/java.config and disabled by either setting security.useSystemPropertiesFile to false in the java.security file or passing -Djava.security.disableSystemPropertiesFile=true to the JVM. These values are not modified by this release, so the values remain the same for releases of OpenJDK. For an example of configuring the java.security file, see Overriding java.security properties for JBoss EAP for OpenShift (Red Hat Customer Portal). See JDK-8269039 (JDK Bug System). Revised on 2024-05-10 09:06:58 UTC
"jarsigner -verify -verbose -certs",
"Signed by \"CN=\"Signer\"\" Digest algorithm: SHA-1 (disabled) Signature algorithm: SHA1withRSA (disabled), 2048-bit key WARNING: The jar will be treated as unsigned, because it is signed with a weak algorithm that is now disabled by the security property: jdk.jar.disabledAlgorithms=MD2, MD5, RSA keySize < 1024, DSA keySize < 1024, SHA1 denyAfter 2019-01-01"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.362_release_notes/openjdk-temurin-features-8.0.362_openjdk |
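One way to apply the java.security changes described in this chapter without editing the JDK's own configuration file is to keep the overrides in a separate properties file and point the JVM at it; a hedged sketch in which the file path, the application JAR, and the trimmed property value are illustrative only:
cat > /path/to/custom.java.security <<'EOF'
# jdk.jar.disabledAlgorithms with the SHA1 entry removed
jdk.jar.disabledAlgorithms=MD2, MD5, RSA keySize < 1024, DSA keySize < 1024
EOF
java -Djava.security.properties=/path/to/custom.java.security -jar application.jar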
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue . Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_oracle_cloud_data_into_cost_management/proc-providing-feedback-on-redhat-documentation |
Chapter 2. Configuring an Azure account | Chapter 2. Configuring an Azure account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account to meet installation requirements. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 2.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 44 20 per region A default cluster requires 44 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap and control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the compute machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 44 vCPUs. The bootstrap node VM, which uses 8 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. OS Disk 7 Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. 
The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. Additional resources Optimizing storage . 2.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. 2.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. 
For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click : Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click : Review + create and then click Create . 2.4. Recording the subscription and tenant IDs The installation program requires the subscription and tenant IDs that are associated with your Azure account. You can use the Azure CLI to gather this information. Prerequisites You have installed or updated the Azure CLI . Procedure Log in to the Azure CLI by running the following command: USD az login Ensure that you are using the right subscription: View a list of available subscriptions by running the following command: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 1", "state": "Enabled", "tenantId": "6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } }, { "cloudName": "AzureCloud", "id": "9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": false, "name": "Subscription Name 2", "state": "Enabled", "tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } ] View the details of the active account, and confirm that this is the subscription you want to use, by running the following command: USD az account show Example output { "environmentName": "AzureCloud", "id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 1", "state": "Enabled", "tenantId": "6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } If you are not using the right subscription: Change the active subscription by running the following command: USD az account set -s <subscription_id> Verify that you are using the subscription you need by running the following command: USD az account show Example output { "environmentName": "AzureCloud", "id": "9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 2", "state": "Enabled", "tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } Record the id and tenantId parameter values from the output. You require these values to install an OpenShift Container Platform cluster. 2.5. Supported identities to access Azure resources An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. As such, you need one of the following types of identities to complete the installation: A service principal A system-assigned managed identity A user-assigned managed identity 2.5.1. Required Azure roles An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. Before you create the identity, verify that your environment meets the following requirements: The Azure account that you use to create the identity is assigned the User Access Administrator and Contributor roles. These roles are required when: Creating a service principal or user-assigned managed identity. Enabling a system-assigned managed identity on a virtual machine. 
If you are going to use a service principal to complete the installation, verify that the Azure account that you use to create the identity is assigned the microsoft.directory/servicePrincipals/createAsOwner permission in Microsoft Entra ID. To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 2.5.2. Required Azure permissions for installer-provisioned infrastructure The installation program requires access to an Azure service principal or managed identity with the necessary permissions to deploy the cluster and to maintain its daily operation. These permissions must be granted to the Azure subscription that is associated with the identity. The following options are available to you: You can assign the identity the Contributor and User Access Administrator roles. Assigning these roles is the quickest way to grant all of the required permissions. For more information about assigning roles, see the Azure documentation for managing access to Azure resources using the Azure portal . If your organization's security policies require a more restrictive set of permissions, you can create a custom role with the necessary permissions. The following permissions are required for creating an OpenShift Container Platform cluster on Microsoft Azure. Example 2.1. Required permissions for creating authorization resources Microsoft.Authorization/policies/audit/action Microsoft.Authorization/policies/auditIfNotExists/action Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/write Example 2.2. Required permissions for creating compute resources Microsoft.Compute/availabilitySets/read Microsoft.Compute/availabilitySets/write Microsoft.Compute/disks/beginGetAccess/action Microsoft.Compute/disks/delete Microsoft.Compute/disks/read Microsoft.Compute/disks/write Microsoft.Compute/galleries/images/read Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/galleries/images/versions/write Microsoft.Compute/galleries/images/write Microsoft.Compute/galleries/read Microsoft.Compute/galleries/write Microsoft.Compute/snapshots/read Microsoft.Compute/snapshots/write Microsoft.Compute/snapshots/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/powerOff/action Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Example 2.3. Required permissions for creating identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Example 2.4. 
Required permissions for creating network resources Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/backendAddressPools/read Microsoft.Network/loadBalancers/backendAddressPools/write Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/securityRules/delete Microsoft.Network/networkSecurityGroups/securityRules/read Microsoft.Network/networkSecurityGroups/securityRules/write Microsoft.Network/networkSecurityGroups/write Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/SOA/read Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/write Microsoft.Network/privateDnsZones/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/virtualNetworks/join/action Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Network/virtualNetworks/write Note The following permissions are not required to create the private OpenShift Container Platform cluster on Azure. Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Example 2.5. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/InProgress/action Microsoft.Resourcehealth/healthevent/Pending/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 2.6. Required permissions for creating a resource group Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourcegroups/write Example 2.7. Required permissions for creating resource tags Microsoft.Resources/tags/write Example 2.8. Required permissions for creating storage resources Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Example 2.9. 
Optional permissions for creating a private storage endpoint for the image registry Microsoft.Network/privateEndpoints/write Microsoft.Network/privateEndpoints/read Microsoft.Network/privateEndpoints/privateDnsZoneGroups/write Microsoft.Network/privateEndpoints/privateDnsZoneGroups/read Microsoft.Network/privateDnsZones/join/action Microsoft.Storage/storageAccounts/PrivateEndpointConnectionsApproval/action Example 2.10. Optional permissions for creating marketplace virtual machine resources Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/read Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/write Example 2.11. Optional permissions for creating compute resources Microsoft.Compute/availabilitySets/delete Microsoft.Compute/images/read Microsoft.Compute/images/write Microsoft.Compute/images/delete Example 2.12. Optional permissions for enabling user-managed encryption Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/diskEncryptionSets/write Microsoft.Compute/diskEncryptionSets/delete Microsoft.KeyVault/vaults/read Microsoft.KeyVault/vaults/write Microsoft.KeyVault/vaults/delete Microsoft.KeyVault/vaults/deploy/action Microsoft.KeyVault/vaults/keys/read Microsoft.KeyVault/vaults/keys/write Microsoft.Features/providers/features/register/action Example 2.13. Optional permissions for installing a cluster using the NatGateway outbound type Microsoft.Network/natGateways/read Microsoft.Network/natGateways/write Example 2.14. Optional permissions for installing a private cluster with Azure Network Address Translation (NAT) Microsoft.Network/natGateways/join/action Microsoft.Network/natGateways/read Microsoft.Network/natGateways/write Example 2.15. Optional permissions for installing a private cluster with Azure firewall Microsoft.Network/azureFirewalls/applicationRuleCollections/write Microsoft.Network/azureFirewalls/read Microsoft.Network/azureFirewalls/write Microsoft.Network/routeTables/join/action Microsoft.Network/routeTables/read Microsoft.Network/routeTables/routes/read Microsoft.Network/routeTables/routes/write Microsoft.Network/routeTables/write Microsoft.Network/virtualNetworks/peer/action Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write Example 2.16. Optional permission for running gather bootstrap Microsoft.Compute/virtualMachines/retrieveBootDiagnosticsData/action The following permissions are required for deleting an OpenShift Container Platform cluster on Microsoft Azure. You can use the same permissions to delete a private OpenShift Container Platform cluster on Azure. Example 2.17. Required permissions for deleting authorization resources Microsoft.Authorization/roleAssignments/delete Example 2.18. Required permissions for deleting compute resources Microsoft.Compute/disks/delete Microsoft.Compute/galleries/delete Microsoft.Compute/galleries/images/delete Microsoft.Compute/galleries/images/versions/delete Microsoft.Compute/virtualMachines/delete Example 2.19. Required permissions for deleting identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/delete Example 2.20. 
Required permissions for deleting network resources Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Microsoft.Network/loadBalancers/delete Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkSecurityGroups/delete Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/delete Microsoft.Network/privateDnsZones/virtualNetworkLinks/delete Microsoft.Network/publicIPAddresses/delete Microsoft.Network/virtualNetworks/delete Note The following permissions are not required to delete a private OpenShift Container Platform cluster on Azure. Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Example 2.21. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 2.22. Required permissions for deleting a resource group Microsoft.Resources/subscriptions/resourcegroups/delete Example 2.23. Required permissions for deleting storage resources Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Note To install OpenShift Container Platform on Azure, you must scope the permissions to your subscription. Later, you can re-scope these permissions to the installer created resource group. If the public DNS zone is present in a different resource group, then the network DNS zone related permissions must always be applied to your subscription. By default, the OpenShift Container Platform installation program assigns the Azure identity the Contributor role. You can scope all the permissions to your subscription when deleting an OpenShift Container Platform cluster. 2.5.3. Using Azure managed identities The installation program requires an Azure identity to complete the installation. You can use either a system-assigned or user-assigned managed identity. If you are unable to use a managed identity, you can use a service principal. Procedure If you are using a system-assigned managed identity, enable it on the virtual machine that you will run the installation program from. If you are using a user-assigned managed identity: Assign it to the virtual machine that you will run the installation program from. Record its client ID. You require this value when installing the cluster. For more information about viewing the details of a user-assigned managed identity, see the Microsoft Azure documentation for listing user-assigned managed identities . Verify that the required permissions are assigned to the managed identity. 2.5.4. Creating a service principal The installation program requires an Azure identity to complete the installation. You can use a service principal. If you are unable to use a service principal, you can use a managed identity. Prerequisites You have installed or updated the Azure CLI . You have an Azure subscription ID. If you are not going to assign the Contributor and User Administrator Access roles to the service principal, you have created a custom role with the required Azure permissions. 
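If you take the custom-role route, one common pattern is to describe the role in JSON and create it with the Azure CLI before creating the service principal; a hypothetical sketch in which the role name and the two sample actions are placeholders — a real definition should carry the full permission list documented above:
cat > installer-role.json <<'EOF'
{
  "Name": "openshift-installer-custom",
  "Description": "Custom role for installing OpenShift Container Platform",
  "Actions": [
    "Microsoft.Compute/virtualMachines/write",
    "Microsoft.Network/virtualNetworks/write"
  ],
  "AssignableScopes": [ "/subscriptions/<subscription_id>" ]
}
EOF
az role definition create --role-definition @installer-role.json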
Procedure Create the service principal for your account by running the following command: USD az ad sp create-for-rbac --role <role_name> \ 1 --name <service_principal> \ 2 --scopes /subscriptions/<subscription_id> 3 1 Defines the role name. You can use the Contributor role, or you can specify a custom role which contains the necessary permissions. 2 Defines the service principal name. 3 Specifies the subscription ID. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "displayName": <service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" } Record the values of the appId and password parameters from the output. You require these values when installing the cluster. If you applied the Contributor role to your service principal, assign the User Administrator Access role by running the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 --scope /subscriptions/<subscription_id> 2 1 Specify the appId parameter value for your service principal. 2 Specifies the subscription ID. Additional resources About the Cloud Credential Operator 2.6. Supported Azure Marketplace regions Installing a cluster using the Azure Marketplace image is available to customers who purchase the offer in North America and EMEA. While the offer must be purchased in North America or EMEA, you can deploy the cluster to any of the Azure public partitions that OpenShift Container Platform supports. Note Deploying a cluster using the Azure Marketplace image is not supported for the Azure Government regions. 2.7. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. 
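Before comparing against the curated region lists that follow, you can preview which regions your own subscription can currently reach with the Azure CLI; a brief sketch:
az account list-locations --query "[].name" --output tsv | sort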
Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) israelcentral (Israel Central) italynorth (Italy North) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) mexicocentral (Mexico Central) newzealandnorth (New Zealand North) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) polandcentral (Poland Central) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) spaincentral (Spain Central) swedencentral (Sweden Central) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) westus3 (West US 3) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 2.8. steps Install an OpenShift Container Platform cluster on Azure. You can install a customized cluster or quickly install a cluster with default options. | [
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }, { \"cloudName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": false, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id>",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 --scope /subscriptions/<subscription_id> 2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_azure/installing-azure-account |
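Relatedly, the managed-identity options described in the identity section of this chapter are typically wired up with the Azure CLI; a hedged sketch in which the resource group, virtual machine, and identity names are placeholders:
az vm identity assign --resource-group example-rg --name installer-vm                              # enable a system-assigned managed identity on the VM that runs the installation program
az identity show --resource-group example-rg --name example-identity --query clientId --output tsv # look up the client ID of a user-assigned managed identity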
Using content navigator | Using content navigator Red Hat Ansible Automation Platform 2.5 Develop content that is compatible with Ansible Automation Platform Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_content_navigator/index |
Chapter 4. Using AMQ Management Console | Chapter 4. Using AMQ Management Console AMQ Management Console is a web console included in the AMQ Broker installation that enables you to use a web browser to manage AMQ Broker. AMQ Management Console is based on hawtio . 4.1. Overview AMQ Broker is a full-featured, message-oriented middleware broker. It offers specialized queueing behaviors, message persistence, and manageability. It supports multiple protocols and client languages, freeing you to use many of your application assets. AMQ Broker's key features allow you to: monitor your AMQ brokers and clients view the topology view network health at a glance manage AMQ brokers using: AMQ Management Console Command-line Interface (CLI) Management API The supported web browsers for AMQ Management Console are Firefox and Chrome. For more information on supported browser versions, see AMQ 7 Supported Configurations . 4.2. Configuring local and remote access to AMQ Management Console The procedure in this section shows how to configure local and remote access to AMQ Management Console. Remote access to the console can take one of two forms: Within a console session on a local broker, you use the Connect tab to connect to another, remote broker From a remote host, you connect to the console for the local broker, using an externally-reachable IP address for the local broker Prerequisites You must upgrade to at least AMQ Broker 7.1.0. As part of this upgrade, an access-management configuration file named jolokia-access.xml is added to the broker instance. For more information about upgrading, see Upgrading a Broker instance from 7.0.x to 7.1.0 . Procedure Open the <broker_instance_dir> /etc/bootstrap.xml file. Within the web element, observe that the web port is bound only to localhost by default. <web path="web"> <binding uri="http://localhost:8161"> <app url="redhat-branding" war="redhat-branding.war"/> <app url="artemis-plugin" war="artemis-plugin.war"/> <app url="dispatch-hawtio-console" war="dispatch-hawtio-console.war"/> <app url="console" war="console.war"/> </binding> </web> To enable connection to the console for the local broker from a remote host, change the web port binding to a network-reachable interface. For example: <web path="web"> <binding uri="http://0.0.0.0:8161"> In the preceding example, by specifying 0.0.0.0 , you bind the web port to all interfaces on the local broker. Save the bootstrap.xml file. Open the <broker_instance_dir> /etc/jolokia-access.xml file. Within the <cors> (that is, Cross-Origin Resource Sharing ) element, add an allow-origin entry for each HTTP origin request header that you want to allow to access the console. For example: <cors> <allow-origin>*://localhost*</allow-origin> <allow-origin>*://192.168.0.49*</allow-origin> <allow-origin>*://192.168.0.51*</allow-origin> <!-- Check for the proper origin on the server side, too --> <strict-checking/> </cors> In the preceding configuration, you specify that the following connections are allowed: Connection from the local host (that is, the host machine for your local broker instance) to the console. The first asterisk ( * ) wildcard character allows either the http or https scheme to be specified in the connection request, based on whether you have configured the console for secure connections. The second asterisk wildcard character allows any port on the host machine to be used for the connection. 
Connection from a remote host to the console for the local broker, using the externally-reachable IP address of the local broker. In this case, the externally-reachable IP address of the local broker is 192.168.0.49 . Connection from within a console session opened on another, remote broker to the local broker. In this case, the IP address of the remote broker is 192.168.0.51 . Save the jolokia-access.xml file. Open the <broker_instance_dir> /etc/artemis.profile file. To enable the Connect tab in the console, set the value of the Dhawtio.disableProxy argument to false . -Dhawtio.disableProxy=false Important It is recommended that you enable remote connections from the console (that is, set the value of the Dhawtio.disableProxy argument to false ) only if the console is exposed to a secure network. Add a new argument, Dhawtio.proxyWhitelist , to the JAVA_ARGS list of Java system arguments. As a comma-separated list, specify IP addresses for any remote brokers that you want to connect to from the local broker (that is, by using the Connect tab within a console session running on the local broker). For example: -Dhawtio.proxyWhitelist=192.168.0.51 Based on the preceding configuration, you can use the Connect tab within a console session on the local broker to connect to another, remote broker with an IP address of 192.168.0.51 . Save the aretmis.profile file. Additional resources To learn how to access the console, see Section 4.3, "Accessing AMQ Management Console" . For more information about: Cross-Origin Resource Sharing, see W3C Recommendations . Jolokia security, see Jolokia Protocols . Securing connections to the console, see Section 4.4.3, "Securing network access to AMQ Management Console" . 4.3. Accessing AMQ Management Console The procedure in this section shows how to: Open AMQ Management Console from the local broker Connect to other brokers from within a console session on the local broker Open a console instance for the local broker from a remote host using the externally-reachable IP address of the local broker Prerequisites You must have already configured local and remote access to the console. For more information, see Section 4.2, "Configuring local and remote access to AMQ Management Console" . Procedure In your web browser, navigate to the console address for the local broker. The console address is http:// <host:port> /console/login . If you are using the default address, navigate to http://localhost:8161/console/login . Otherwise, use the values of host and port that are defined for the bind attribute of the web element in the <broker_instance_dir> /etc/bootstrap.xml configuration file. Figure 4.1. Console login page Log in to AMQ Management Console using the default user name and password that you created when you created the broker. To connect to another, remote broker from the console session of the local broker: In the left menu, click the Connect tab. In the main pane, on the Remote tab, click the Add connection button. In the Add Connection dialog box, specify the following details: Name Name for the remote connection, for example, my_other_broker . Scheme Protocol to use for the remote connection. Select http for a non-secured connection, or https for a secured connection. Host IP address of a remote broker. You must have already configured console access for this remote broker. Port Port on the local broker to use for the remote connection. 
Specify the port value that is defined for the bind attribute of the web element in the <broker_instance_dir> /etc/bootstrap.xml configuration file. The default value is 8161 . Path Path to use for console access. Specify console/jolokia . To test the connection, click the Test Connection button. If the connection test is successful, click the Add button. If the connection test fails, review and modify the connection details as needed. Test the connection again. On the Remote page, for a connection that you have added, click the Connect button. A new web browser tab opens for the console instance on the remote broker. In the Log In dialog box, enter the user name and password for the remote broker. Click Log In . The console instance for the remote broker opens. To connect to the console for the local broker from a remote host, specify the Jolokia endpoint for the local broker in a web browser. This endpoint includes the externally-reachable IP address that you specified for the local broker when configuring remote console access. For example: 4.4. Configuring AMQ Management Console Configure user access and request access to resources on the broker. 4.4.1. Securing AMQ Management Console using Red Hat Single Sign-On Prerequisites Red Hat Single Sign-On 7.4 Procedure Configure Red Hat Single Sign-On: Navigate to the realm in Red Hat Single Sign-On that you want to use for securing AMQ Management Console. Each realm in Red Hat Single Sign-On includes a client named Broker . This client is not related to AMQ. Create a new client in Red Hat Single Sign-On, for example artemis-console . Navigate to the client settings page and set: Valid Redirect URIs to the AMQ Management Console URL followed by * , for example: Web Origins to the same value as Valid Redirect URIs . Red Hat Single Sign-On allows you enter + , indicating that allowed CORS origins includes the value for Valid Redirect URIs . Create a role for the client, for example guest . Make sure all users who require access to AMQ Management Console are assigned the above role, for example, using Red Hat Single Sign-On groups. Configure the AMQ Broker instance: Add the following to your <broker-instance-dir> /instances/broker0/etc/login.config file to configure AMQ Management Console to use Red Hat Single Sign-On: Adding this configuration sets up a JAAS principal and a requirement for a bearer token from Red Hat Single Sign-On. The connection to Red Hat Single Sign-On is defined in the keycloak-bearer-token.json file, as described in the step. Create a file <broker-instance-dir> /etc/keycloak-bearer-token.json with the following contents to specify the connection to Red Hat Single Sign-On used for the bearer token exchange: { "realm": " <realm-name> ", "resource": " <client-name> ", "auth-server-url": " <RHSSO-URL> /auth", "principal-attribute": "preferred_username", "use-resource-role-mappings": true, "ssl-required": "external", "confidential-port": 0 } <realm-name> the name of the realm in Red Hat Single Sign-On <client-name> the name of the client in Red Hat Single Sign-On <RHSSO-URL> the URL of Red Hat Single Sign-On Create a file <broker-instance-dir> /etc/keycloak-js-token.json with the following contents to specify the Red Hat Single Sign-On authentication endpoint: { "realm": "<realm-name>", "clientId": "<client-name>", "url": " <RHSSO-URL> /auth" } Configure the security settings by editing the the <broker-instance-dir> /etc/broker.xml file. 
For example, to allow users with the amq role consume messages and allow users with the guest role send messages, add the following: <security-setting match="Info"> <permission roles="amq" type="createDurableQueue"/> <permission roles="amq" type="deleteDurableQueue"/> <permission roles="amq" type="createNonDurableQueue"/> <permission roles="amq" type="deleteNonDurableQueue"/> <permission roles="guest" type="send"/> <permission roles="amq" type="consume"/> </security-setting> Run the AMQ Broker instance and validate AMQ Management Console configuration. 4.4.2. Setting up user access to AMQ Management Console You can access AMQ Management Console using the broker login credentials. The following table provides information about different methods to add additional broker users to access AMQ Management Console: Table 4.1. Methods to grant users access to AMQ Management Console Authentication Method Description Guest authentication Enables anonymous access. In this configuration, any user who connects without credentials or with the wrong credentials will be authenticated automatically and assigned a specific user and role. For more information, see Configuring guest access in Configuring AMQ Broker . Basic user and password authentication For each user, you must define a username and password and assign a security role. Users can only log into AMQ Management Console using these credentials. For more information, see Configuring basic user and password authentication in Configuring AMQ Broker . LDAP authentication Users are authenticated and authorized by checking the credentials against user data stored in a central X.500 directory server. For more information, see Configuring LDAP to authenticate clients in Configuring AMQ Broker . 4.4.3. Securing network access to AMQ Management Console To secure AMQ Management Console when the console is being accessed over a WAN or the internet, use SSL to specify that network access uses https instead of http . Prerequisites The following should be located in the <broker_instance_dir> /etc/ directory: Java key store Java trust store (needed only if you require client authentication) Procedure Open the <broker_instance_dir> /etc/bootstrap.xml file. In the <web> element, add the following attributes: <web path="web"> <binding uri="https://0.0.0.0:8161" keyStorePath="<path_to_keystore>" keyStorePassword="<password>" clientAuth="<true/false>" trustStorePath="<path_to_truststore>" trustStorePassword="<password>"> </binding> </web> bind For secure connections to the console, change the URI scheme to https . keyStorePath Path of the keystore file. For example: keyStorePath=" <broker_instance_dir> /etc/keystore.jks" keyStorePassword Key store password. This password can be encrypted. clientAuth Specifies whether client authentication is required. The default value is false . trustStorePath Path of the trust store file. You need to define this attribute only if clientAuth is set to true . trustStorePassword Trust store password. This password can be encrypted. Additional resources For more information about encrypting passwords in broker configuration files, including bootstrap.xml , see Encrypting Passwords in Configuration Files . 4.4.4. Configuring AMQ Management Console to use certificate-based authentication You can configure AMQ Management Console to authenticate users by using certificates instead of passwords. Procedure Obtain certificates for the broker and clients from a trusted certificate authority or generate self-signed certificates. 
If you want to generate self-signed certificates, complete the following steps: Generate a self-signed certificate for the broker. USD keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg "RSA" -keysize 2048 -dname "CN=ActiveMQ Broker, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ" -ext bc=ca:false -ext eku=cA Export the certificate from the broker keystore, so that it can be shared with clients. USD keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -alias client -exportcert -rfc > broker.crt On the client, import the broker certificate into the client truststore. USD keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file broker.crt -noprompt On the client, generate a self-signed certificate for the client. USD keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg "RSA" -keysize 2048 -dname "CN=ActiveMQ Client, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ" -ext bc=ca:false -ext eku=cA Export the client certificate from the client keystore to a file so that it can be added to the broker truststore. USD keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -alias client -exportcert -rfc > client.crt Import the client certificate into the broker truststore. USD keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file client.crt -noprompt Note On the broker machine, ensure that the keystore and truststore files are in a location that is accessible to the broker. In the <broker_instance_dir>/etc/bootstrap.xml file, update the web configuration to enable the HTTPS protocol and client authentication for the broker console. For example: ... <web path="web"> <binding uri="https://localhost:8161" keyStorePath="USD{artemis.instance}/etc/server-keystore.p12" keyStorePassword="password" clientAuth="true" trustStorePath="USD{artemis.instance}/etc/client-truststore.p12" trustStorePassword="password"> ... </binding> </web> ... binding uri Specify the https protocol to enable SSL and add a host name and port. keystorePath The path to the keystore where the broker certificate is installed. keystorePassword The password of the keystore where the broker certificate is installed. ClientAuth Set to true to configure the broker to require that each client presents a certificate when a client tries to connect to the broker console. trustStorePath If clients are using self-signed certificates, specify the path to the truststore where client certificates are installed. trustStorePassword If clients are using self-signed certificates, specify the password of the truststore where client certificates are installed . NOTE. You need to configure the trustStorePath and trustStorePassword properties only if clients are using self-signed certificates. Obtain the Subject Distinguished Names (DNs) from each client certificate so you can create a mapping between each client certificate and a broker user. Export each client certificate from the client's keystore file into a temporary file. For example: Print the contents of the exported certificate: The output is similar to that shown below: The Owner entry is the Subject DN. The format used to enter the Subject DN depends on your platform. 
The string above could also be represented as follows: Enable certificate-based authentication for the broker's console. Open the <broker_instance_dir> /etc/login.config configuration file. Add the certificate login module and reference the user and roles properties files. For example: activemq { org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule debug=true org.apache.activemq.jaas.textfiledn.user="artemis-users.properties" org.apache.activemq.jaas.textfiledn.role="artemis-roles.properties"; }; org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule The implementation class. org.apache.activemq.jaas.textfiledn.user Specifies the location of the user properties file relative to the directory that contains the login configuration file. org.apache.activemq.jaas.textfiledn.role Specifies the properties file that maps users to defined roles for the login module implementation. Note If you change the default name of the certificate login module configuration in the <broker_instance_dir> /etc/login.config file, you must update the value of the -Dhawtio.realm argument in the <broker_instance_dir>/etc/artemis.profile file to match the new name. The default name is activemq . Open the <broker_instance_dir>/etc/artemis-users.properties file. Create a mapping between client certificates and broker users by adding the Subject DNs that you obtained from each client certificate to a broker user. For example: user1=CN=user1,O=Progress,C=US user2=CN=user2,O=Progress,C=US In this example, the user1 broker user is mapped to the client certificate that has a Subject Distinguished Name of CN=user1,O=Progress,C=US. After you create a mapping between a client certificate and a broker user, the broker can authenticate the user by using the certificate. Open the <broker_instance_dir>/etc/artemis-roles.properties file. Grant users permission to log in to the console by adding them to the role that is specified for the HAWTIO_ROLE variable in the <broker_instance_dir>/etc/artemis.profile file. The default value of the HAWTIO_ROLE variable is amq . For example: amq=user1, user2 Configure the following recommended security properties for the HTTPS protocol. Open the <broker_instance_dir>/etc/artemis.profile file. Set the hawtio.http.strictTransportSecurity property to allow only HTTPS requests to the AMQ Management Console and to convert any HTTP requests to HTTPS. For example: hawtio.http.strictTransportSecurity = max-age=31536000; includeSubDomains; preload Set the hawtio.http.publicKeyPins property to instruct the web browser to associate a specific cryptographic public key with the AMQ Management Console to decrease the risk of "man-in-the-middle" attacks using forged certificates. For example: hawtio.http.publicKeyPins = pin-sha256="..."; max-age=5184000; includeSubDomains 4.4.5. Configuring AMQ Management Console to handle X-forwarded headers If requests to AMQ Management Console are routed through a proxy server, you can configure the AMQ Broker embedded web server, which hosts AMQ Management Console, to handle X-Forwarded headers. By handling X-Forwarded headers, AMQ Management Console can receive header information that is otherwise altered or lost when a proxy is involved in the path of a request.
For example, the proxy can expose AMQ Management Console using HTTPS, and the AMQ Management Console, which uses HTTP, can identify from the X-Forwarded header that the connection between the browser and the proxy uses HTTPS and switch to HTTPS to serve browser requests. Procedure Open the <broker_instance_dir> /etc/bootstrap.xml file. In the <web> element, add the customizer attribute with a value of org.eclipse.jetty.server.ForwardedRequestCustomizer . For example: <web path="web" customizer="org.eclipse.jetty.server.ForwardedRequestCustomizer"> .. </web> Save the bootstrap.xml file. Start or restart the broker by entering the following command: On Linux: <broker_instance_dir> /bin/artemis run On Windows: <broker_instance_dir> \bin\artemis-service.exe start 4.5. Managing brokers using AMQ Management Console You can use AMQ Management Console to view information about a running broker and manage the following resources: Incoming network connections (acceptors) Addresses Queues 4.5.1. Viewing details about the broker To see how the broker is configured, in the left menu, click Artemis . In the folder tree, the local broker is selected by default. In the main pane, the following tabs are available: Status Displays information about the current status of the broker, such as uptime and cluster information. Also displays the amount of address memory that the broker is currently using. The graph shows this value as a proportion of the global-max-size configuration parameter. Figure 4.2. Status tab Connections Displays information about broker connections, including client, cluster, and bridge connections. Sessions Displays information about all sessions currently open on the broker. Consumers Displays information about all consumers currently open on the broker. Producers Displays information about producers currently open on the broker. Addresses Displays information about addresses on the broker. This includes internal addresses, such as store-and-forward addresses. Queues Displays information about queues on the broker. This includes internal queues, such as store-and-forward queues. Attributes Displays detailed information about attributes configured on the broker. Operations Displays JMX operations that you can execute on the broker from the console. When you click an operation, a dialog box opens that enables you to specify parameter values for the operation. Chart Displays real-time data for attributes configured on the broker. You can edit the chart to specify the attributes that are included in the chart. Broker diagram Displays a diagram of the cluster topology. This includes all brokers in the cluster and any addresses and queues on the local broker. 4.5.2. Viewing the broker diagram You can view a diagram of all AMQ Broker resources in your topology, including brokers (live and backup brokers), producers and consumers, addresses, and queues. Procedure In the left menu, click Artemis . In the main pane, click the Broker diagram tab. The console displays a diagram of the cluster topology. This includes all brokers in the cluster and any addresses and queues on the local broker, as shown in the figure. Figure 4.3. Broker diagram tab To change what items are displayed on the diagram, use the check boxes at the top of the diagram. Click Refresh . To show attributes for the local broker or an address or queue that is connected to it, click that node in the diagram. For example, the following figure shows a diagram that also includes attributes for the local broker. Figure 4.4. 
Broker diagram tab, including attributes 4.5.3. Viewing acceptors You can view details about the acceptors configured for the broker. Procedure In the left menu, click Artemis . In the folder tree, click acceptors . To view details about how an acceptor is configured, click the acceptor. The console shows the corresponding attributes on the Attributes tab, as shown in the figure. Figure 4.5. AMQP acceptor attributes To see complete details for an attribute, click the attribute. An additional window opens to show the details. 4.5.4. Managing addresses and queues An address represents a messaging endpoint. Within the configuration, a typical address is given a unique name. A queue is associated with an address. There can be multiple queues per address. Once an incoming message is matched to an address, the message is sent on to one or more of its queues, depending on the routing type configured. Queues can be configured to be automatically created and deleted. 4.5.4.1. Creating addresses A typical address is given a unique name, zero or more queues, and a routing type. A routing type determines how messages are sent to the queues associated with an address. Addresses can be configured with two different routing types. If you want your messages routed to... Use this routing type... A single queue within the matching address, in a point-to-point manner. Anycast Every queue within the matching address, in a publish-subscribe manner. Multicast You can create and configure addresses and queues, and then delete them when they are no longer in use. Procedure In the left menu, click Artemis . In the folder tree, click addresses . In the main pane, click the Create address tab. A page appears for you to create an address, as shown in the figure. Figure 4.6. Create Address page Complete the following fields: Address name The routing name of the address. Routing type Select one of the following options: Multicast : Messages sent to the address will be distributed to all subscribers in a publish-subscribe manner. Anycast : Messages sent to this address will be distributed to only one subscriber in a point-to-point manner. Both : Enables you to define more than one routing type per address. This typically results in an anti-pattern and is not recommended. Note If an address does use both routing types, and the client does not show a preference for either one, the broker defaults to the anycast routing type. The one exception is when the client uses the MQTT protocol. In that case, the default routing type is multicast . Click Create Address . 4.5.4.2. Sending messages to an address The following procedure shows how to use the console to send a message to an address. Procedure In the left menu, click Artemis . In the folder tree, select an address. On the navigation bar in the main pane, click More Send message . A page appears for you to create a message, as shown in the figure. Figure 4.7. Send Message page By default messages are sent using the credentials that you used to log in to AMQ Management Console. If you want to use different credentials, clear the Use current logon user checkbox and specify values in the Username and Password fields, which are displayed after you clear the checkbox. If necessary, click the Add Header button to add message header information. Enter the message body. In the Format drop-down menu, select an option for the format of the message body, and then click Format . The message body is formatted in a human-readable style for the format you selected. Click Send message . 
The message is sent. To send additional messages, change any of the information you entered, and then click Send message . 4.5.4.3. Creating queues Queues provide a channel between a producer and a consumer. Prerequisites The address to which you want to bind the queue must exist. To learn how to use the console to create an address, see Section 4.5.4.1, "Creating addresses" . Procedure In the left menu, click Artemis . In the folder tree, select the address to which you want to bind the queue. In the main pane, click the Create queue tab. A page appears for you to create a queue, as shown in the figure. Figure 4.8. Create Queue page Complete the following fields: Queue name A unique name for the queue. Routing type Select one of the following options: Multicast : Messages sent to the parent address will be distributed to all queues bound to the address. Anycast : Only one queue bound to the parent address will receive a copy of the message. Messages will be distributed evenly among all of the queues bound to the address. Durable If you select this option, the queue and its messages will be persistent. Filter An optional filter expression for the queue. If you specify a filter, only messages that match the filter expression are routed to the queue. Max Consumers The maximum number of consumers that can access the queue at a given time. Purge when no consumers If selected, the queue will be purged when no consumers are connected. Click Create Queue . 4.5.4.4. Checking the status of a queue Charts provide a real-time view of the status of a queue on a broker. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. In the main pane, click the Chart tab. The console displays a chart that shows real-time data for all of the queue attributes. Figure 4.9. Chart tab for a queue Note To view a chart for multiple queues on an address, select the anycast or multicast folder that contains the queues. If necessary, select different criteria for the chart: In the main pane, click Edit . On the Attributes list, select one or more attributes that you want to include in the chart. To select multiple attributes, press and hold the Ctrl key and select each attribute. Click the View Chart button. The chart is updated based on the attributes that you selected. 4.5.4.5. Browsing queues Browsing a queue displays all of the messages in the queue. You can also filter and sort the list to find specific messages. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. Queues are located within the addresses to which they are bound. On the navigation bar in the main pane, click More Browse queue . The messages in the queue are displayed. By default, the first 200 messages are displayed. Figure 4.10. Browse Queue page To browse for a specific message or group of messages, do one of the following: To... Do this... Filter the list of messages In the Filter... text field, enter filter criteria. Click the search (that is, magnifying glass) icon. Sort the list of messages In the list of messages, click a column header. To sort the messages in descending order, click the header a second time. To view the content of a message, click the Show button. You can view the message header, properties, and body. 4.5.4.6. Sending messages to a queue After creating a queue, you can send a message to it. The following procedure outlines the steps required to send a message to an existing queue. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. In the main pane, click the Send message tab.
A page appears for you to compose the message. Figure 4.11. Send Message page for a queue By default, messages are sent using the credentials that you used to log in to AMQ Management Console. If you want to use different credentials, clear the Use current logon user checkbox and specify values in the Username and Password fields, which are displayed after you clear the checkbox. If necessary, click the Add Header button to add message header information. Enter the message body. In the Format drop-down menu, select an option for the format of the message body, and then click Format . The message body is formatted in a human-readable style for the format you selected. Click Send message . The message is sent. To send additional messages, change any of the information you entered, and click Send message . 4.5.4.7. Resending messages to a queue You can resend previously sent messages. Procedure Browse for the message you want to resend . Click the check box next to the message that you want to resend. Click the Resend button. The message is displayed. Update the message header and body as needed, and then click Send message . 4.5.4.8. Moving messages to a different queue You can move one or more messages in a queue to a different queue. Procedure Browse for the messages you want to move . Click the check box next to each message that you want to move. In the navigation bar, click Move Messages . A confirmation dialog box appears. From the drop-down menu, select the name of the queue to which you want to move the messages. Click Move . 4.5.4.9. Deleting messages or queues You can delete a queue or purge all of the messages from a queue. Procedure Browse for the queue you want to delete or purge . Do one of the following: To... Do this... Delete a message from the queue Click the check box next to each message that you want to delete. Click the Delete button. Purge all messages from the queue On the navigation bar in the main pane, click Delete queue . Click the Purge Queue button. Delete the queue On the navigation bar in the main pane, click Delete queue . Click the Delete Queue button. | [
"<web path=\"web\"> <binding uri=\"http://localhost:8161\"> <app url=\"redhat-branding\" war=\"redhat-branding.war\"/> <app url=\"artemis-plugin\" war=\"artemis-plugin.war\"/> <app url=\"dispatch-hawtio-console\" war=\"dispatch-hawtio-console.war\"/> <app url=\"console\" war=\"console.war\"/> </binding> </web>",
"<web path=\"web\"> <binding uri=\"http://0.0.0.0:8161\">",
"<cors> <allow-origin>*://localhost*</allow-origin> <allow-origin>*://192.168.0.49*</allow-origin> <allow-origin>*://192.168.0.51*</allow-origin> <!-- Check for the proper origin on the server side, too --> <strict-checking/> </cors>",
"-Dhawtio.disableProxy=false",
"-Dhawtio.proxyWhitelist=192.168.0.51",
"http://192.168.0.49/console/jolokia",
"https://broker.example.com:8161/console/*",
"console { org.keycloak.adapters.jaas.BearerTokenLoginModule required keycloak-config-file=\"USD{artemis.instance}/etc/keycloak-bearer-token.json\" role-principal-class=org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal ; };",
"{ \"realm\": \" <realm-name> \", \"resource\": \" <client-name> \", \"auth-server-url\": \" <RHSSO-URL> /auth\", \"principal-attribute\": \"preferred_username\", \"use-resource-role-mappings\": true, \"ssl-required\": \"external\", \"confidential-port\": 0 }",
"{ \"realm\": \"<realm-name>\", \"clientId\": \"<client-name>\", \"url\": \" <RHSSO-URL> /auth\" }",
"<security-setting match=\"Info\"> <permission roles=\"amq\" type=\"createDurableQueue\"/> <permission roles=\"amq\" type=\"deleteDurableQueue\"/> <permission roles=\"amq\" type=\"createNonDurableQueue\"/> <permission roles=\"amq\" type=\"deleteNonDurableQueue\"/> <permission roles=\"guest\" type=\"send\"/> <permission roles=\"amq\" type=\"consume\"/> </security-setting>",
"<web path=\"web\"> <binding uri=\"https://0.0.0.0:8161\" keyStorePath=\"<path_to_keystore>\" keyStorePassword=\"<password>\" clientAuth=\"<true/false>\" trustStorePath=\"<path_to_truststore>\" trustStorePassword=\"<password>\"> </binding> </web>",
"keyStorePath=\" <broker_instance_dir> /etc/keystore.jks\"",
"keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg \"RSA\" -keysize 2048 -dname \"CN=ActiveMQ Broker, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ\" -ext bc=ca:false -ext eku=cA",
"keytool -storetype pkcs12 -keystore broker-keystore.p12 -storepass securepass -alias client -exportcert -rfc > broker.crt",
"keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file broker.crt -noprompt",
"keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -keypass securepass -alias client -genkey -keyalg \"RSA\" -keysize 2048 -dname \"CN=ActiveMQ Client, OU=Artemis, O=ActiveMQ, L=AMQ, S=AMQ, C=AMQ\" -ext bc=ca:false -ext eku=cA",
"keytool -storetype pkcs12 -keystore client-keystore.p12 -storepass securepass -alias client -exportcert -rfc > client.crt",
"keytool -storetype pkcs12 -keystore client-truststore.p12 -storepass securepass -keypass securepass -importcert -alias client-ca -file client.crt -noprompt",
"<web path=\"web\"> <binding uri=\"https://localhost:8161\" keyStorePath=\"USD{artemis.instance}/etc/server-keystore.p12\" keyStorePassword=\"password\" clientAuth=\"true\" trustStorePath=\"USD{artemis.instance}/etc/client-truststore.p12\" trustStorePassword=\"password\"> </binding> </web>",
"keytool -export -file <file_name> -alias broker-localhost -keystore broker.ks -storepass <password>",
"keytool -printcert -file <file_name>",
"Owner: CN=AMQ Client, OU=Artemis, O=AMQ, L=AMQ, ST=AMQ, C=AMQ Issuer: CN=AMQ Client, OU=Artemis, O=AMQ, L=AMQ, ST=AMQ, C=AMQ Serial number: 51461f5d Valid from: Sun Apr 17 12:20:14 IST 2022 until: Sat Jul 16 12:20:14 IST 2022 Certificate fingerprints: SHA1: EC:94:13:16:04:93:57:4F:FD:CA:AD:D8:32:68:A4:13:CC:EA:7A:67 SHA256: 85:7F:D5:4A:69:80:3B:5B:86:27:99:A7:97:B8:E4:E8:7D:6F:D1:53:08:D8:7A:BA:A7:0A:7A:96:F3:6B:98:81",
"Owner: `CN=localhost,\\ OU=broker,\\ O=Unknown,\\ L=Unknown,\\ ST=Unknown,\\ C=Unknown`",
"activemq { org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule debug=true org.apache.activemq.jaas.textfiledn.user=\"artemis-users.properties\" org.apache.activemq.jaas.textfiledn.role=\"artemis-roles.properties\"; };",
"user1=CN=user1,O=Progress,C=US user2=CN=user2,O=Progress,C=US",
"amq=user1, user2",
"hawtio.http.strictTransportSecurity = max-age=31536000; includeSubDomains; preload",
"hawtio.http.publicKeyPins = pin-sha256=\"...\"; max-age=5184000; includeSubDomains",
"<web path=\"web\" customizer=\"org.eclipse.jetty.server.ForwardedRequestCustomizer\"> .. </web>"
]
| https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/managing_amq_broker/assembly-using-AMQ-console-managing |
4.318. tcp_wrappers | 4.318. tcp_wrappers 4.318.1. RHEA-2011:1676 - tcp_wrappers enhancement update Enhanced tcp_wrappers packages are now available for Red Hat Enterprise Linux 6. The tcp_wrappers packages provide small daemon programs which can monitor and filter incoming requests for systat, finger, FTP, telnet, rlogin, rsh, exec, tftp, talk and other network services. These packages also contain the libwrap library, which adds the same filtering capabilities to programs linked against it, such as to sshd among others. Enhancement BZ# 727287 Previously, the tcp_wrappers packages were compiled without the RELRO (read-only relocations) flag. Programs provided by this package and also programs built against the tcp_wrappers libraries were thus vulnerable to various attacks based on overwriting the ELF section of a program. To increase the security of tcp_wrappers programs and libraries, the tcp_wrappers spec file has been modified to use the "-Wl,-z,relro" flags when compiling the packages. As a result, the tcp_wrappers packages are now provided with partial RELRO protection. Users of tcp_wrappers are advised to upgrade to these updated packages, which add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/tcp_wrappers |
Chapter 7. Using image streams with Kubernetes resources | Chapter 7. Using image streams with Kubernetes resources Image streams, being Red Hat OpenShift Service on AWS native resources, work with all native resources available in Red Hat OpenShift Service on AWS, such as Build or DeploymentConfigs resources. It is also possible to make them work with native Kubernetes resources, such as Job , ReplicationController , ReplicaSet or Kubernetes Deployment resources. 7.1. Enabling image streams with Kubernetes resources When using image streams with Kubernetes resources, you can only reference image streams that reside in the same project as the resource. The image stream reference must consist of a single segment value, for example ruby:2.5 , where ruby is the name of an image stream that has a tag named 2.5 and resides in the same project as the resource making the reference. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. There are two ways to enable image streams with Kubernetes resources: Enabling image stream resolution on a specific resource. This allows only this resource to use the image stream name in the image field. Enabling image stream resolution on an image stream. This allows all resources pointing to this image stream to use it in the image field. Procedure You can use oc set image-lookup to enable image stream resolution on a specific resource or image stream resolution on an image stream. To allow all resources to reference the image stream named mysql , enter the following command: USD oc set image-lookup mysql This sets the Imagestream.spec.lookupPolicy.local field to true. Imagestream with image lookup enabled apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true When enabled, the behavior is enabled for all tags within the image stream. Then you can query the image streams and see if the option is set: USD oc set image-lookup imagestream --list You can enable image lookup on a specific resource. To allow the Kubernetes deployment named mysql to use image streams, run the following command: USD oc set image-lookup deploy/mysql This sets the alpha.image.policy.openshift.io/resolve-names annotation on the deployment. Deployment with image lookup enabled apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql You can disable image lookup. To disable image lookup, pass --enabled=false : USD oc set image-lookup deploy/mysql --enabled=false | [
"oc set image-lookup mysql",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true",
"oc set image-lookup imagestream --list",
"oc set image-lookup deploy/mysql",
"apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql",
"oc set image-lookup deploy/mysql --enabled=false"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/images/using-imagestreams-with-kube-resources |
Preface | Preface You can integrate some public clouds and third-party applications with the Hybrid Cloud Console. For information about integrating public clouds, see Configuring cloud integrations for Red Hat services . You can integrate the Red Hat Hybrid Cloud Console with Splunk, ServiceNow, Slack, Event-Driven Ansible, Microsoft Teams, Google Chat, and more applications to route event-triggered notifications to those third-party applications. Integrating third-party applications expands the scope of notifications beyond emails and messages, so that you can view and manage Hybrid Cloud Console events from your preferred platform dashboard or communications tool. To learn more about notifications, see Configuring notifications on the Red Hat Hybrid Cloud Console . Prerequisites You have Organization Administrator or Notifications administrator permissions for the Hybrid Cloud Console. You have the required configuration permissions for each third-party application that you want to integrate with the Hybrid Cloud Console. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/integrating_the_red_hat_hybrid_cloud_console_with_third-party_applications/pr01 |
Chapter 4. Configuring the JBoss EAP for OpenShift Image for Your Java Application | Chapter 4. Configuring the JBoss EAP for OpenShift Image for Your Java Application The JBoss EAP for OpenShift image is preconfigured for basic use with your Java applications. However, you can configure the JBoss EAP instance inside the image. The recommended method is to use the OpenShift S2I process, together with application template parameters and environment variables. Important Any configuration changes made on a running container will be lost when the container is restarted or terminated. This includes any configuration changes made using scripts that are included with a traditional JBoss EAP installation, for example add-user.sh or the management CLI. It is strongly recommended that you use the OpenShift S2I process, together with application template parameters and environment variables, to make any configuration changes to the JBoss EAP instance inside the JBoss EAP for OpenShift image. 4.1. How the JBoss EAP for OpenShift S2I Process Works Flowchart illustrating the S2I process for JBoss EAP: If a pom.xml file is present in the source code repository, the S2I builder image initiates a Maven build process. The Maven build uses the contents of USDMAVEN_ARGS . If a pom.xml file is not present in the source code repository, the S2I builder image initiates a binary type build. To add custom Maven arguments or options, use USDMAVEN_ARGS_APPEND . The USDMAVEN_ARGS_APPEND variable appends options to USDMAVEN_ARGS . By default, the OpenShift profile uses the Maven package goal, which includes system properties for skipping tests ( -DskipTests ) and enabling the Red Hat GA repository ( -Dcom.redhat.xpaas.repo ). The results of a successful Maven build are copied to the EAP_HOME /standalone/deployments/ directory inside the JBoss EAP for OpenShift image. This includes all JAR, WAR, and EAR files from the source repository specified by the USDARTIFACT_DIR environmental variable. The default value of ARTIFACT_DIR is the Maven target directory. Note To use Maven behind a proxy on JBoss EAP for OpenShift image, set the USDHTTP_PROXY_HOST and USDHTTP_PROXY_PORT environment variables. Optionally, you can also set the USDHTTP_PROXY_USERNAME , USDHTTP_PROXY_PASSWORD , and USDHTTP_PROXY_NONPROXYHOSTS variables. All files in the modules source repository directory are copied to the EAP_HOME /modules/ directory in the JBoss EAP for OpenShift image. All files in the configuration source repository directory are copied to the EAP_HOME /standalone/configuration/ directory in the JBoss EAP for OpenShift image. If you want to use a custom JBoss EAP configuration file, name the file standalone-openshift.xml . Additional Resources See Binary (local) source on the OpenShift 4.2 documentation for additional information on binary type builds. See Artifact Repository Mirrors for additional guidance on how to instruct the S2I process to use the custom Maven artifacts repository mirror. 4.2. Configuring JBoss EAP for OpenShift Using Environment Variables Using environment variables is the recommended method of configuring the JBoss EAP for OpenShift image. See the OpenShift documentation for instructions on specifying environment variables for application containers and build containers. 
For example, you can set the JBoss EAP instance's management username and password using environment variables when creating your OpenShift application: Available environment variables for the JBoss EAP for OpenShift image are listed in Reference Information . 4.2.1. JVM Memory Configuration The OpenShift EAP image has a mechanism to automatically calculate the default JVM memory settings based on the current environment, but you can also configure the JVM memory settings using environment variables. 4.2.1.1. JVM Default Memory Settings If a memory limit is defined for the current container, and the limit is lower than the total available memory, the default JVM memory settings are calculated automatically. Otherwise, the default JVM memory settings are the default defined in the standalone.conf file of the EAP version used as the base server for the image. The container memory limit is retrieved from the file /sys/fs/cgroup/memory/memory.limit_in_bytes . The total available memory is retrieved from the /proc/meminfo file. When memory settings are calculated automatically, the following formulas are used: Maximum heap size (-Xmx): fifty percent (50%) of user memory Initial heap size (-Xms): twenty-five percent (25%) of the calculated maximum heap size. For example, if the defined memory limit is 1 GB and this limit is lower than the total available memory reported by /proc/meminfo , then the memory settings will be: -Xms128m -Xmx512m You can use the following environment variables to modify the JVM settings calculated automatically. Note that these variables are only used when default memory size is calculated automatically (in other words, when a valid container memory limit is defined). JAVA_MAX_MEM_RATIO JAVA_INITIAL_MEM_RATIO JAVA_MAX_INITIAL_MEM You can disable automatic memory calculation by setting the value of the following two environment variables to 0. JAVA_INITIAL_MEM_RATIO JAVA_MAX_MEM_RATIO 4.2.1.2. JVM Garbage Collection Settings The EAP image for OpenShift includes settings for both garbage collection and garbage collection logging. Garbage Collection Settings -XX:+UseParallelOldGC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:+ExitOnOutOfMemoryError Garbage Collection Logging Settings for Java 8 (non-modular JVM) -verbose:gc -Xloggc:/opt/eap/standalone/log/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=3M -XX:-TraceClassUnloading Garbage Collection Logging Settings for Java 11 (modular JVM) -Xlog:gc*:file=/opt/eap/standalone/log/gc.log:time,uptimemillis:filecount=5,filesize=3M 4.2.1.3. Resource Limits in Default Settings If set, additional default settings are included in the image. -XX:ParallelGCThreads={core-limit} -Djava.util.concurrent.ForkJoinPool.common.parallelism={core-limit} -XX:CICompilerCount=2 The value of {core-limit} is defined using the JAVA_CORE_LIMIT environment variable, or by the CPU core limit imposed by the container. The value of CICompilerCount is always fixed as 2 . 4.2.1.4. JVM Environment Variables Use these environment variables to configure the JVM in the EAP for OpenShift image. Table 4.1. JVM Environment Variables Variable Name Example Default Value JVM Settings Description JAVA_OPTS -verbose:class No default Multiple JVM options to pass to the java command. Use JAVA_OPTS_APPEND to configure additional JVM settings. If you use JAVA_OPTS , some unconfigurable defaults are not added to the server JVM settings.
You must explicitly add these settings. Using JAVA_OPTS disables certain settings added by default by the container scripts. Disabled settings include -XX:MetaspaceSize=96M -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=jdk.nashorn.api,com.sun.crypto.provider -Djava.awt.headless=true In addition, if automatic memory calculation is not enabled, the inital Java memory (-Xms) and maximum Java memory (-Xmx) are not defined. Add these defaults if you use JAVA_OPTS to configure additional settings. JAVA_OPTS_APPEND -Dsome.property=value No default Multiple User-specified Java options to append to generated options in JAVA_OPTS . JAVA_MAX_MEM_RATIO 50 50 -Xmx Use this variable when the -Xmx option is not specified in JAVA_OPTS . The value of this variable is used to calculate a default maximum heap memory size based on the restrictions of the container. If this variable is used in a container without a memory constraint, the variable has no effect. If this variable is used in a container that does have a memory constraint, the value of -Xmx is set to the specified ratio of the container's available memory. The default value, 50 means that 50% of the available memory is used as an upper boundary. To skip calculation of maximum memory, set the value of this variable to 0 . No -Xmx option will be added to JAVA_OPTS . JAVA_INITIAL_MEM_RATIO 25 25 -Xms Use this variable when the -Xms option is not specified in JAVA_OPTS . The value of this variable is used to calculate the default initial heap memory size based on the maximum heap memory. If this variable is used in a container without a memory constraint, the variable has no effect. If this variable is used in a container that does have a memory constraint, the value of -Xms is set to the specified ratio of the -Xmx memory. The default value, 25 means that 25% of the maximum memory is used as the initial heap size. To skip calculation of initial memory, set the value of this variable to 0 . No -Xms option will be added to JAVA_OPTS . JAVA_MAX_INITIAL_MEM 4096 4096 -Xms Use this variable when the -Xms option is not specified in JAVA_OPTS . The value of this variable is used to calculate the maximum size of the initial memory heap. The value is expressed in megabytes (MB). If this variable is used in a container without a memory constraint, the variable has no effect. If this variable is used in a container that does have a memory constraint, the value of -Xms is set to the value specified in the variable. The default value, 4096, specifies that the maximum initial heap will never be larger than 4096MB. JAVA_DIAGNOSTICS true false (disabled) The settings depend on the JDK used by the container. OpenJDK8: -XX:NativeMemoryTracking=summary -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UnlockDiagnosticVMOptions OpenJDK11: -Xlog:gc:utctime -XX:NativeMemoryTracking=summary Set the value of this variable to true to include diagnostic information in standard output when events occur. If this variable is defined as true in an environment where JAVA_DIAGNOSTICS has already been defined as true , diagnostics are still included. DEBUG true false -agentlib:jdwp=transport=dt_socket,address=USDDEBUG_PORT,server=y,suspend=n Enables remote debugging. DEBUG_PORT 8787 8787 -agentlib:jdwp=transport=dt_socket,address=USDDEBUG_PORT,server=y,suspend=n Specifies the port used for debugging. 
JAVA_CORE_LIMIT Undefined -XX:parallelGCThreads -Djava.util.concurrent.ForkJoinPool.common.parallelism -XX:CICompilerCount A user-defined limit on the number of cores. If the container reports a limit constraint, the value of the JVM settings is limited to the container core limit. The value of -XXCICompilerCount is always 2 . By default, this variable is undefined. In that case, if a limit is not defined on the container, the JVM settings are not set. GC_MIN_HEAP_FREE_RATIO 20 10 -XX:MinHeapFreeRatio Minimum percentage of heap free after garbage collection to avoid expansion. GC_MAX_HEAP_FREE_RATIO 40 20 -XX:MaxHeapFreeRatio Maximum percentage of heap free after garbage collection to avoid shrinking. GC_TIME_RATIO 4 4 -XX:GCTimeRatio Specifies the ratio of the time spent outside of garbage collection (for example, time spent in application execution) to the time spent in garbage collection. GC_ADAPTIVE_SIZE_POLICY_WEIGHT 90 90 -XX:AdaptiveSizePolicyWeight The weighting given to the current garbage collection time versus the garbage collection times. GC_METASPACE_SIZE 20 96 -XX:MetaspaceSize The initial metaspace size. GC_MAX_METASPACE_SIZE 100 256 -XX:MaxMetaspaceSize The maximum metaspace size. GC_CONTAINER_OPTIONS -XX:+UseG1GC -XX:-UseParallelOldGC -XX:-UseParallelOldGC Specifies the Java garbage collection to use. The value of the variable should be the JRE command-line options to specify the required garbage collection. The JRE command specified overrides the default. The following environment variables are deprecated: JAVA_OPTIONS : Use JAVA_OPTS . INITIAL_HEAP_PERCENT : Use JAVA_INITIAL_MEM_RATIO . CONTAINER_HEAP_PERCENT : Use JAVA_MAX_MEM_RATIO . 4.3. Build Extensions and Project Artifacts The JBoss EAP for OpenShift image extends database support in OpenShift using various artifacts. These artifacts are included in the built image through different mechanisms: S2I artifacts that are injected into the image during the S2I process. Runtime artifacts from environment files provided through the OpenShift Secret mechanism. Important Support for using the Red Hat-provided internal datasource drivers with the JBoss EAP for OpenShift image is now deprecated. Red Hat recommends that you use JDBC drivers obtained from your database vendor for your JBoss EAP applications. The following internal datasources are no longer provided with the JBoss EAP for OpenShift image: MySQL PostgreSQL For more information about installing drivers, see Modules, Drivers, and Generic Deployments . For more information on configuring JDBC drivers with JBoss EAP, see JDBC drivers in the JBoss EAP Configuration Guide . Note that you can also create a custom layer to install these drivers and datasources if you want to add them to a provisioned server. Additional Resources Capability Trimming in JBoss EAP for OpenShift 4.3.1. S2I Artifacts The S2I artifacts include modules, drivers, and additional generic deployments that provide the necessary configuration infrastructure required for the deployment. This configuration is built into the image during the S2I process so that only the datasources and associated resource adapters need to be configured at runtime. See Artifact Repository Mirrors for additional guidance on how to instruct the S2I process to utilize the custom Maven artifacts repository mirror. 4.3.1.1. 
Modules, Drivers, and Generic Deployments There are a few options for including these S2I artifacts in the JBoss EAP for OpenShift image: Include the artifact in the application source deployment directory. The artifact is downloaded during the build and injected into the image. This is similar to deploying an application on the JBoss EAP for OpenShift image. Include the CUSTOM_INSTALL_DIRECTORIES environment variable, a list of comma-separated list of directories used for installation and configuration of artifacts for the image during the S2I process. There are two methods for including this information in the S2I: An install.sh script in the nominated installation directory. The install script executes during the S2I process and operates with impunity. install.sh Script Example The install.sh script is responsible for customizing the base image using APIs provided by install-common.sh . install-common.sh contains functions that are used by the install.sh script to install and configure the modules, drivers, and generic deployments. Functions contained within install-common.sh : install_modules configure_drivers install_deployments Modules A module is a logical grouping of classes used for class loading and dependency management. Modules are defined in the EAP_HOME /modules/ directory of the application server. Each module exists as a subdirectory, for example EAP_HOME /modules/org/apache/ . Each module directory then contains a slot subdirectory, which defaults to main and contains the module.xml configuration file and any required JAR files. For more information about configuring module.xml files for MySQL and PostgreSQL JDBC drivers, see the Datasource Configuration Examples in the JBoss EAP Configuration Guide. Example module.xml File for PostgreSQL Datasource Example module.xml File for MySQL Connect/J 8 Datasource Note The ".Z" in mysql-connector-java-8.0.Z.jar indicates the version of the JAR file downloaded. The file can be renamed, but the name must match the name in the module.xml file. The install_modules function in install.sh copies the respective JAR files to the modules directory in JBoss EAP, along with the module.xml . Drivers Drivers are installed as modules. The driver is then configured in install.sh by the configure_drivers function, the configuration properties for which are defined in a runtime artifact environment file. Adding Datasource Drivers The MySQL and PostgreSQL datasources are no longer provided as pre-configured internal datasources. You can still install these drivers as modules; see the description in Modules, Drivers, and Generic Deployments . You can obtain these JDBC drivers from the database vendor for your JBoss EAP applications. Create a drivers.env file for each datasource to be installed. Example drivers.env File for MySQL Datasource Example drivers.env File for PostgreSQL Datasource For information about download locations for various drivers, such as MySQL or PostgreSQL, see JDBC Driver Download Locations in the Configuration Guide. Generic Deployments Deployable archive files, such as JARs, WARs, RARs, or EARs, can be deployed from an injected image using the install_deployments function supplied by the API in install-common.sh . 
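The following minimal sketch shows what an install.sh script for a generic deployment might look like; it follows the same pattern as the install.sh example shown earlier in this section, and the archive name my-deployment.war is illustrative only — substitute the artifact that is present in your injected directory.
#!/bin/bash
# Sketch of an install.sh that installs a generic deployment from the injected directory.
injected_dir=$1
# install-common.sh provides the install_deployments function used below.
source /usr/local/s2i/install-common.sh
# Copy the archive into the image as a deployment (file name is an assumption for this example).
install_deployments ${injected_dir}/my-deployment.war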
If the CUSTOM_INSTALL_DIRECTORIES environment variable has been declared but no install.sh scripts are found in the custom installation directories, the following artifact directories will be copied to their respective destinations in the built image: modules/* copied to USDJBOSS_HOME/modules/ configuration/* copied to USDJBOSS_HOME/standalone/configuration deployments/* copied to USDJBOSS_HOME/standalone/deployments This is a basic configuration approach compared to the install.sh alternative, and requires the artifacts to be structured appropriately. 4.3.2. Runtime Artifacts 4.3.2.1. Datasources There are two types of datasources: Internal datasources. These datasources run on OpenShift, but are not available by default through the Red Hat Registry or in the OpenShift repository. Configuration of these datasources is provided by environment files added to OpenShift Secrets. External datasources. These datasources do not run on OpenShift. Configuration of external datasources is provided by environment files added to OpenShift Secrets. Note For more details about creating and configuring OpenShift Secrets, see Secrets . You can create the datasource environment file in a directory, such as a configuration directory of your source project. The following example shows the content of a sample datasource environment file: Example: Datasource Environment File The DB_SERVICE_PREFIX_MAPPING property is a comma-separated list of datasource property prefixes. These prefixes are then appended to all properties for that datasource. Multiple datasources can then be included in a single environment file. Alternatively, each datasource can be provided in separate environment files. Datasources contain two types of properties: connection pool-specific properties and database driver-specific properties. The connection pool-specific properties produce a connection to a datasource. Database driver-specific properties determine the driver for a datasource and are configured as a driver S2I artifact. In the above example, DS1 is the datasource prefix, CONNECTION_CHECKER specifies a connection checker class used to validate connections for a database, and EXCEPTION_SORTER specifies the exception sorter class used to detect fatal database connection exceptions. The datasources environment files are added to the OpenShift Secret for the project. These environment files are then called within the template using the ENV_FILES environment property, the value of which is a comma-separated list of fully qualified environment files as shown below. 4.3.2.2. Resource Adapters Configuration of resource adapters is provided by environment files added to OpenShift Secrets. Table 4.2. Resource Adapter Properties Attribute Description PREFIX _ID The identifier of the resource adapter as specified in the server configuration file. PREFIX _ARCHIVE The resource adapter archive. PREFIX _MODULE_SLOT The slot subdirectory, which contains the module.xml configuration file and any required JAR files. PREFIX _MODULE_ID The JBoss Module ID where the object factory Java class can be loaded from. PREFIX _CONNECTION_CLASS The fully qualified class name of a managed connection factory or admin object. PREFIX _CONNECTION_JNDI The JNDI name for the connection factory. PREFIX _PROPERTY_ParentDirectory Directory where the data files are stored. PREFIX _PROPERTY_AllowParentPaths Set AllowParentPaths to false to disallow .. in paths. This prevents requesting files that are not contained in the parent directory. 
PREFIX _POOL_MAX_SIZE The maximum number of connections for a pool. No more connections will be created in each sub-pool. PREFIX _POOL_MIN_SIZE The minimum number of connections for a pool. PREFIX _POOL_PREFILL Specifies if the pool should be prefilled. Changing this value requires a server restart. PREFIX _POOL_FLUSH_STRATEGY How the pool should be flushed in case of an error. Valid values are: FailingConnectionOnly (default), IdleConnections , and EntirePool . The RESOURCE_ADAPTERS property is a comma-separated list of resource adapter property prefixes. These prefixes are then appended to all properties for that resource adapter. Multiple resource adapters can then be included in a single environment file. In the example below, MYRA is used as the prefix for a resource adapter. Alternatively, each resource adapter can be provided in separate environment files. Example: Resource Adapter Environment File The resource adapter environment files are added to the OpenShift Secret for the project namespace. These environment files are then called within the template using the ENV_FILES environment property, the value of which is a comma-separated list of fully qualified environment files as shown below. 4.4. Results of using JBoss EAP Templates for OpenShift When you use JBoss EAP templates to compile your application, two images might be generated. An intermediate image named [application name]-build-artifacts might be generated before the final image, [application name] , is created. You can remove the [application name]-build-artifacts image after your application has been deployed. 4.5. SSO Configuration of Red Hat JBoss Enterprise Application Platform for OpenShift Images In Red Hat JBoss Enterprise Application Platform for OpenShift images, SSO is configured to use the legacy security subsystem. The environment variable SSO_FORCE_LEGACY_SECURITY is set to true in these images. If you want to use the elytron subsystem for SSO security, update the value of the SSO_FORCE_LEGACY_SECURITY environment variable to false . 4.6. Default Datasource The datasource ExampleDS is not available in JBoss EAP 7.4. Some quickstarts require this datasource: cmt thread-racing Applications developed by customers might also require the ExampleDS datasource. If you need the default datasource, use the ENABLE_GENERATE_DEFAULT_DATASOURCE environment variable to include it when provisioning a JBoss EAP server. | [
"new-app --template=eap74-basic-s2i -p IMAGE_STREAM_NAMESPACE=eap-demo -p SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/jboss-eap-quickstarts -p SOURCE_REPOSITORY_REF=7.4.x -p CONTEXT_DIR=kitchensink -e ADMIN_USERNAME=myspecialuser -e ADMIN_PASSWORD=myspecialp@ssw0rd",
"#!/bin/bash injected_dir=USD1 source /usr/local/s2i/install-common.sh install_deployments USD{injected_dir}/injected-deployments.war install_modules USD{injected_dir}/modules configure_drivers USD{injected_dir}/drivers.env",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <module xmlns=\"urn:jboss:module:1.0\" name=\"org.postgresql\"> <resources> <resource-root path=\"postgresql-jdbc.jar\"/> </resources> <dependencies> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <module xmlns=\"urn:jboss:module:1.0\" name=\"com.mysql\"> <resources> <resource-root path=\"mysql-connector-java-8.0.Z.jar\" /> </resources> <dependencies> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"#DRIVER DRIVERS=MYSQL MYSQL_DRIVER_NAME=mysql MYSQL_DRIVER_MODULE=org.mysql MYSQL_DRIVER_CLASS=com.mysql.cj.jdbc.Driver MYSQL_XA_DATASOURCE_CLASS=com.mysql.cj.jdbc.MysqlXADataSource",
"#DRIVER DRIVERS=POSTGRES POSTGRES_DRIVER_NAME=postgresql POSTGRES_DRIVER_MODULE=org.postgresql POSTGRES_DRIVER_CLASS=org.postgresql.Driver POSTGRES_XA_DATASOURCE_CLASS=org.postgresql.xa.PGXADataSource",
"DB_SERVICE_PREFIX_MAPPING=PostgresXA-POSTGRES=DS1 DS1_JNDI=java:jboss/datasources/pgds DS1_DRIVER=postgresql-42.2.5.jar DS1_USERNAME=postgres DS1_PASSWORD=postgres DS1_MAX_POOL_SIZE=20 DS1_MIN_POOL_SIZE=20 DS1_CONNECTION_CHECKER=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker DS1_EXCEPTION_SORTER=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter",
"{ \"Name\": \"ENV_FILES\", \"Value\": \"/etc/extensions/datasources1.env,/etc/extensions/datasources2.env\" }",
"#RESOURCE_ADAPTER RESOURCE_ADAPTERS=MYRA MYRA_ID=myra MYRA_ARCHIVE=myra.rar MYRA_CONNECTION_CLASS=org.javaee7.jca.connector.simple.connector.outbound.MyManagedConnectionFactory MYRA_CONNECTION_JNDI=java:/eis/MySimpleMFC",
"{ \"Name\": \"ENV_FILES\", \"Value\": \"/etc/extensions/resourceadapter1.env,/etc/extensions/resourceadapter2.env\" }",
"ENABLE_GENERATE_DEFAULT_DATASOURCE=true"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_with_jboss_eap_for_openshift_container_platform/configuring_eap_openshift_image |
Chapter 6. Kernel | Chapter 6. Kernel Support for Ceph Block Devices The libceph.ko and rbd.ko modules have been added to the Red Hat Enterprise Linux 7.1 kernel. These RBD kernel modules allow a Linux host to see a Ceph block device as a regular disk device entry which can be mounted to a directory and formatted with a standard file system, such as XFS or ext4 . Note that the CephFS module, ceph.ko , is currently not supported in Red Hat Enterprise Linux 7.1. Concurrent Flash MCL Updates Microcode level upgrades (MCL) are enabled in Red Hat Enterprise Linux 7.1 on the IBM System z architecture. These upgrades can be applied without impacting I/O operations to the flash storage media and notify users of the changed flash hardware service level. Dynamic kernel Patching Red Hat Enterprise Linux 7.1 introduces kpatch , a dynamic "kernel patching utility", as a Technology Preview. The kpatch utility allows users to manage a collection of binary kernel patches which can be used to dynamically patch the kernel without rebooting. Note that kpatch is supported to run only on AMD64 and Intel 64 architectures. Crashkernel with More than 1 CPU Red Hat Enterprise Linux 7.1 enables booting crashkernel with more than one CPU. This function is supported as a Technology Preview. dm-era Target Red Hat Enterprise Linux 7.1 introduces the dm-era device-mapper target as a Technology Preview. dm-era keeps track of which blocks were written within a user-defined period of time called an "era". Each era target instance maintains the current era as a monotonically increasing 32-bit counter. This target enables backup software to track which blocks have changed since the last backup. It also enables partial invalidation of the contents of a cache to restore cache coherency after rolling back to a vendor snapshot. The dm-era target is primarily expected to be paired with the dm-cache target. Cisco VIC kernel Driver The Cisco VIC Infiniband kernel driver has been added to Red Hat Enterprise Linux 7.1 as a Technology Preview. This driver allows the use of Remote Directory Memory Access (RDMA)-like semantics on proprietary Cisco architectures. Enhanced Entropy Management in hwrng The paravirtualized hardware RNG (hwrng) support for Linux guests via virtio-rng has been enhanced in Red Hat Enterprise Linux 7.1. Previously, the rngd daemon needed to be started inside the guest and directed to the guest kernel's entropy pool. Since Red Hat Enterprise Linux 7.1, the manual step has been removed. A new khwrngd thread fetches entropy from the virtio-rng device if the guest entropy falls below a specific level. Making this process transparent helps all Red Hat Enterprise Linux guests in utilizing the improved security benefits of having the paravirtualized hardware RNG provided by KVM hosts. Scheduler Load-Balancing Performance Improvement Previously, the scheduler load-balancing code balanced for all idle CPUs. In Red Hat Enterprise Linux 7.1, idle load balancing on behalf of an idle CPU is done only when the CPU is due for load balancing. This new behavior reduces the load-balancing rate on non-idle CPUs and therefore the amount of unnecessary work done by the scheduler, which improves its performance. Improved newidle Balance in Scheduler The behavior of the scheduler has been modified to stop searching for tasks in the newidle balance code if there are runnable tasks, which leads to better performance. 
HugeTLB Supports Per-Node 1GB Huge Page Allocation Red Hat Enterprise Linux 7.1 has added support for gigantic page allocation at runtime, which allows the user of 1GB hugetlbfs to specify which Non-Uniform Memory Access (NUMA) Node the 1GB should be allocated on during runtime. New MCS-based Locking Mechanism Red Hat Enterprise Linux 7.1 introduces a new locking mechanism, MCS locks. This new locking mechanism significantly reduces spinlock overhead in large systems, which makes spinlocks generally more efficient in Red Hat Enterprise Linux 7.1. Process Stack Size Increased from 8KB to 16KB Since Red Hat Enterprise Linux 7.1, the kernel process stack size has been increased from 8KB to 16KB to help large processes that use stack space. uprobe and uretprobe Features Enabled in perf and systemtap In Red Hat Enterprise Linux 7.1, the uprobe and uretprobe features work correctly with the perf command and the systemtap script. End-To-End Data Consistency Checking End-To-End data consistency checking on IBM System z is fully supported in Red Hat Enterprise Linux 7.1. This enhances data integrity and more effectively prevents data corruption as well as data loss. DRBG on 32-Bit Systems In Red Hat Enterprise Linux 7.1, the deterministic random bit generator (DRBG) has been updated to work on 32-bit systems. NFSoRDMA Available As a Technology Preview, the NFSoRDMA service has been enabled for Red Hat Enterprise Linux 7.1. This makes the svcrdma module available for users who intend to use Remote Direct Memory Access (RDMA) transport with the Red Hat Enterprise Linux 7 NFS server. Support for Large Crashkernel Sizes The Kdump kernel crash dumping mechanism on systems with large memory, that is up to the Red Hat Enterprise Linux 7.1 maximum memory supported limit of 6TB, has become fully supported in Red Hat Enterprise Linux 7.1. Kdump Supported on Secure Boot Machines With Red Hat Enterprise Linux 7.1, the Kdump crash dumping mechanism is supported on machines with enabled Secure Boot. Firmware-assisted Crash Dumping Red Hat Enterprise Linux 7.1 introduces support for firmware-assisted dump (fadump), which provides an alternative crash dumping tool to kdump. The firmware-assisted feature provides a mechanism to release the reserved dump memory for general use once the crash dump is saved to the disk. This avoids the need to reboot the system after performing the dump, and thus reduces the system downtime. In addition, fadump uses of the kdump infrastructure already present in the user space, and works seamlessly with the existing kdump init scripts. Runtime Instrumentation for IBM System z As a Technology Preview, support for the Runtime Instrumentation feature has been added for Red Hat Enterprise Linux 7.1 on IBM System z. Runtime Instrumentation enables advanced analysis and execution for a number of user-space applications available with the IBM zEnterprise EC12 system. Cisco usNIC Driver Cisco Unified Communication Manager (UCM) servers have an optional feature to provide a Cisco proprietary User Space Network Interface Controller (usNIC), which allows performing Remote Direct Memory Access (RDMA)-like operations for user-space applications. As a Technology Preview, Red Hat Enterprise Linux 7.1 includes the libusnic_verbs driver, which makes it possible to use usNIC devices via standard InfiniBand RDMA programming based on the Verbs API. Intel Ethernet Server Adapter X710/XL710 Driver Update The i40e and i40evf kernel drivers have been updated to their latest upstream versions. 
These updated drivers are included as a Technology Preview in Red Hat Enterprise Linux 7.1. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/chap-Red_Hat_Enterprise_Linux-7.1_Release_Notes-Kernel |
Configure Red Hat Quay | Configure Red Hat Quay Red Hat Quay 3.13 Customizing Red Hat Quay using configuration options Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/configure_red_hat_quay/index |
Chapter 2. User-provisioned infrastructure | Chapter 2. User-provisioned infrastructure 2.1. Installation requirements for IBM Z and IBM LinuxONE infrastructure Before you begin an installation on IBM Z(R) infrastructure, be sure that your IBM Z(R) environment meets the following installation requirements. For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. 2.1.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 2.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To improve high availability of your cluster, distribute the control plane machines over different hypervisor instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 2.1.1.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources See Bridging a HiperSockets LAN with a z/VM Virtual Switch in IBM(R) Documentation. See Scaling HyperPAV alias devices on Linux guests on z/VM for performance optimization. Processors Resource/Systems Manager Planning Guide in IBM(R) Documentation for PR/SM mode considerations. IBM Dynamic Partition Manager (DPM) Guide in IBM(R) Documentation for DPM mode considerations. Topics in LPAR performance for LPAR weight management and entitlements. Recommended host practices for IBM Z(R) & IBM(R) LinuxONE environments 2.1.1.2. Minimum IBM Z system environment The following IBM(R) hardware is supported with OpenShift Container Platform version 4.18. Table 2.3. 
Supported IBM(R) hardware z/VM LPAR [1] RHEL KVM [2] IBM(R) z16 (all models) supported supported supported IBM(R) z15 (all models) supported supported supported IBM(R) z14 (all models) supported supported supported IBM(R) LinuxONE 4 (all models) supported supported supported IBM(R) LinuxONE III (all models) supported supported supported IBM(R) LinuxONE Emperor II supported supported supported IBM(R) LinuxONE Rockhopper II supported supported supported When running OpenShift Container Platform on IBM Z(R) without a hypervisor use the Dynamic Partition Manager (DPM) to manage your machine. The RHEL KVM host in your environment must meet certain requirements to host the virtual machines that you plan for the OpenShift Container Platform environment. See Getting started with virtualization . Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Important You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust the capacity correctly on each hypervisor layer and ensure that there are sufficient resources for every OpenShift Container Platform cluster. Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. For more information, see "Recommended host practices for IBM Z & IBM LinuxONE environments". IBM Z operating system requirements Table 2.4. Operating system requirements z/VM LPAR RHEL KVM Hypervisor One instance of z/VM 7.2 or later IBM(R) z14 or later with DPM or PR/SM One LPAR running on RHEL 8.6 or later with KVM, which is managed by libvirt OpenShift Container Platform control plane machines Three guest virtual machines Three LPARs Three guest virtual machines OpenShift Container Platform compute machines Two guest virtual machines Two LPARs Two guest virtual machines Temporary OpenShift Container Platform bootstrap machine One machine One machine One machine IBM Z network connectivity Table 2.5. Network connectivity requirements z/VM LPAR RHEL KVM Network Interface Card (NIC) One single z/VM virtual NIC in layer 2 mode - - Virtual switch (vSwitch) z/VM VSWITCH in layer 2 Ethernet mode - - Network adapter Direct-attached OSA, RoCE, or HiperSockets Direct-attached OSA, RoCE, or HiperSockets A RHEL KVM host configured with OSA, RoCE, or HiperSockets Either a RHEL KVM host that is configured to use bridged networking in libvirt or MacVTap to connect the network to the guests. See Types of virtual network connections . Disk storage Table 2.6. Disk storage requirements z/VM LPAR RHEL KVM Fibre Connection (FICON) z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. Dedicated DASDs that must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). 
If available, use HyperPAV to ensure optimal performance. Virtual block device Fibre Channel Protocol (FCP) Dedicated FCP or EDEV Dedicated FCP or EDEV Virtual block device QCOW Not supported Not supported Supported NVMe Not supported Supported Virtual block device 2.1.1.3. Preferred IBM Z system environment The preferred system environment for running OpenShift Container Platform version 4.18 on IBM Z(R) hardware is as follows: Hardware requirements Three logical partitions (LPARs) that each have the equivalent of six Integrated Facilities for Linux (IFLs), which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. HiperSockets that are attached to a node directly as a device. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network. Note When installing in a z/VM environment, you can also bridge HiperSockets with one z/VM VSWITCH to be transparent to the z/VM guest. IBM Z operating system requirements Table 2.7. Operating system requirements z/VM [1] LPAR RHEL KVM Hypervisor One instance of z/VM 7.2 or later IBM(R) z14 or later with DPM or PR/S One LPAR running on RHEL 8.6 or later with KVM, which is managed by libvirt OpenShift Container Platform control plane machines Three guest virtual machines Three LPARs Three guest virtual machines OpenShift Container Platform compute machines Six guest virtual machines Six LPARs Six guest virtual machines Temporary OpenShift Container Platform bootstrap machine One machine One machine One machine To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command SET SHARE . Do the same for infrastructure nodes, if they exist. See SET SHARE (IBM(R) Documentation). Additional resources Optimizing storage 2.1.1.4. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 2.1.1.5. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. 
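A hedged illustration of such a DHCP reservation, using ISC dhcpd syntax; the MAC address is a placeholder, and the IP address and hostname reuse the example values from the DNS samples later in this chapter:
host control-plane0 {
  hardware ethernet 52:54:00:aa:bb:cc;   # placeholder MAC address of the node's network interface
  fixed-address 192.168.1.97;            # persistent IP address for this cluster node
  option host-name "control-plane0.ocp4.example.com";
}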
Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 2.1.1.5.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 2.1.1.5.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 2.8. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.9. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.10. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. 
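For orientation, the chrony configuration that ultimately lands on the nodes is short; a minimal sketch, where ntp.example.com is a placeholder for your own time source:
server ntp.example.com iburst   # placeholder NTP server; replace with your time source
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync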
If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 2.1.1.6. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.11. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. 
DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 2.1.1.6.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 2.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 2.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. 
root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 2.1.1.7. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 2.12. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. 
Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 2.13. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 2.1.1.7.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 2.3. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 2.2. Preparing to install a cluster on IBM Z and IBM LinuxONE using user-provisioned infrastructure You prepare to install an OpenShift Container Platform cluster on IBM Z(R) and IBM(R) LinuxONE by completing the following steps: Verifying internet connectivity for your cluster. Downloading the installation program. Note If you are installing in a disconnected environment, you extract the installation program from the mirrored content. For more information, see Mirroring images for a disconnected installation . Installing the OpenShift CLI ( oc ). Note If you are installing in a disconnected environment, install oc to the mirror host. Generating an SSH key pair. 
You can use this key pair to authenticate into the OpenShift Container Platform cluster's nodes after it is deployed. Validating DNS resolution. 2.2.1. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.2.2. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 2.2.3. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. 
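If an oc binary is already present on your PATH, you can check which client version it is before deciding whether to replace it, for example:
$ oc version --client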
Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.2.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. Additional resources See About remote health monitoring for more information about the Telemetry service. 2.3. Installing a cluster with z/VM on IBM Z and IBM LinuxONE In OpenShift Container Platform version 4.18, you can install a cluster on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. 2.3.1. Prerequisites You have completed the tasks in Preparing to install a cluster on IBM Z using user-provisioned infrastructure . You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. 
To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.3.2. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. 
Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.3.3. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 2.3.3.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. 
Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . 
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.3.3.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.3.3.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.3.4. 
Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.3.4.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.14. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.15. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.16. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. 
If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 2.17. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 2.18. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 2.19. 
policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.20. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . Note The default value of Restricted sets the IP forwarding to drop. ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 2.21. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Important For OpenShift Container Platform 4.17 and later versions, clusters use 169.254.0.0/17 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 2.22. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Important For OpenShift Container Platform 4.17 and later versions, clusters use fd69::/112 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 2.23. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. 
Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full 2.3.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. 
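If you prefer to verify this setting from the command line rather than opening the file in an editor, the following sketch shows one way to do it. The sed command is commented out because you apply it only in the three-node case described earlier; the file path assumes that you generated the manifests in <installation_directory> as shown above.
SCHED=<installation_directory>/manifests/cluster-scheduler-02-config.yml
# Confirm the current value; expect "mastersSchedulable: false" for a cluster
# with dedicated compute nodes.
grep mastersSchedulable "${SCHED}"
# For a three-node cluster only, flip the value to true so that application
# workloads can run on the control plane nodes:
# sed -i 's/mastersSchedulable: false/mastersSchedulable: true/' "${SCHED}"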
To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.3.6. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 For installations on DASD-type disks, replace with device: /dev/disk/by-label/root . 3 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 ignition.firstboot ignition.platform.id=metal \ coreos.inst.ignition_url=http://<http_server>/master.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ 4 zfcp.allow_lun_scan=0 1 Specify the block device type. For installations on DASD-type disks, specify /dev/dasda . For installations on FCP-type disks, specify /dev/sda . 
2 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 For installations on DASD-type disks, replace with rd.dasd=0.0.xxxx to specify the DASD device. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 2.3.7. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on z/VM guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS z/VM guest virtual machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. If you want to enable secure boot, you have obtained the appropriate Red Hat Product Signing Key and read Secure boot on IBM Z and IBM LinuxONE in IBM documentation. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. Optional: To enable secure boot, add coreos.inst.secure_ipl For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. 
Example parameter file, bootstrap-0.parm , for the bootstrap machine: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 coreos.inst.secure_ipl \ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.dasd=0.0.3490 \ zfcp.allow_lun_scan=0 1 Specify the block device type. For installations on DASD-type disks, specify /dev/dasda . For installations on FCP-type disks, specify /dev/sda . 2 Specify the location of the Ignition config file. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 Optional: To enable secure boot, add coreos.inst.secure_ipl . Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . The following is an example parameter file worker-1.parm for a compute node with multipathing: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 \ zfcp.allow_lun_scan=0 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to z/VM, for example with FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see Booting the installation on IBM Z(R) to install RHEL in z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node. See PUNCH in IBM Documentation. Tip You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines. Log in to CMS on the bootstrap machine. 
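Before you IPL, you can optionally confirm from CMS that the kernel, parameter file, and initramfs arrived in the reader in the expected order. The following CP commands are a sketch that assumes the reader is at the default virtual device address 00C ; adjust them for your environment. QUERY RDR ALL lists the spool files that were punched to the reader, CHANGE RDR ALL KEEP NOHOLD keeps the files in the reader so that you can retry if the IPL fails, and IPL 00C CLEAR performs the IPL from the reader that the next step describes.
CP QUERY RDR ALL
CP CHANGE RDR ALL KEEP NOHOLD
CP IPL 00C CLEAR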
IPL the bootstrap machine from the reader: See IPL in IBM Documentation. Repeat this procedure for the other machines in the cluster. 2.3.7.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.3.7.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. 
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set the fail_over_mac=1 option in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. 
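Putting the preceding options together, the following sketch shows what a complete set of networking arguments might look like for a node that uses a statically addressed active-backup bond. The interface names, addresses, and hostname are placeholders, and the arguments keep the required ordering of ip= , nameserver= , and then bond= . As with the earlier examples, write the options as a single line in the actual parameter file.
rd.neednet=1 \
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none \
nameserver=4.4.4.41 \
bond=bond0:enc1,enc2:mode=active-backup,fail_over_mac=1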
Configuring VLANs on bonded interfaces Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter. Use the following example to configure the bonded interface with a VLAN and to use DHCP: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 2.3.8. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.3.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI.
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.3.10. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. 
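Because new CSRs continue to appear as machines join the cluster, you might need to repeat the approval command several times. The following sketch polls for pending CSRs and approves them in a loop; it is a convenience for installation only, and you should weigh it against the guidance above about verifying the requestor and the node identity before relying on it elsewhere.
# Approve any pending CSRs every 30 seconds; stop the loop with Ctrl+C once
# all nodes report the Ready status.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 30
done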
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 2.3.11. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Configure the Operators that are not available. 2.3.11.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. 
Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.3.11.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The registry storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed . 2.3.11.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.3.12. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Verification If you have enabled secure boot during the OpenShift Container Platform bootstrap process, the following verification steps are required: Debug the node by running the following command: USD oc debug node/<node_name> chroot /host Confirm that secure boot is enabled by running the following command: USD cat /sys/firmware/ipl/secure Example output 1 1 1 The value is 1 if secure boot is enabled and 0 if secure boot is not enabled. 2.3.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH 2.3.14. steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting . 2.4. Installing a cluster with z/VM on IBM Z and IBM LinuxONE in a disconnected environment In OpenShift Container Platform version 4.18, you can install a cluster on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a disconnected environment. Note While this document refers to only IBM Z(R), all information in it also applies to IBM(R) LinuxONE. 2.4.1. Prerequisites You have completed the tasks in Preparing to install a cluster on IBM Z using user-provisioned infrastructure . You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Before you begin the installation process, you must move or remove any existing installation files. 
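For example, a minimal way to set aside a stale installation directory before starting over, with the directory name as a placeholder: USD mv <installation_directory> <installation_directory>.bak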
This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are done from a machine with access to the installation media. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.4.2. About installations in restricted networks In OpenShift Container Platform 4.18, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 2.4.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 2.4.3. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. 
Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.4.4. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . 
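Before continuing, a quick sanity check that the customized file still parses as YAML can catch indentation mistakes early. The following sketch assumes python3 with the PyYAML module is available on the installation host; any YAML-aware tool works equally well: USD python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' <installation_directory>/install-config.yaml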
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 2.4.4.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. 
This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 18 Provide the imageContentSources section according to the output of the command that you used to mirror the repository. 
Important When using the oc adm release mirror command, use the output from the imageContentSources section. When using oc mirror command, use the repositoryDigestMirrors section of the ImageContentSourcePolicy file that results from running the command. ImageContentSourcePolicy is deprecated. For more information see Configuring image registry repository mirroring . 2.4.4.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. 
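Before saving the file, a hedged way to confirm that the proxy endpoint is reachable and accepts the credentials is a simple HEAD request through it from the provisioning host. The target URL is a placeholder; in a fully disconnected environment, substitute a host that the proxy is actually expected to reach: USD curl -x http://<username>:<pswd>@<ip>:<port> -sI https://<reachable_host>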
Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.4.4.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.4.5. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. 
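A common way to supply such settings at installation time is an extra manifest. The following sketch assumes the conventional file name <installation_directory>/manifests/cluster-network-03-config.yml and sets only the IPsec mode used in the example later in this section; treat the exact field values as assumptions to adapt to your requirements: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: ipsecConfig: mode: Full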
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.4.5.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.24. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.25. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.26. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. 
If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 2.27. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 2.28. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 2.29. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. 
The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.30. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . Note The default value of Restricted sets the IP forwarding to drop. ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 2.31. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Important For OpenShift Container Platform 4.17 and later versions, clusters use 169.254.0.0/17 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 2.32. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Important For OpenShift Container Platform 4.17 and later versions, clusters use fd69::/112 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 2.33. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. 
Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full 2.4.6. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. 
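After the command completes, a hedged way to confirm that each generated Ignition file is well-formed JSON is to parse it with a JSON tool. The following sketch assumes the jq utility is installed; the file names correspond to the machine roles described next: USD jq empty <installation_directory>/bootstrap.ign <installation_directory>/master.ign <installation_directory>/worker.ign The command prints nothing and returns a zero exit status when all three files parse cleanly.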
Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.4.7. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 For installations on DASD-type disks, replace with device: /dev/disk/by-label/root . 3 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 ignition.firstboot ignition.platform.id=metal \ coreos.inst.ignition_url=http://<http_server>/master.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ 4 zfcp.allow_lun_scan=0 1 Specify the block device type. For installations on DASD-type disks, specify /dev/dasda . For installations on FCP-type disks, specify /dev/sda . 2 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 
4 For installations on DASD-type disks, replace with rd.dasd=0.0.xxxx to specify the DASD device. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 2.4.8. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on z/VM guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS z/VM guest virtual machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. If you want to enable secure boot, you have obtained the appropriate Red Hat Product Signing Key and read Secure boot on IBM Z and IBM LinuxONE in IBM documentation. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. Optional: To enable secure boot, add coreos.inst.secure_ipl For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. 
Example parameter file, bootstrap-0.parm , for the bootstrap machine: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 coreos.inst.secure_ipl \ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.dasd=0.0.3490 \ zfcp.allow_lun_scan=0 1 Specify the block device type. For installations on DASD-type disks, specify /dev/dasda . For installations on FCP-type disks, specify /dev/sda . 2 Specify the location of the Ignition config file. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 Optional: To enable secure boot, add coreos.inst.secure_ipl . Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . The following is an example parameter file worker-1.parm for a compute node with multipathing: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 \ zfcp.allow_lun_scan=0 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to z/VM, for example with FTP. For details about how to transfer the files with FTP and boot from the virtual reader, see Booting the installation on IBM Z(R) to install RHEL in z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node. See PUNCH in IBM Documentation. Tip You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines. Log in to CMS on the bootstrap machine. 
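For reference, the punch step described above might look like the following when it is driven from a Linux machine with the vmur command. The guest ID, spool file names, and artifact file names are placeholders: USD vmur punch -r -u <bootstrap_guest_id> -N kernel.img rhcos-<version>-live-kernel-s390x USD vmur punch -r -u <bootstrap_guest_id> -N bootstrap-0.parm bootstrap-0.parm USD vmur punch -r -u <bootstrap_guest_id> -N initrd.img rhcos-<version>-live-initramfs.s390x.img Punch the kernel first, then the parameter file, and then the initramfs, because the reader is IPLed in that order.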
IPL the bootstrap machine from the reader: See IPL in IBM Documentation. Repeat this procedure for the other machines in the cluster. 2.4.8.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.4.8.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. 
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set the fail_over_mac=1 option in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. 
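Putting the preceding pieces together in the required order ( ip= , nameserver= , and then bond= ), a complete bonded static configuration might look like the following line; the addresses and interface names are reused from the earlier examples: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none nameserver=4.4.4.41 bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1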
Configuring VLANs on bonded interfaces Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter and use DHCP, for example: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 2.4.9. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.4.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI.
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.4.11. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 2.4.12. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Configure the Operators that are not available. 2.4.12.1. 
Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 2.4.12.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.4.12.2.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The provisioned storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. To do so, edit the configs.imageregistry/cluster resource and change the managementState line from Removed to Managed . 2.4.12.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory.
If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.4.13. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. 
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. Verification If you have enabled secure boot during the OpenShift Container Platform bootstrap process, the following verification steps are required: Debug the node by running the following command: USD oc debug node/<node_name> In the debug shell, access the host file system by running the following command: USD chroot /host Confirm that secure boot is enabled by running the following command: USD cat /sys/firmware/ipl/secure Example output 1 1 1 The value is 1 if secure boot is enabled and 0 if secure boot is not enabled. Additional resources How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH 2.4.14. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster . 2.5. Installing a cluster with RHEL KVM on IBM Z and IBM LinuxONE In OpenShift Container Platform version 4.18, you can install a cluster on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. 2.5.1. Prerequisites You have completed the tasks in Preparing to install a cluster on IBM Z using user-provisioned infrastructure . You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster.
To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. You provisioned a RHEL Kernel Virtual Machine (KVM) system that is hosted on the logical partition (LPAR) and based on RHEL 8.6 or later. See Red Hat Enterprise Linux 8 and 9 Life Cycle . 2.5.2. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Choose to perform either a fast track installation of Red Hat Enterprise Linux CoreOS (RHCOS) or a full installation of Red Hat Enterprise Linux CoreOS (RHCOS). For the full installation, you must set up an HTTP or HTTPS server to provide Ignition files and install images to the cluster nodes. For the fast track installation an HTTP or HTTPS server is not required, however, a DHCP server is required. See sections "Fast-track installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines" and "Full installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines". Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. 
Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.5.3. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 2.5.3.1. 
Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. 
If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.5.3.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.5.3.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. 
Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.5.4. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.5.4.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.34. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. 
For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.35. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.36. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 2.37. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. 
The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 2.38. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 2.39. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.40. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. 
ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . Note The default value of Restricted sets the IP forwarding to drop. ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 2.41. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Important For OpenShift Container Platform 4.17 and later versions, clusters use 169.254.0.0/17 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 2.42. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Important For OpenShift Container Platform 4.17 and later versions, clusters use fd69::/112 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 2.43. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full 2.5.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. 
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.5.6. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) as Red Hat Enterprise Linux (RHEL) guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. You can perform a fast-track installation of RHCOS that uses a prepackaged QEMU copy-on-write (QCOW2) disk image. Alternatively, you can perform a full installation on a new QCOW2 disk image. To add further security to your system, you can optionally install RHCOS using IBM(R) Secure Execution before proceeding to the fast-track installation. 2.5.6.1. Installing RHCOS using IBM Secure Execution Before you install RHCOS using IBM(R) Secure Execution, you must prepare the underlying infrastructure. 
Prerequisites IBM(R) z15 or later, or IBM(R) LinuxONE III or later. Red Hat Enterprise Linux (RHEL) 8 or later. You have a bootstrap Ignition file. The file is not protected, enabling others to view and edit it. You have verified that the boot image has not been altered after installation. You must run all your nodes as IBM(R) Secure Execution guests. Procedure Prepare your RHEL KVM host to support IBM(R) Secure Execution. By default, KVM hosts do not support guests in IBM(R) Secure Execution mode. To support guests in IBM(R) Secure Execution mode, KVM hosts must boot in LPAR mode with the kernel parameter specification prot_virt=1 . To enable prot_virt=1 on RHEL 8, follow these steps: Navigate to /boot/loader/entries/ to modify your bootloader configuration file *.conf . Add the kernel command line parameter prot_virt=1 . Run the zipl command and reboot your system. KVM hosts that successfully start with support for IBM(R) Secure Execution for Linux issue the following kernel message: prot_virt: Reserving <amount>MB as ultravisor base storage. To verify that the KVM host now supports IBM(R) Secure Execution, run the following command: # cat /sys/firmware/uv/prot_virt_host Example output 1 The value of this attribute is 1 for Linux instances that detect their environment as consistent with that of a secure host. For other instances, the value is 0. Add your host keys to the KVM guest via Ignition. During the first boot, RHCOS looks for your host keys to re-encrypt itself with them. RHCOS searches for files starting with ibm-z-hostkey- in the /etc/se-hostkeys directory. All host keys, for each machine the cluster is running on, must be loaded into the directory by the administrator. After first boot, you cannot run the VM on any other machines. Note You need to prepare your Ignition file on a safe system. For example, another IBM(R) Secure Execution guest. For example: { "ignition": { "version": "3.0.0" }, "storage": { "files": [ { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 }, { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 } ] } } Note You can add as many host keys as required if you want your node to be able to run on multiple IBM Z(R) machines. To generate the Base64 encoded string, run the following command: base64 <your-hostkey>.crt Compared to guests not running IBM(R) Secure Execution, the first boot of the machine is longer because the entire image is encrypted with a randomly generated LUKS passphrase before the Ignition phase. Add Ignition protection To protect the secrets that are stored in the Ignition config file from being read or even modified, you must encrypt the Ignition config file. Note To achieve the desired security, Ignition logging and local login are disabled by default when running IBM(R) Secure Execution. Fetch the public GPG key for the secex-qemu.qcow2 image and encrypt the Ignition config with the key by running the following command: gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign Follow the fast-track installation of RHCOS to install nodes by using the IBM(R) Secure Execution QCOW image. Note Before you start the VM, replace serial=ignition with serial=ignition_crypted , and add the launchSecurity parameter.
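As a minimal, hedged sketch only, assuming the guest is created with virt-install as in the fast-track installation procedure, the adjusted invocation for an IBM(R) Secure Execution guest might resemble the following; the guest name, resources, network, and file paths are placeholders, and all other options stay the same as in the fast-track installation command:
# Placeholders: <vm_name>, <memory_mb>, <vcpus>, <virt_network>; config.ign.gpg is the GPG-encrypted Ignition config
$ virt-install --noautoconsole --connect qemu:///system \
    --name <vm_name> \
    --launchSecurity type=s390-pv \
    --memory <memory_mb> --vcpus <vcpus> \
    --disk /var/lib/libvirt/images/<vm_name>.qcow2 \
    --disk path=/var/lib/libvirt/images/config.ign.gpg,format=raw,readonly=on,serial=ignition_crypted \
    --import \
    --network network=<virt_network>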
Verification When you have completed the fast-track installation of RHCOS and Ignition runs at the first boot, verify if decryption is successful. If the decryption is successful, you can expect an output similar to the following example: Example output [ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup... [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor. If the decryption fails, you can expect an output similar to the following example: Example output Starting coreos-ignition-s...reOS Ignition User Config Setup... [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key Additional resources Introducing IBM(R) Secure Execution for Linux Linux as an IBM(R) Secure Execution host or guest Setting up IBM(R) Secure Execution on IBM Z 2.5.6.2. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
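To apply the Butane configurations, compile them into machine config manifests. The following is a minimal sketch that assumes the files are named master-storage.bu and worker-storage.bu and that you want the installation program to pick up the resulting manifests when it generates the Ignition config files; the 99-* output file names are illustrative only:
USD butane master-storage.bu -o <installation_directory>/openshift/99-master-storage.yaml
USD butane worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml
See "Creating machine configs with Butane" in the additional resources for the supported workflow and options.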
Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ ignition.firstboot ignition.platform.id=metal \ coreos.inst.ignition_url=http://<http_server>/master.ign \ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 2 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ zfcp.allow_lun_scan=0 1 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. 2 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 2.5.6.3. Fast-track installation by using a prepackaged QCOW2 disk image Complete the following steps to create the machines in a fast-track installation of Red Hat Enterprise Linux CoreOS (RHCOS), importing a prepackaged Red Hat Enterprise Linux CoreOS (RHCOS) QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. A DHCP server that provides IP addresses. Procedure Obtain the RHEL QEMU copy-on-write (QCOW2) disk image file from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. Download the QCOW2 disk image and Ignition files to a common directory on the RHEL KVM host. For example: /var/lib/libvirt/images Note The Ignition files are generated by the OpenShift Container Platform installer. Create a new disk image with the QCOW2 disk image backing file for each KVM guest node. USD qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size} Create the new KVM guest nodes using the Ignition file and the new disk image. 
USD virt-install --noautoconsole \ --connect qemu:///system \ --name <vm_name> \ --memory <memory_mb> \ --vcpus <vcpus> \ --disk <disk> \ --launchSecurity type="s390-pv" \ 1 --import \ --network network=<virt_network_parm>,mac=<mac_address> \ --disk path=<ign_file>,format=raw,readonly=on,serial=ignition,startup_policy=optional 2 1 If IBM(R) Secure Execution is enabled, add the launchSecurity type="s390-pv" parameter. 2 If IBM(R) Secure Execution is enabled, replace serial=ignition with serial=ignition_crypted . 2.5.6.4. Full installation on a new QCOW2 disk image Complete the following steps to create the machines in a full installation on a new QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. An HTTP or HTTPS server is set up. Procedure Obtain the RHEL kernel, initramfs, and rootfs files from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Move the downloaded RHEL live kernel, initramfs, and rootfs as well as the Ignition files to an HTTP or HTTPS server before you launch virt-install . Note The Ignition files are generated by the OpenShift Container Platform installer. Create the new KVM guest nodes using the RHEL kernel, initramfs, and Ignition files, the new disk image, and adjusted parm line arguments. USD virt-install \ --connect qemu:///system \ --name <vm_name> \ --memory <memory_mb> \ --vcpus <vcpus> \ --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \ / 1 --disk <vm_name>.qcow2,size=<image_size>,cache=none,io=native \ --network network=<virt_network_parm> \ --boot hd \ --extra-args "rd.neednet=1" \ --extra-args "coreos.inst.install_dev=/dev/<block_device>" \ --extra-args "coreos.inst.ignition_url=http://<http_server>/bootstrap.ign" \ 2 --extra-args "coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img" \ 3 --extra-args "ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>" \ --noautoconsole \ --wait 1 For the --location parameter, specify the location of the kernel/initrd on the HTTP or HTTPS server. 2 Specify the location of the Ignition config file. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 2.5.6.5. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. 
The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.5.6.5.1. Networking options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking on your RHCOS nodes for ISO installations. The examples describe how to use the ip= and nameserver= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= and nameserver= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. 
Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 2.5.7. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.5.8. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. 
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.5.9. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 2.5.10. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Configure the Operators that are not available. 2.5.10.1. 
Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.5.10.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The provisioned storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed . 2.5.10.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.5.11. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 2.5.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH 2.5.13. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 2.6. Installing a cluster with RHEL KVM on IBM Z and IBM LinuxONE in a disconnected environment In OpenShift Container Platform version 4.18, you can install a cluster on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a disconnected environment. Note While this document refers to only IBM Z(R), all information in it also applies to IBM(R) LinuxONE. 2.6.1. Prerequisites You have completed the tasks in Preparing to install a cluster on IBM Z using user-provisioned infrastructure . You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. You must move or remove any existing installation files before you begin the installation process. This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are done from a machine with access to the installation media. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to.
Note Be sure to also review this site list if you are configuring a proxy. You provisioned a RHEL Kernel Virtual Machine (KVM) system that is hosted on the logical partition (LPAR) and based on RHEL 8.6 or later. See Red Hat Enterprise Linux 8 and 9 Life Cycle . 2.6.2. About installations in restricted networks In OpenShift Container Platform 4.18, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 2.6.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 2.6.3. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. 
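As an illustration of the reservations described above, a hypothetical ISC dhcpd fragment that pins one node to a fixed IP address and hostname and advertises a DNS server might look like the following; the subnet, MAC address, IP addresses, and names are placeholders for your environment:
subnet 192.168.122.0 netmask 255.255.255.0 {
  option domain-name-servers 192.168.122.1;
  host master-0 {
    hardware ethernet 52:54:00:aa:bb:cc;
    fixed-address 192.168.122.11;
    option host-name "master-0.ocp4.example.com";
  }
}
Repeat the host block for each bootstrap, control plane, and compute node.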
Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Choose to perform either a fast track installation of Red Hat Enterprise Linux CoreOS (RHCOS) or a full installation of Red Hat Enterprise Linux CoreOS (RHCOS). For the full installation, you must set up an HTTP or HTTPS server to provide Ignition files and install images to the cluster nodes. For the fast track installation an HTTP or HTTPS server is not required, however, a DHCP server is required. See sections "Fast-track installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines" and "Full installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines". Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. 
Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.6.4. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 2.6.4.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . 
If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. 
For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 18 Provide the imageContentSources section according to the output of the command that you used to mirror the repository. Important When using the oc adm release mirror command, use the output from the imageContentSources section. When using oc mirror command, use the repositoryDigestMirrors section of the ImageContentSourcePolicy file that results from running the command. ImageContentSourcePolicy is deprecated. For more information see Configuring image registry repository mirroring . 2.6.4.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.6.4.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. 
The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.6.5. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.6.5.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.44. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. 
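For orientation, a minimal sketch of the resulting CNO object, using only fields described in this section and the defaultNetwork fields detailed in the tables that follow, might look like the following. The values are illustrative defaults; you normally set them through the install-config.yaml file or an installation manifest rather than by editing the live object:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      genevePort: 6081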
defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.45. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.46. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 2.47. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 2.48. 
ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 2.49. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.50. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . Note The default value of Restricted sets the IP forwarding to drop. ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 2.51. 
gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Important For OpenShift Container Platform 4.17 and later versions, clusters use 169.254.0.0/17 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 2.52. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Important For OpenShift Container Platform 4.17 and later versions, clusters use fd69::/112 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 2.53. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full 2.6.6. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. 
Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.6.7. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) as Red Hat Enterprise Linux (RHEL) guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. You can perform a fast-track installation of RHCOS that uses a prepackaged QEMU copy-on-write (QCOW2) disk image. Alternatively, you can perform a full installation on a new QCOW2 disk image. To add further security to your system, you can optionally install RHCOS using IBM(R) Secure Execution before proceeding to the fast-track installation. 2.6.7.1. Installing RHCOS using IBM Secure Execution Before you install RHCOS using IBM(R) Secure Execution, you must prepare the underlying infrastructure. Prerequisites IBM(R) z15 or later, or IBM(R) LinuxONE III or later. Red Hat Enterprise Linux (RHEL) 8 or later. You have a bootstrap Ignition file. The file is not protected, enabling others to view and edit it. You have verified that the boot image has not been altered after installation. You must run all your nodes as IBM(R) Secure Execution guests. Procedure Prepare your RHEL KVM host to support IBM(R) Secure Execution. By default, KVM hosts do not support guests in IBM(R) Secure Execution mode. To support guests in IBM(R) Secure Execution mode, KVM hosts must boot in LPAR mode with the kernel parameter specification prot_virt=1 . 
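If you are not sure whether the parameter is already in effect, you can check the kernel command line of the running host first. This is an optional check and assumes shell access to the RHEL KVM host:
# grep prot_virt /proc/cmdline
If prot_virt=1 does not appear in the output, the host is not yet enabled for IBM(R) Secure Execution guests.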
To enable prot_virt=1 on RHEL 8, follow these steps: Navigate to /boot/loader/entries/ to modify your bootloader configuration file *.conf . Add the kernel command line parameter prot_virt=1 . Run the zipl command and reboot your system. KVM hosts that successfully start with support for IBM(R) Secure Execution for Linux issue the following kernel message: prot_virt: Reserving <amount>MB as ultravisor base storage. To verify that the KVM host now supports IBM(R) Secure Execution, run the following command: # cat /sys/firmware/uv/prot_virt_host Example output 1 The value of this attribute is 1 for Linux instances that detect their environment as consistent with that of a secure host. For other instances, the value is 0. Add your host keys to the KVM guest via Ignition. During the first boot, RHCOS looks for your host keys to re-encrypt itself with them. RHCOS searches for files starting with ibm-z-hostkey- in the /etc/se-hostkeys directory. All host keys, for each machine the cluster is running on, must be loaded into the directory by the administrator. After first boot, you cannot run the VM on any other machines. Note You need to prepare your Ignition file on a safe system. For example, another IBM(R) Secure Execution guest. For example: { "ignition": { "version": "3.0.0" }, "storage": { "files": [ { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 }, { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 } ] } } Note You can add as many host keys as required if you want your node to be able to run on multiple IBM Z(R) machines. To generate the Base64 encoded string, run the following command: base64 <your-hostkey>.crt Compared to guests not running IBM(R) Secure Execution, the first boot of the machine is longer because the entire image is encrypted with a randomly generated LUKS passphrase before the Ignition phase. Add Ignition protection To protect the secrets that are stored in the Ignition config file from being read or even modified, you must encrypt the Ignition config file. Note To achieve the desired security, Ignition logging and local login are disabled by default when running IBM(R) Secure Execution. Fetch the public GPG key for the secex-qemu.qcow2 image and encrypt the Ignition config with the key by running the following command: gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign Follow the fast-track installation of RHCOS to install nodes by using the IBM(R) Secure Execution QCOW image. Note Before you start the VM, replace serial=ignition with serial=ignition_crypted , and add the launchSecurity parameter. Verification When you have completed the fast-track installation of RHCOS and Ignition runs at the first boot, verify if decryption is successful. If the decryption is successful, you can expect an output similar to the following example: Example output [ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup... [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.
If the decryption fails, you can expect an output similar to the following example: Example output Starting coreos-ignition-s...reOS Ignition User Config Setup... [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key Additional resources Introducing IBM(R) Secure Execution for Linux Linux as an IBM(R) Secure Execution host or guest Setting up IBM(R) Secure Execution on IBM Z 2.6.7.2. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ ignition.firstboot ignition.platform.id=metal \ coreos.inst.ignition_url=http://<http_server>/master.ign \ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 2 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ zfcp.allow_lun_scan=0 1 Specify the location of the Ignition config file. 
Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. 2 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 2.6.7.3. Fast-track installation by using a prepackaged QCOW2 disk image Complete the following steps to create the machines in a fast-track installation of Red Hat Enterprise Linux CoreOS (RHCOS), importing a prepackaged Red Hat Enterprise Linux CoreOS (RHCOS) QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. A DHCP server that provides IP addresses. Procedure Obtain the RHEL QEMU copy-on-write (QCOW2) disk image file from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. Download the QCOW2 disk image and Ignition files to a common directory on the RHEL KVM host. For example: /var/lib/libvirt/images Note The Ignition files are generated by the OpenShift Container Platform installer. Create a new disk image with the QCOW2 disk image backing file for each KVM guest node. USD qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size} Create the new KVM guest nodes using the Ignition file and the new disk image. USD virt-install --noautoconsole \ --connect qemu:///system \ --name <vm_name> \ --memory <memory_mb> \ --vcpus <vcpus> \ --disk <disk> \ --launchSecurity type="s390-pv" \ 1 --import \ --network network=<virt_network_parm>,mac=<mac_address> \ --disk path=<ign_file>,format=raw,readonly=on,serial=ignition,startup_policy=optional 2 1 If IBM(R) Secure Execution is enabled, add the launchSecurity type="s390-pv" parameter. 2 If IBM(R) Secure Execution is enabled, replace serial=ignition with serial=ignition_crypted . 2.6.7.4. Full installation on a new QCOW2 disk image Complete the following steps to create the machines in a full installation on a new QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. An HTTP or HTTPS server is set up. Procedure Obtain the RHEL kernel, initramfs, and rootfs files from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. The file names contain the OpenShift Container Platform version number. 
They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Move the downloaded RHEL live kernel, initramfs, and rootfs as well as the Ignition files to an HTTP or HTTPS server before you launch virt-install . Note The Ignition files are generated by the OpenShift Container Platform installer. Create the new KVM guest nodes using the RHEL kernel, initramfs, and Ignition files, the new disk image, and adjusted parm line arguments. USD virt-install \ --connect qemu:///system \ --name <vm_name> \ --memory <memory_mb> \ --vcpus <vcpus> \ --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \ / 1 --disk <vm_name>.qcow2,size=<image_size>,cache=none,io=native \ --network network=<virt_network_parm> \ --boot hd \ --extra-args "rd.neednet=1" \ --extra-args "coreos.inst.install_dev=/dev/<block_device>" \ --extra-args "coreos.inst.ignition_url=http://<http_server>/bootstrap.ign" \ 2 --extra-args "coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img" \ 3 --extra-args "ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>" \ --noautoconsole \ --wait 1 For the --location parameter, specify the location of the kernel/initrd on the HTTP or HTTPS server. 2 Specify the location of the Ignition config file. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 2.6.7.5. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.6.7.5.1. Networking options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking on your RHCOS nodes for ISO installations. The examples describe how to use the ip= and nameserver= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= and nameserver= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . 
No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 2.6.8. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. 
The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.6.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.6.10. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
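While you review and approve the CSRs in the following steps, you can optionally keep a second terminal open to watch nodes and certificate requests together. This is a convenience sketch that reuses the watch pattern shown later in this document for cluster Operators and refreshes every five seconds:
USD watch -n5 oc get nodes,csr
New compute nodes appear in this output as their client and server CSRs are approved.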
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 2.6.11. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Configure the Operators that are not available. 2.6.11.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 2.6.11.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. 
Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.6.11.2.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 2.6.11.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.6.12. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. Additional resources How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH 2.6.13. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster 2.7. Installing a cluster in an LPAR on IBM Z and IBM LinuxONE In OpenShift Container Platform version 4.18, you can install a cluster in a logical partition (LPAR) on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. 2.7.1. Prerequisites You have completed the tasks in Preparing to install a cluster on IBM Z using user-provisioned infrastructure . You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.7.2. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure.
After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.7.3. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. 
If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 2.7.3.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. 
This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.7.3.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. 
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.7.3.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. 
This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.7.4. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.7.4.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.54. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . 
spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.55. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.56. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. 
Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 2.57. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 2.58. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 2.59. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.60. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. 
For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . Note The default value of Restricted sets the IP forwarding to drop. ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 2.61. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Important For OpenShift Container Platform 4.17 and later versions, clusters use 169.254.0.0/17 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 2.62. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Important For OpenShift Container Platform 4.17 and later versions, clusters use fd69::/112 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 2.63. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full 2.7.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. 
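If you want to apply any of the ovnKubernetesConfig settings described in the preceding tables rather than accept the defaults, one common approach is to wrap them in a Cluster Network Operator manifest that you add after you generate the Kubernetes manifests in the following procedure. The following sketch reuses the values from the preceding IPsec example; the file name cluster-network-03-config.yml is a conventional choice rather than a requirement.
Example Cluster Network Operator manifest, <installation_directory>/manifests/cluster-network-03-config.yml (illustrative)
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 1400
      genevePort: 6081
      ipsecConfig:
        mode: Full
Add a file like this after you run the create manifests step and before you create the Ignition config files, so that the settings are picked up when the manifests are wrapped into the Ignition configuration files.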
Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.7.6. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. 
The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 For installations on DASD-type disks, replace with device: /dev/disk/by-label/root . 3 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 ignition.firstboot ignition.platform.id=metal \ coreos.inst.ignition_url=http://<http_server>/master.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ 4 zfcp.allow_lun_scan=0 1 Specify the block device type. For installations on DASD-type disks, specify /dev/dasda . For installations on FCP-type disks, specify /dev/sda . For installations on NVMe-type disks, specify /dev/nvme0n1 . 2 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 For installations on DASD-type disks, replace with rd.dasd=0.0.xxxx to specify the DASD device. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 2.7.7. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) in an LPAR. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. 
If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS guest machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. If you want to enable secure boot, you have obtained the appropriate Red Hat Product Signing Key and read Secure boot on IBM Z and IBM LinuxONE in IBM documentation. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. Optional: To enable secure boot, add coreos.inst.secure_ipl For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. Example parameter file, bootstrap-0.parm , for the bootstrap machine: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 coreos.inst.secure_ipl \ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.dasd=0.0.3490 \ zfcp.allow_lun_scan=0 1 Specify the block device type. For installations on DASD-type disks, specify /dev/dasda . For installations on FCP-type disks, specify /dev/sda . For installations on NVMe-type disks, specify /dev/nvme0n1 . 2 Specify the location of the Ignition config file. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 Optional: To enable secure boot, add coreos.inst.secure_ipl . 
Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . The following is an example parameter file worker-1.parm for a compute node with multipathing: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 \ zfcp.allow_lun_scan=0 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to the LPAR, for example with FTP. For details about how to transfer the files with FTP and boot, see Booting the installation on IBM Z(R) to install RHEL in an LPAR . Boot the machine Repeat this procedure for the other machines in the cluster. 2.7.7.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.7.7.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. 
For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. 
To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set the fail_over_mac=1 option in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. Configuring VLANs on bonded interfaces Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter. Use the following example to configure the bonded interface with a VLAN and to use DHCP: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 2.7.8. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster.
You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.7.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.7.10. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. 
If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 2.7.11. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Configure the Operators that are not available. 2.7.11.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.7.11.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. 
Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed . 2.7.11.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.7.12. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Alternatively, the following command notifies you when all of the cluster components are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Verification If you have enabled secure boot during the OpenShift Container Platform bootstrap process, the following verification steps are required: Debug the node by running the following command: USD oc debug node/<node_name> chroot /host Confirm that secure boot is enabled by running the following command: USD cat /sys/firmware/ipl/secure Example output 1 1 1 The value is 1 if secure boot is enabled and 0 if secure boot is not enabled. List the re-IPL configuration by running the following command: # lsreipl Example output for an FCP disk Re-IPL type: fcp WWPN: 0x500507630400d1e3 LUN: 0x4001400e00000000 Device: 0.0.810e bootprog: 0 br_lba: 0 Loadparm: "" Bootparms: "" clear: 0 Example output for a DASD disk for DASD output: Re-IPL type: ccw Device: 0.0.525d Loadparm: "" clear: 0 Shut down the node by running the following command: sudo shutdown -h Initiate a boot from LPAR from the Hardware Management Console (HMC). See Initiating a secure boot from an LPAR in IBM documentation. When the node is back, check the secure boot status again. 2.7.13. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH 2.7.14. Next steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting . 2.8. Installing a cluster in an LPAR on IBM Z and IBM LinuxONE in a disconnected environment In OpenShift Container Platform version 4.18, you can install a cluster in a logical partition (LPAR) on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a disconnected environment. Note While this document refers to only IBM Z(R), all information in it also applies to IBM(R) LinuxONE. 2.8.1. Prerequisites You have completed the tasks in Preparing to install a cluster on IBM Z using user-provisioned infrastructure . You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Before you begin the installation process, you must move or remove any existing installation files. This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are done from a machine with access to the installation media. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.8.2. About installations in restricted networks In OpenShift Container Platform 4.18, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media.
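For orientation only, mirroring the release content uses the oc adm release mirror command that is referenced later in this chapter. The following is a hedged sketch rather than the full mirroring procedure; the pull secret path, release version, registry host name, and repository path are placeholders that you must replace with values for your environment: USD oc adm release mirror -a <pull_secret_file> --from=quay.io/openshift-release-dev/ocp-release:<release_version>-s390x --to=<local_registry>/<local_repository> --to-release-image=<local_registry>/<local_repository>:<release_version>-s390x See the mirroring documentation referenced in the prerequisites for the complete procedure and for how to record the resulting imageContentSources data.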
You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 2.8.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 2.8.3. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. 
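For example, you might run forward lookups similar to the following; this is a hedged sketch in which the nameserver IP address, cluster name ( ocp4 ), and base domain ( example.com ) are placeholders for your environment: USD dig +noall +answer @<nameserver_ip> api.ocp4.example.com USD dig +noall +answer @<nameserver_ip> test.apps.ocp4.example.com USD dig +noall +answer @<nameserver_ip> compute-0.ocp4.example.com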
Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.8.4. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 2.8.4.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. 
All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. 
This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 18 Provide the imageContentSources section according to the output of the command that you used to mirror the repository. Important When using the oc adm release mirror command, use the output from the imageContentSources section. When using oc mirror command, use the repositoryDigestMirrors section of the ImageContentSourcePolicy file that results from running the command. ImageContentSourcePolicy is deprecated. For more information see Configuring image registry repository mirroring . 2.8.4.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.8.4.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. 
Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.8.5. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.8.5.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.64. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. 
If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.65. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.66. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 2.67. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . 
internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 2.68. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 2.69. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.70. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. 
For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . Note The default value of Restricted sets the IP forwarding to drop. ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 2.71. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Important For OpenShift Container Platform 4.17 and later versions, clusters use 169.254.0.0/17 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 2.72. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Important For OpenShift Container Platform 4.17 and later versions, clusters use fd69::/112 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 2.73. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full 2.8.6. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 
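If you want to confirm that your Ignition config files are still within the 12-hour window before you use them, you can check their timestamps. The following is a minimal sketch that assumes a Linux coreutils stat command and that the files have already been generated in <installation_directory>: USD stat -c '%y %n' <installation_directory>/*.ign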
Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.8.7. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 For installations on DASD-type disks, replace with device: /dev/disk/by-label/root . 3 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 ignition.firstboot ignition.platform.id=metal \ coreos.inst.ignition_url=http://<http_server>/master.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ 4 zfcp.allow_lun_scan=0 1 Specify the block device type. For installations on DASD-type disks, specify /dev/dasda . For installations on FCP-type disks, specify /dev/sda . For installations on NVMe-type disks, specify /dev/nvme0n1 . 2 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 For installations on DASD-type disks, replace with rd.dasd=0.0.xxxx to specify the DASD device. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 2.8.8. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) in an LPAR. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS guest machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. If you want to enable secure boot, you have obtained the appropriate Red Hat Product Signing Key and read Secure boot on IBM Z and IBM LinuxONE in IBM documentation. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. 
You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. Optional: To enable secure boot, add coreos.inst.secure_ipl For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. Example parameter file, bootstrap-0.parm , for the bootstrap machine: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 coreos.inst.secure_ipl \ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.dasd=0.0.3490 \ zfcp.allow_lun_scan=0 1 Specify the block device type. For installations on DASD-type disks, specify /dev/dasda . For installations on FCP-type disks, specify /dev/sda . For installations on NVMe-type disks, specify /dev/nvme0n1 . 2 Specify the location of the Ignition config file. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 Optional: To enable secure boot, add coreos.inst.secure_ipl . Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. 
Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . The following is an example parameter file worker-1.parm for a compute node with multipathing: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 \ zfcp.allow_lun_scan=0 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to the LPAR, for example with FTP. For details about how to transfer the files with FTP and boot, see Booting the installation on IBM Z(R) to install RHEL in an LPAR . Boot the machine Repeat this procedure for the other machines in the cluster. 2.8.8.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.8.8.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. 
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. 
Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set the fail_over_mac=1 option in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. Configuring VLANs on bonded interfaces Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter. To configure the bonded interface to use a VLAN and DHCP, for example: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 2.8.9. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.8.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.8.11. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
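While machines are joining the cluster, it can be convenient to keep watching for new pending requests instead of rerunning oc get csr manually. The following is a minimal, hedged sketch that combines standard shell tools with the oc get csr command shown in this procedure; it is not a substitute for reviewing each request before you approve it: USD watch -n30 'oc get csr | grep -w Pending'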
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 2.8.12. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
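The procedure below polls the cluster Operators with watch. As an alternative sketch only, assuming that your oc client supports oc wait with the --all flag for ClusterOperator resources, you can block until every Operator reports Available; note that this checks only the Available condition, not Progressing or Degraded:

oc wait clusteroperator --all --for=condition=Available=True --timeout=30m   # the 30m timeout is an assumption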
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Configure the Operators that are not available. 2.8.12.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 2.8.12.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.8.12.2.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. 
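If you prefer to provision the registry claim yourself instead of relying on the automatic creation described later in this section, the following is a minimal sketch of such a claim. The claim name and namespace match the image-registry-storage PVC referenced below, and the access mode and capacity follow the requirements in the Important note that follows; the storage class name is an assumption and must be replaced with one that your provisioned storage actually provides:

cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-registry-storage          # name that the Image Registry Operator binds to
  namespace: openshift-image-registry
spec:
  accessModes:
    - ReadWriteMany                      # required for two or more registry replicas
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-cephfs   # assumption: substitute your storage class
EOF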
Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed . 2.8.12.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.8.13. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
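Before starting the procedure below, you can quickly confirm that the image registry configuration from the previous section took effect. This is an illustrative check only; the field paths follow the examples shown earlier:

oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.managementState}{"\n"}{.spec.storage}{"\n"}'   # expect Managed and your chosen storage settings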
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods.
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. Verification If you have enabled secure boot during the OpenShift Container Platform bootstrap process, the following verification steps are required: Debug the node by running the following command: USD oc debug node/<node_name> chroot /host Confirm that secure boot is enabled by running the following command: USD cat /sys/firmware/ipl/secure Example output 1 1 1 The value is 1 if secure boot is enabled and 0 if secure boot is not enabled. List the re-IPL configuration by running the following command: # lsreipl Example output for an FCP disk Re-IPL type: fcp WWPN: 0x500507630400d1e3 LUN: 0x4001400e00000000 Device: 0.0.810e bootprog: 0 br_lba: 0 Loadparm: "" Bootparms: "" clear: 0 Example output for a DASD disk Re-IPL type: ccw Device: 0.0.525d Loadparm: "" clear: 0 Shut down the node by running the following command: sudo shutdown -h Initiate a boot from LPAR from the Hardware Management Console (HMC). See Initiating a secure boot from an LPAR in IBM documentation. When the node is back up, check the secure boot status again. Additional resources How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH 2.8.14. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0",
"ipl c",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc debug node/<node_name> chroot /host",
"cat /sys/firmware/ipl/secure",
"1 1",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0",
"ipl c",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc debug node/<node_name> chroot /host",
"cat /sys/firmware/ipl/secure",
"1 1",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"prot_virt: Reserving <amount>MB as ultravisor base storage.",
"cat /sys/firmware/uv/prot_virt_host",
"1",
"{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```",
"base64 <your-hostkey>.crt",
"gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign",
"[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.",
"Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key",
"variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 zfcp.allow_lun_scan=0",
"qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}",
"virt-install --noautoconsole --connect qemu:///system --name <vm_name> --memory <memory_mb> --vcpus <vcpus> --disk <disk> --launchSecurity type=\"s390-pv\" \\ 1 --import --network network=<virt_network_parm>,mac=<mac_address> --disk path=<ign_file>,format=raw,readonly=on,serial=ignition,startup_policy=optional 2",
"virt-install --connect qemu:///system --name <vm_name> --memory <memory_mb> --vcpus <vcpus> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ / 1 --disk <vm_name>.qcow2,size=<image_size>,cache=none,io=native --network network=<virt_network_parm> --boot hd --extra-args \"rd.neednet=1\" --extra-args \"coreos.inst.install_dev=/dev/<block_device>\" --extra-args \"coreos.inst.ignition_url=http://<http_server>/bootstrap.ign\" \\ 2 --extra-args \"coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img\" \\ 3 --extra-args \"ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>\" --noautoconsole --wait",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"prot_virt: Reserving <amount>MB as ultravisor base storage.",
"cat /sys/firmware/uv/prot_virt_host",
"1",
"{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```",
"base64 <your-hostkey>.crt",
"gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign",
"[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.",
"Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key",
"variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 zfcp.allow_lun_scan=0",
"qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}",
"virt-install --noautoconsole --connect qemu:///system --name <vm_name> --memory <memory_mb> --vcpus <vcpus> --disk <disk> --launchSecurity type=\"s390-pv\" \\ 1 --import --network network=<virt_network_parm>,mac=<mac_address> --disk path=<ign_file>,format=raw,readonly=on,serial=ignition,startup_policy=optional 2",
"virt-install --connect qemu:///system --name <vm_name> --memory <memory_mb> --vcpus <vcpus> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ / 1 --disk <vm_name>.qcow2,size=<image_size>,cache=none,io=native --network network=<virt_network_parm> --boot hd --extra-args \"rd.neednet=1\" --extra-args \"coreos.inst.install_dev=/dev/<block_device>\" --extra-args \"coreos.inst.ignition_url=http://<http_server>/bootstrap.ign\" \\ 2 --extra-args \"coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img\" \\ 3 --extra-args \"ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>\" --noautoconsole --wait",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc debug node/<node_name> chroot /host",
"cat /sys/firmware/ipl/secure",
"1 1",
"lsreipl",
"Re-IPL type: fcp WWPN: 0x500507630400d1e3 LUN: 0x4001400e00000000 Device: 0.0.810e bootprog: 0 br_lba: 0 Loadparm: \"\" Bootparms: \"\" clear: 0",
"for DASD output: Re-IPL type: ccw Device: 0.0.525d Loadparm: \"\" clear: 0",
"sudo shutdown -h",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.18.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc debug node/<node_name> chroot /host",
"cat /sys/firmware/ipl/secure",
"1 1",
"lsreipl",
"Re-IPL type: fcp WWPN: 0x500507630400d1e3 LUN: 0x4001400e00000000 Device: 0.0.810e bootprog: 0 br_lba: 0 Loadparm: \"\" Bootparms: \"\" clear: 0",
"for DASD output: Re-IPL type: ccw Device: 0.0.525d Loadparm: \"\" clear: 0",
"sudo shutdown -h"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_z_and_ibm_linuxone/user-provisioned-infrastructure |
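The commands list above includes the oc get csr and oc adm certificate approve steps, which have to be repeated until all node client and serving CSRs are approved. The following is a minimal shell sketch of one way to automate that repetition; the loop bound and the 30-second polling interval are illustrative assumptions, not values taken from the source document.
# Sketch only: repeat the CSR approval commands listed above until no CSRs remain pending.
for i in $(seq 1 30); do
    pending=$(oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}')
    [ -z "$pending" ] && { echo "no pending CSRs"; break; }
    echo "$pending" | xargs --no-run-if-empty oc adm certificate approve
    sleep 30    # assumed polling interval
done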
Chapter 99. KafkaTopicSpec schema reference | Chapter 99. KafkaTopicSpec schema reference Used in: KafkaTopic. Properties (name, type, description): partitions (integer): The number of partitions the topic should have. This cannot be decreased after topic creation. It can be increased after topic creation, but it is important to understand the consequences that this has, especially for topics with semantic partitioning. When absent, this defaults to the broker configuration for num.partitions. replicas (integer): The number of replicas the topic should have. When absent, this defaults to the broker configuration for default.replication.factor. config (map): The topic configuration. topicName (string): The name of the topic. When absent, this defaults to the metadata.name of the topic. It is recommended not to set this unless the topic name is not a valid OpenShift resource name. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkatopicspec-reference
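A short example can make the schema above concrete. The manifest below is a hedged sketch of a KafkaTopic that sets the properties described in this reference; the apiVersion, the namespace, the strimzi.io/cluster label value, and the retention.ms setting are assumptions for illustration and are not part of the schema reference itself.
# Illustrative sketch only; apiVersion, namespace, and the cluster label are assumed values.
cat <<'EOF' | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3        # cannot be decreased after creation
  replicas: 3          # defaults to default.replication.factor when absent
  config:
    retention.ms: 604800000
EOF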
Chapter 7. Configuring the system and running tests by using RHCert CLI Tool | Chapter 7. Configuring the system and running tests by using RHCert CLI Tool To run the certification tests by using the CLI, you need to first upload the test plan to the host under test (HUT). After running the tests, download the results and review them. This chapter contains the following topics: Section 7.1, "Using the test plan to prepare the host under test for testing" Section 7.2, "Running the certification tests using CLI" Section 7.3, "Submitting the test results file" 7.1. Using the test plan to prepare the host under test for testing Running the provision command performs a number of operations, such as installing the required packages on your system based on the certification type, and creating a final test plan to run, which is a list of common tests taken from both the test plan provided by Red Hat and tests generated upon discovering the system requirements. For instance, required hardware packages will be installed if the test plan is designed for certifying a hardware product. Procedure Run the provision command in either of the following ways. In both cases, the test plan is automatically downloaded to your system. If you have already downloaded the test plan: Replace <path_to_test_plan_document> with the test plan file saved on your system. Follow the on-screen instructions. If you have not downloaded the test plan: Follow the on-screen instructions and enter your Certification ID when prompted. 7.2. Running the certification tests using CLI Procedure Run the following command: When prompted, choose whether to run each test by typing yes or no . You can also run particular tests from the list by typing select . Note After a test reboot, rhcert runs in the background to verify the image. Use tail -f /var/log/rhcert/RedHatCertDaemon.log to see the current progress and status of the verification. 7.3. Submitting the test results file Procedure Log in to authenticate your device. Note Logging in is mandatory to submit the test results file. Open the generated URL in a new browser window or tab. Enter the login and password and click Log in . Click Grant access . The Device log in successful message is displayed. Return to the terminal and enter yes to the Please confirm once you grant access prompt. Submit the results file. When prompted, enter your Certification ID. | [
"rhcert-provision _<path_to_test_plan_document>_",
"rhcert-provision",
"rhcert-run",
"rhcert-cli login",
"rhcert-submit"
]
| https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_certified_cloud_and_service_provider_certification_for_red_hat_enterprise_linux_for_sap_images_workflow_guide/assembly_cloud-wf-configuring-system-and-running-tests-by-using-cli_cloud-wf-configure-systems-using-cockpit |
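Read together, the commands in this row form one linear workflow. The sketch below simply chains them in the order the chapter describes; the test plan path is a made-up placeholder, and whether you pass a path at all depends on which of the two provisioning options above you follow.
# Hedged end-to-end sketch of the CLI workflow described above (placeholder path).
rhcert-provision /root/<test_plan_document>   # or plain "rhcert-provision" and enter the Certification ID
rhcert-run                                    # answer yes, no, or select for each listed test
rhcert-cli login                              # open the printed URL, log in, and grant access
rhcert-submit                                 # enter the Certification ID when prompted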
Chapter 3. Filtering systems based on severity | Chapter 3. Filtering systems based on severity You can filter all your systems in a report based on the severity of the rules that failed. This enables you to triage your systems and remediate the most critical issues first. Prerequisites Login access to the Red Hat Hybrid Cloud Console. Procedure Navigate to Security > Compliance > Reports . Choose the Policy rule you want to view. Click the Name filter above your list of systems. Choose Failed rule severity . Click on Filter by failed rule to the right of Failed rule severity , and place a checkmark in the box to the left of High . The systems that remain are those deemed to be critical issues, and should be the first ones remediated. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_compliance_service_reports/assembly-compl-filtering-based-on-severity |
12.3. Restrictions and Limitations | 12.3. Restrictions and Limitations It is strongly recommended to run Red Hat Enterprise Linux 7.2 or later in the L0 host and the L1 guests. L2 guests can contain any guest system supported by Red Hat. It is not supported to migrate L1 or L2 guests. Use of L2 guests as hypervisors and creating L3 guests is not supported. Not all features available on the host are available to be utilized by the L1 hypervisor. For instance, IOMMU/VT-d or APICv cannot be used by the L1 hypervisor. To use nested virtualization, the host CPU must have the necessary feature flags. To determine if the L0 and L1 hypervisors are set up correctly, use the cat /proc/cpuinfo command on both L0 and L1, and make sure that the following flags are listed for the respective CPUs on both hypervisors: For Intel - vmx (Hardware Virtualization) and ept (Extended Page Tables) For AMD - svm (equivalent to vmx) and npt (equivalent to ept) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-nested_virt_restrictions |
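Since the section above asks you to inspect /proc/cpuinfo on both the L0 and the L1 hypervisor, a quick grep such as the following sketch, run on each level, confirms whether the required flags are present; empty output means the flag is missing.
# Run on both L0 and L1; expect vmx/ept on Intel or svm/npt on AMD.
grep -m1 -o -w -E 'vmx|svm' /proc/cpuinfo    # hardware virtualization flag
grep -m1 -o -w -E 'ept|npt' /proc/cpuinfo    # extended/nested page table flag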
Part II. Integrating a Linux Domain with an Active Directory Domain: Cross-forest Trust | Part II. Integrating a Linux Domain with an Active Directory Domain: Cross-forest Trust This part provides recommended practices for integrating a Linux Domain with an Active Directory domain by creating, configuring, and managing a cross-forest trust environment. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/trust |
Chapter 31. Adding a Storage Device or Path | Chapter 31. Adding a Storage Device or Path When adding a device, be aware that the path-based device name ( /dev/sd name, major:minor number, and /dev/disk/by-path name, for example) the system assigns to the new device may have been previously in use by a device that has since been removed. As such, ensure that all old references to the path-based device name have been removed. Otherwise, the new device may be mistaken for the old device. Procedure 31.1. Add a storage device or path The first step in adding a storage device or path is to physically enable access to the new storage device, or a new path to an existing device. This is done using vendor-specific commands at the Fibre Channel or iSCSI storage server. When doing so, note the LUN value for the new storage that will be presented to your host. If the storage server is Fibre Channel, also take note of the World Wide Node Name (WWNN) of the storage server, and determine whether there is a single WWNN for all ports on the storage server. If this is not the case, note the World Wide Port Name (WWPN) for each port that will be used to access the new LUN. Next, make the operating system aware of the new storage device, or path to an existing device. The recommended command to use is: In the command, h is the HBA number, c is the channel on the HBA, t is the SCSI target ID, and l is the LUN. Note The older form of this command, echo "scsi add-single-device 0 0 0 0" > /proc/scsi/scsi , is deprecated. In some Fibre Channel hardware, a newly created LUN on the RAID array may not be visible to the operating system until a Loop Initialization Protocol (LIP) operation is performed. Refer to Chapter 34, Scanning Storage Interconnects for instructions on how to do this. Important It will be necessary to stop I/O while this operation is executed if an LIP is required. If a new LUN has been added on the RAID array but is still not being configured by the operating system, confirm the list of LUNs being exported by the array using the sg_luns command, part of the sg3_utils package. This will issue the SCSI REPORT LUNS command to the RAID array and return a list of LUNs that are present. For Fibre Channel storage servers that implement a single WWNN for all ports, you can determine the correct h , c , and t values (i.e. HBA number, HBA channel, and SCSI target ID) by searching for the WWNN in sysfs . Example 31.1. Determine correct h , c , and t values For example, if the WWNN of the storage server is 0x5006016090203181 , use: This should display output similar to the following: This indicates there are four Fibre Channel routes to this target (two single-channel HBAs, each leading to two storage ports). Assuming a LUN value of 56 , the following command will configure the first path: This must be done for each path to the new device. For Fibre Channel storage servers that do not implement a single WWNN for all ports, you can determine the correct HBA number, HBA channel, and SCSI target ID by searching for each of the WWPNs in sysfs . Another way to determine the HBA number, HBA channel, and SCSI target ID is to refer to another device that is already configured on the same path as the new device. This can be done with various commands, such as lsscsi , scsi_id , multipath -l , and ls -l /dev/disk/by-* . This information, plus the LUN number of the new device, can be used as shown above to probe and configure that path to the new device.
After adding all the SCSI paths to the device, execute the multipath command, and check to see that the device has been properly configured. At this point, the device can be added to md , LVM, mkfs , or mount , for example. If the steps above are followed, then a device can safely be added to a running system. It is not necessary to stop I/O to other devices while this is done. Other procedures involving a rescan (or a reset) of the SCSI bus, which cause the operating system to update its state to reflect the current device connectivity, are not recommended while storage I/O is in progress. | [
"echo \" c t l \" > /sys/class/scsi_host/host h /scan",
"grep 5006016090203181 /sys/class/fc_transport/*/node_name",
"/sys/class/fc_transport/target5:0:2/node_name:0x5006016090203181 /sys/class/fc_transport/target5:0:3/node_name:0x5006016090203181 /sys/class/fc_transport/target6:0:2/node_name:0x5006016090203181 /sys/class/fc_transport/target6:0:3/node_name:0x5006016090203181",
"echo \"0 2 56\" > /sys/class/scsi_host/host5/scan"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/adding_storage-device-or-path |
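Putting the pieces of Procedure 31.1 together for the Fibre Channel example in the text, the sequence looks roughly like the sketch below; the WWNN, the host/channel/target numbers, and LUN 56 are the example values from the chapter, and the final multipath check is the verification step the procedure calls for.
# Locate the SCSI host(s) that see the storage server's WWNN (example value from the chapter).
grep 5006016090203181 /sys/class/fc_transport/*/node_name
# Probe LUN 56 on one returned path (host5, channel 0, target 2 in the example); repeat per path.
echo "0 2 56" > /sys/class/scsi_host/host5/scan
# Verify that device-mapper multipath has picked up the new device on all paths.
multipath -l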
Creating and managing instances | Creating and managing instances Red Hat OpenStack Services on OpenShift 18.0 Creating and managing instances using the CLI OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/creating_and_managing_instances/index |
Chapter 28. Installation and Booting | Chapter 28. Installation and Booting Automatic partitioning now works when installing on a single FBA DASD on IBM z Series Previously, when installing Red Hat Enterprise Linux 7 on IBM z Series systems with a single Fixed Block Architecture (FBA) Direct Access Storage Device (DASD) with the cms disk layout as the target, automatic partitioning failed because the installer attempted to create multiple partitions on the device, which is not supported on cms-formatted FBA DASDs. This caused the installation to finish with a corrupted disk. With this update, the installer first creates a msdos partition table on the target DASD, which allows up to three partitions on the device. As long as the installer only creates three or fewer partitions, the installation will succeed. Note that it is recommended to use the autopart --nohome Kickstart option to ensure that the installer does not create a separate /home partition. (BZ#1214407) Activation of bridge configured in Kickstart no longer fails when the Kickstart file is fetched from the disk Previously, if the bridge device was configured in a Kickstart file and the Kickstart file was fetched from the disk, the lack of network connection meant that the bridge was not created and the installation failed at an early stage. With this update, the bridge Kickstart configuration is passed to the dracut tool at an early stage. As a result, dracut can create and activate the bridge device even when no network is required at the early stage of installation. (BZ#1373360) Anaconda now correctly allows creating users without passwords Previously, it was not possible to deselect the Require a password to use this account option in the Create User screen during an interactive installation. As a consequence, all user accounts created during the installation required a password. This bug has been fixed, and creating users with no password is now possible. (BZ#1380277) Minimal installation no longer installs open-vm-tools-desktop and dependencies The open-vm-tools-desktop package was previously marked as default in the @platform-vmware package group (Virtualization utilities and drivers for VMWare). This group is automatically installed by Anaconda when it detects that the installation is using a VMWare hypervisor. At the same time, this package has many dependencies, including a large number of X libraries, which are not useful in a minimal installation, and this was causing Anaconda to install a high number of unnecessary packages. The open-vm-tools-desktop package is now optional in the @platform-vmware group, and is therefore not installed by default. The other package in the group, open-vm-tools , remains mandatory and is therefore installed by default. (BZ# 1408694 ) Anaconda no longer generates invalid Kickstart files Previously, if a Kickstart file was used during an installation which defined some LVM logical volumes absolutely (the --size= parameter) and others relatively (the --percent= parameter), the resulting Kickstart file which is saved on the installed system, anaconda-ks.cfg , defined all logical volumes using both of these parameters. These parameters are mutually exclusive, and the generated Kickstart file was therefore invalid. With this update, Anaconda correctly handles usage of relative and absolute sizes, and the resulting post-installation Kickstart files are valid.
(BZ#1317370) Anaconda no longer fails to identify RAID arrays specified by name Previously, when a RAID array was specified by name in the ignoredisk or clearpart command in a Kickstart file, the installation could not proceed because RAID names are not available during initial stages of the installation. This update improves RAID support by ensuring that Anaconda also checks devices in /dev/md/ for a matching name. For example, if the Kickstart file contains the command ignoredisk --only-use=myraid , Anaconda will now also attempt to find an array located at /dev/md/myraid . This allows the installer to locate RAID arrays specified by name at any point during the installation, and enables specifying only RAID array names in Kickstart files. (BZ#1327439) Kickstart no longer accepts passwords that are too short Previously, when using a Kickstart file to install Red Hat Enterprise Linux 7, the Anaconda installer immediately accepted passwords shorter than the minimal length defined by the --minlen Kickstart option, if the password was sufficiently strong (quality value 50 or above by default). This bug has been fixed, and the --minlen option now works even with strong passwords. (BZ#1356975) Initial Setup now correctly opens in a graphical interface over SSH on IBM z Systems Previously, when connecting to an IBM z Systems machine using SSH, the text version of the Initial Setup interface opened even if X forwarding was enabled. This bug has been fixed, and the graphical version of Initial Setup now opens correctly when using X forwarding. (BZ#1378082) Extra time is no longer needed for installation when geolocation services are enabled When installing Red Hat Enterprise Linux 7.3 with limited or no internet access, the installer previously paused for several minutes in the Installation Summary screen with the Security Policy section being Not ready . This was caused by the geolocation service being unable to determine the system's location. Consequently, the installation could not proceed before the service timed out. With this update, the geolocation service correctly times out if it can not find the location within 3 seconds, and the installation can proceed almost immediately even with limited or no network connection. (BZ#1380224) The ifup-aliases script now sends gratuitous ARP updates when adding new IP addresses When moving one or more IP aliases from one server to another, associated IP addresses may be unreachable for some time, depending on the Address Resolution Protocol (ARP) time-out value that is configured in the upstream router. This bug has been addressed in the initscripts package, and ifup-aliases now updates other systems on the network significantly faster in this situation. (BZ# 1367554 ) The netconsole utility now launches correctly Previously, if nameserver address lines were not present in the /etc/resolv.conf file, launching netconsole sometimes resulted in an error and netconsole did not start. The initscripts package has been updated, and netconsole now starts correctly in this situation. (BZ# 1278521 ) rc.debug kernel allows easier debugging of initscripts This enhancement introduces the rc.debug option for the kernel command line. Adding the rc.debug option to the kernel command line prior to booting produces a log of all the activity of the initscripts files during the boot and termination processes. The log appears as part of the /var/log/dmesg log file. 
As a result, adding the rc.debug option to the kernel command line enables easier debugging of initscripts if needed. (BZ# 1394191 ) The system no longer fails to terminate with /usr on iSCSI or NFS In previous versions of Red Hat Enterprise Linux 7, the termination of the system sometimes failed and the system remained hung if the /usr folder was mounted over a network (for example, NFS or iSCSI ). This issue has been resolved, and the system should now shut down normally. (BZ# 1369790 , BZ# 1446171 ) rhel-autorelabel no longer corrupts the filesystem In previous versions of Red Hat Enterprise Linux 7, forcing the SELinux autorelabel by creating the /.autorelabel file sometimes partially corrupted the filesystem. This made the system unbootable. A patch has been applied to prevent this behaviour. As a result, applying the autorelabel operation using the touch /.autorelabel command is no longer expected to corrupt the filesystem. (BZ# 1385272 ) The rpmbuild command now correctly processes Perl requires Previously, a bug in rpm caused my $variable = << blocks to be treated as code instead of string constants when building packages using the rpmbuild command. This caused rpm to add unintended dependencies to packages being built in cases where the variable contained the word use followed by another word. With this update, rpm correctly skips these blocks when searching for dependencies, and packages no longer contain unintended dependencies. (BZ#1378307) Installer now correctly recognizes BIOS RAID devices when using ignoredisk in Kickstart Previously, some BIOS RAID devices were not correctly recognized during installation when using a Kickstart file with the ignoredisk --onlyuse=<bios raid name> command. This caused the installation to fail and report a lack of free space because the device could not be used. With this update, Anaconda recognizes BIOS RAID devices reliably when they are specified in a Kickstart file, and installations no longer fail in these circumstances. (BZ#1327463) Single quotes now work for values in the ifcfg-* files Previously, it was only possible to specify values by using double quotes in the ifcfg-* files. Using single quotes did not work. With this update, single quotes work too, for example: (BZ#1428574) rhel-import-state no longer changes access permissions for /dev/shm/ , allowing the system to boot correctly Previously, problems during the boot-up process occurred due to the introduction of a new script in a dracut update. The new script changed the access permissions of the /dev/shm/ directory when the dracut utility placed the directory in /run/initramfs/state/ . With this update, rhel-import-state no longer changes the access permissions for /dev/shm/ , and the system starts correctly. (BZ# 1406254 ) Backward compatibility enabled for Red Hat Enterprise Linux 6 initscripts The initscripts files in Red Hat Enterprise Linux 7 have been patched to enable backward compatibility and to prevent possible regressions when doing an upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7. (BZ# 1392766 ) initscripts now specifies /etc/rwtab and /etc/statetab as configuration files Previously, a reinstallation of the initscripts package replaced the /etc/rwtab and /etc/statetab files. If these files had contained the user's configuration, the reinstallation process overwrote it. The initscripts package has been updated to specify the /etc/rwtab and /etc/statetab files as configuration files.
If these files are modified by the user, performing the reinstallation now creates the *.rpmnew files containing the new configuration in the /etc/ folder. As a result of this update, a reinstallation of the initscripts package leaves the /etc/rwtab and /etc/statetab files intact. (BZ#1434075) The ifup script no longer slows down NetworkManager Previously, the ifup script was very slow when notifying NetworkManager . This particularly affected Red Hat Virtualization (RHV) network startup times. A patch has been applied to initscripts, and the described problem no longer occurs. (BZ# 1408219 ) Gnome Initial Setup can now be disabled by the firstboot --disable command in kickstart With this update, the gnome-initial-setup package has been fixed to respect the firstboot --disable kickstart command. As a result, Gnome Initial Setup can be robustly turned off during a kickstart installation and users are no longer forced to create user account on the first boot under the described circumstances as long as the installation kickstart contains the firstboot --disable command. (BZ# 1226819 ) Setting NM_CONTROLLED now works correctly across all the ifcfg-* files When the NM_CONTROLLED=no parameter was set for an interface in its ifcfg-* file, other interfaces in some cases inherited this configuration. This behaviour prevented the NetworkManager daemon from controlling these interfaces. The issue has now been resolved, and setting the NM_CONTROLLED parameter now works correctly across all the ifcfg-* files. As a result, the user can choose which interface is controlled by NetworkManager , and which is not. (BZ# 1374837 ) The dhclient command no longer incorrectly uses localhost when hostname is not set The dhclient command incorrectly sent localhost to the DHCP server as the host name when the hostname variable was not set. This has been fixed, and dhclient no longer sends an incorrect host name in these situations. (BZ#1398686) The initscripts utility now handles LVM2 correctly Previously, later versions of the initscripts utility made use of a new --ignoreskippedcluster option for the vgchange command during boot. This option was missing in earlier versions of the lvm2 utilities. As a consequence, systems using earlier versions of the Logical Volume Manager device mapper (LVM2) could fail to boot correctly. With this update, the initscripts RPM indicates the version of lvm2 required, and if a sufficient version is installed, systems with LVM2 boot correctly. (BZ#1398683) The service network stop command no longer attempts to stop services which are already stopped Previously, when a tunnel interface was present, the service network stop command incorrectly attempted to stop services which had been stopped already, displaying an error message. This bug has been fixed, and the service network stop command now stops only running services. (BZ#1398679) ifdown on a loopback device now works correctly In versions of Red Hat Enterprise Linux 7, executing the ifdown command on a local loopback device failed to remove the device. A patch has been applied, and the removal of an existing loopback device using ifdown now succeeds. (BZ#1398678) Scripts in initscripts handle static IPv6 address assignment more robustly Previously, scripts in the initscripts package sometimes failed to correctly assign static IPv6 addresses if a Router Advertisement (RA) was received during system initialization. This bug has been fixed, and now the statically assigned address is correctly applied in the described situation. 
(BZ#1398671) Deselecting an add-on option in Software Selection no longer requires a double-click When installing Red Hat Enterprise Linux 7.3, the user had to double-click in order to deselect an add-on checkbox after a Base environment change. The bug occurred in the Software Selection dialogue of the graphical installation. With this update, the system no longer requires double-clicking when deselecting an option after a Base environment change. A single click is sufficient. (BZ# 1404158 ) The target system hostname can be configured via installer boot options in Kickstart installations In Red Hat Enterprise Linux 7.3, the hostname specified via the Anaconda installer boot options during a Kickstart installation was incorrectly not set for the installed system, and the default localhost.localdomain hostname value was used instead. With this update, Anaconda has been fixed to apply the hostname set by the boot option to the target system configuration. As a result, users can now configure the target system hostname via the installer boot options for Kickstart installations as well. (BZ#1441337) Anaconda no longer asks for Installation Source verification after network configuration Previously, during an Anaconda installation from a repository, when the user changed network settings after repository packages had already been selected, the Installation Source required verification. This request was made even when the repository was still reachable after the network change, resulting in an unnecessary step. With this update, the Anaconda installer keeps the original source repository and verifies whether it is still reachable after the Network & Hostname configuration. As a result, the user is only required to reconfigure the Installation Source if the original repository is not reachable. (BZ#1358778) Disks using the OEMDRV label are now correctly ignored during automatic installation The OEMDRV disk label is used on driver update disks during installation. Due to a bug, disks with this label were being used by Anaconda as installation targets during automatic installations, which meant they were being erased and used as part of the installed system storage. This update ensures that Anaconda ignores disks with this label unless they are explicitly selected as installation targets, and the problem no longer occurs. (BZ# 1412022 ) | [
"ONBOOT='yes'"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/bug_fixes_installation_and_booting |
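To make the quoting change above more concrete, the following is a minimal ifcfg sketch that relies only on single-quoted values; the interface name and the addresses are placeholders, not values taken from the release notes:

# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative)
DEVICE='eth0'
BOOTPROTO='none'
ONBOOT='yes'
IPADDR='192.168.1.10'
PREFIX='24'
NM_CONTROLLED='no'

With the updated initscripts, single-quoted values are parsed the same way as double-quoted ones, and NM_CONTROLLED=no applies only to the interface whose file sets it.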
24.3. Viewing CPU Usage | 24.3. Viewing CPU Usage 24.3.1. Using the System Monitor Tool The Resources tab of the System Monitor tool allows you to view the current CPU usage on the system. To start the System Monitor tool, either select Applications System Tools System Monitor from the panel, or type gnome-system-monitor at a shell prompt. Then click the Resources tab to view the system's CPU usage. Figure 24.3. System Monitor - Resources In the CPU History section, the System Monitor tool displays a graphical representation of the CPU usage history and shows the percentage of how much CPU is currently in use. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-sysinfo-cpu |
7.135. netcf | 7.135. netcf 7.135.1. RHBA-2015:1307 - netcf bug fix update Updated netcf packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The netcf packages contain a library for modifying the network configuration of a system. Network configuration is expressed in a platform-independent XML format, which netcf translates into changes to the system's "native" network configuration files. Bug Fixes BZ# 1113978 Previously, when the XML configuration for an interface enabled dynamic host configuration protocol (DHCP) for IPv6, the netcf library erroneously set the variable named "DHCPV6" in the ifcfg configuration file instead of "DHCPV6C". The underlying source code has been patched, and netcf now passes the correct "DHCPV6C" option to ifcfg. BZ# 1116314 Prior to this update, when requested to configure an interface with an IPv4 netmask of 255.255.255.255, the netcf library logged an error as the interface configuration was rejected. This update fixes the netmask for the 32-bit interface prefix, and netcf now configures IPv4 interfaces successfully. BZ# 1208897 Due to a parsing error, the ifcfg files with comments starting anywhere beyond column 1 or multiple variables on a single line caused the netcf library to generate errors when attempting to list host interfaces. The parsing error has been fixed, and any tool using netcf now lists active interfaces as expected. BZ# 1208894 When multiple static IPv6 addresses were specified in an interface configuration, an extra set of quotes appeared in the IPV6ADDR_SECONDARIES entry in the generated configuration file. This update removes extraneous single quotes from IPV6ADDR_SECONDARIES, thus fixing this bug. BZ# 1165966 Due to a denial of a service flaw in the netcf library, a specially crafted interface name previously caused applications using netcf, such as the libvirt daemon, to terminate unexpectedly. An upstream patch has been applied to fix this bug, and applications using netcf no longer crash in the aforementioned situation. Users of netcf are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-netcf |
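As an illustration of the corrected variable handling, a minimal ifcfg sketch for an IPv6 interface managed through netcf might contain the entries below; the addresses are placeholders:

# DHCPv6 case: netcf now writes DHCPV6C rather than DHCPV6
IPV6INIT=yes
DHCPV6C=yes

# Static case: secondary addresses are written without extra single quotes
IPV6INIT=yes
IPV6ADDR=2001:db8::10/64
IPV6ADDR_SECONDARIES="2001:db8::11/64 2001:db8::12/64"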
2.9. Package Selection | 2.9. Package Selection Figure 2.14. Package Selection The Package Selection window allows you to choose which package groups to install. There are also options available to resolve and ignore package dependencies automatically. Currently, Kickstart Configurator does not allow you to select individual packages. To install individual packages, modify the %packages section of the kickstart file after you save it. Refer to Section 1.5, "Package Selection" for details. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/RHKSTOOL-Package_Selection |
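Because individual packages can only be chosen by editing the saved file, a short sketch of a hand-edited %packages section follows; the group and package names are examples only, and the groups that are actually available depend on the installation media:

%packages
@ GNOME Desktop Environment
@ Development Tools
vim-enhanced
-mc

Lines starting with @ select package groups, bare names add individual packages, and a leading minus sign excludes a package that would otherwise be installed.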
Chapter 26. Configuring KIE Server to send information to ElasticSearch when a transaction is committed | Chapter 26. Configuring KIE Server to send information to ElasticSearch when a transaction is committed You can configure KIE Server to send information to ElasticSearch automatically. In this case, KIE Server writes an ElasticSearch index entry every time a task, process, case, or variable is created, updated, or deleted. The index entry contains information about the modified object. KIE Server writes the index entry when it commits the transaction with the change. You can use this functionality with any business process or case. You do not need to change anything in the process design. This configuration is also available if you run your process service using Spring Boot. KIE Server serializes the process, case, and task information as JSON documents. It uses the following ElasticSearch indexes: processes for process information cases for case information tasks for task information Prerequisites You created a business process or a case. For more information about creating a business process or case, see Developing process services in Red Hat Process Automation Manager . Procedure To enable sending information to ElasticSearch, complete one of the following steps: If you deployed KIE Server on Red Hat JBoss EAP or another application server, complete the following steps: Download the rhpam-7.13.5-maven-repository.zip product deliverable file from the Software Downloads page of the Red Hat Customer Portal. Extract the contents of the file. Copy the maven-repository/org/jbpm/jbpm-event-emitters-elasticsearch/7.67.0.Final-redhat-00024/jbpm-event-emitters-elasticsearch-7.67.0.Final-redhat-00024.jar file into the deployments/kie-server.war/WEB-INF/lib subdirectory of the application server. If you deployed the application using Spring Boot, add the following lines to the <dependencies> list in the pom.xml file of your service: <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-event-emitters-elasticsearch</artifactId> <version>USD{version.org.kie}</version> </dependency> Configure any of the following KIE Server system properties as necessary: org.jbpm.event.emitters.elasticsearch.url : The URL of the ElasticSearch server. The default value is http://localhost:9200 . org.jbpm.event.emitters.elasticsearch.date_format : The timestamp format for the information. The default value is yyyy-MM-dd'T'HH:mm:ss.SSSZ . org.jbpm.event.emitters.elasticsearch.user : The user name for authenticating to the ElasticSearch server. org.jbpm.event.emitters.elasticsearch.password : The password for authenticating the user to the ElasticSearch server. org.jbpm.event.emitters.elasticsearch.ignoreNull : If this property is true , null values are not written into the JSON output for ElasticSearch. 26.1. Customizing data for ElasticSearch You can develop transformer classes to customize the data that Red Hat Process Automation Manager sends to ElasticSearch. Information about processes, cases, tasks, and task operations is available as views . Red Hat Process Automation Manager includes the following view types: CaseInstanceView ProcessInstanceView TaskInstanceView TaskOperationView You can see the definitions of these views in the GitHub repository . Each view has a getCompositeId() method that returns an identifier. This identifier denotes a particular instance of a case, process, task, or task operation. 
Each time a process, case, task, or task operation is created, updated, or deleted, the process engine calls a transformer and supplies the relevant view. The transformer must generate an ESRequest object. In the parameters of the constructor of this object, the transformer must supply the necessary information for the ElasticSearch request, including the index. The definitions of the transformer classes and the ESRequest class are available in the GitHub repository . To create and use custom transformers, complete the following procedure. Procedure Create the Java source code for the following classes: ESInstanceViewTransformer : The transformer class. It provides index() and update() methods. Both of the methods take a view as a parameter and return an ESRequest object. When a process, case, task, or task operation instance is first created, the process engine calls the index() method. For subsequent changes related to the same instance, the process engine calls the update() method. You can create different ESInstanceViewTransformer implementations for different view types. ESInstanceViewTransformerFactory : The transformer factory class. It returns an instance of the ESInstanceViewTransformer class for every view type. In Business Central, enter your project and click the Settings Dependencies tab. Optional: Add any dependencies that your transformer classes require. Click the Assets tab. For each of the class source files, complete the following steps: Click Import Asset . In the Please select a file to upload field, select the location of the Java source file for the custom serializer class. Click Ok to upload the file. For the KIE Server instance that runs the service, set the org.jbpm.event.emitters.elasticsearch.factory system property to the fully qualified class name of your implementation of ESInstanceViewTransformerFactory . | [
"<dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-event-emitters-elasticsearch</artifactId> <version>USD{version.org.kie}</version> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/integration-elasticsearch-proc_integrating-amq-streams |
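As a rough sketch of how these system properties might be supplied to a KIE Server running on Red Hat JBoss EAP, the following line could be appended to the JAVA_OPTS definition in EAP_HOME/bin/standalone.conf; the ElasticSearch URL and the credentials are placeholders:

JAVA_OPTS="$JAVA_OPTS -Dorg.jbpm.event.emitters.elasticsearch.url=http://elasticsearch.example.com:9200 -Dorg.jbpm.event.emitters.elasticsearch.user=kieserver -Dorg.jbpm.event.emitters.elasticsearch.password=changeme -Dorg.jbpm.event.emitters.elasticsearch.ignoreNull=true"

For a Spring Boot service, the same properties can be passed as -D arguments on the java command line that starts the application.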
Chapter 3. Managing IdM certificates using Ansible | Chapter 3. Managing IdM certificates using Ansible You can use the ansible-freeipa ipacert module to request, revoke, and retrieve SSL certificates for Identity Management (IdM) users, hosts and services. You can also restore a certificate that has been put on hold. 3.1. Using Ansible to request SSL certificates for IdM hosts, services and users You can use the ansible-freeipa ipacert module to request SSL certificates for Identity Management (IdM) users, hosts and services. They can then use these certificates to authenticate to IdM. Complete this procedure to request a certificate for an HTTP server from an IdM certificate authority (CA) using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. Your IdM deployment has an integrated CA. Procedure Generate a certificate-signing request (CSR) for your user, host or service. For example, to use the openssl utility to generate a CSR for the HTTP service running on client.idm.example.com, enter: As a result, the CSR is stored in new.csr . Create your Ansible playbook file request-certificate.yml with the following content: Replace the certificate request with the CSR from new.csr . Request the certificate: Additional resources The cert module in ansible-freeipa upstream docs 3.2. Using Ansible to revoke SSL certificates for IdM hosts, services and users You can use the ansible-freeipa ipacert module to revoke SSL certificates used by Identity Management (IdM) users, hosts and services to authenticate to IdM. Complete this procedure to revoke a certificate for an HTTP server using an Ansible playbook. The reason for revoking the certificate is "keyCompromise". Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. You have obtained the serial number of the certificate, for example by entering the openssl x509 -noout -text -in <path_to_certificate> command. In this example, the serial number of the certificate is 123456789. Your IdM deployment has an integrated CA. Procedure Create your Ansible playbook file revoke-certificate.yml with the following content: Revoke the certificate: Additional resources The cert module in ansible-freeipa upstream docs Reason Code in RFC 5280 3.3. Using Ansible to restore SSL certificates for IdM users, hosts, and services You can use the ansible-freeipa ipacert module to restore a revoked SSL certificate previously used by an Identity Management (IdM) user, host or a service to authenticate to IdM. Note You can only restore a certificate that was put on hold. You may have put it on hold because, for example, you were not sure if the private key had been lost. However, now you have recovered the key and as you are certain that no-one has accessed it in the meantime, you want to reinstate the certificate. Complete this procedure to use an Ansible playbook to release a certificate for a service enrolled into IdM from hold. 
This example describes how to release a certificate for an HTTP service from hold. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. Your IdM deployment has an integrated CA. You have obtained the serial number of the certificate, for example by entering the openssl x509 -noout -text -in path/to/certificate command. In this example, the certificate serial number is 123456789 . Procedure Create your Ansible playbook file restore-certificate.yml with the following content: Run the playbook: Additional resources The cert module in ansible-freeipa upstream docs 3.4. Using Ansible to retrieve SSL certificates for IdM users, hosts, and services You can use the ansible-freeipa ipacert module to retrieve an SSL certificate issued for an Identity Management (IdM) user, host or a service, and store it in a file on the managed node. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. You have obtained the serial number of the certificate, for example by entering the openssl x509 -noout -text -in <path_to_certificate> command. In this example, the serial number of the certificate is 123456789, and the file in which you store the retrieved certificate is cert.pem . Procedure Create your Ansible playbook file retrieve-certificate.yml with the following content: Retrieve the certificate: Additional resources The cert module in ansible-freeipa upstream docs | [
"openssl req -new -newkey rsa:2048 -days 365 -nodes -keyout new.key -out new.csr -subj '/CN=client.idm.example.com,O=IDM.EXAMPLE.COM'",
"--- - name: Playbook to request a certificate hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Request a certificate for a web server ipacert: ipaadmin_password: \"{{ ipaadmin_password }}\" state: requested csr: | -----BEGIN CERTIFICATE REQUEST----- MIGYMEwCAQAwGTEXMBUGA1UEAwwOZnJlZWlwYSBydWxlcyEwKjAFBgMrZXADIQBs HlqIr4b/XNK+K8QLJKIzfvuNK0buBhLz3LAzY7QDEqAAMAUGAytlcANBAF4oSCbA 5aIPukCidnZJdr491G4LBE+URecYXsPknwYb+V+ONnf5ycZHyaFv+jkUBFGFeDgU SYaXm/gF8cDYjQI= -----END CERTIFICATE REQUEST----- principal: HTTP/client.idm.example.com register: cert",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/request-certificate.yml",
"--- - name: Playbook to revoke a certificate hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Revoke a certificate for a web server ipacert: ipaadmin_password: \"{{ ipaadmin_password }}\" serial_number: 123456789 revocation_reason: \"keyCompromise\" state: revoked",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/revoke-certificate.yml",
"--- - name: Playbook to restore a certificate hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Restore a certificate for a web service ipacert: ipaadmin_password: \"{{ ipaadmin_password }}\" serial_number: 123456789 state: released",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/restore-certificate.yml",
"--- - name: Playbook to retrieve a certificate and store it locally on the managed node hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Retrieve a certificate and save it to file 'cert.pem' ipacert: ipaadmin_password: \"{{ ipaadmin_password }}\" serial_number: 123456789 certificate_out: cert.pem state: retrieved",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/retrieve-certificate.yml"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_certificates_in_idm/managing-idm-certificates-using-ansible_managing-certificates-in-idm |
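The playbooks and commands above assume an inventory file that defines the ipaserver host group; a minimal ~/MyPlaybooks/hosts sketch, with a placeholder FQDN for the IdM server, might look like this:

[ipaserver]
server.idm.example.com

The ansible-playbook commands then reference this file through their -i option.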
4.4. aide | 4.4. aide 4.4.1. RHBA-2012:0512 - aide bug fix update Updated aide packages that fix one bug are now available for Red Hat Enterprise Linux 6. Advanced Intrusion Detection Environment (AIDE) is a program that creates a database of files on a system, and then uses that database to ensure file integrity and detect system intrusions. Bug Fix BZ# 811936 Previously, the aide utility incorrectly initialized the gcrypt library. This consequently prevented aide to initialize its database if the system was running in FIPS-compliant mode. The initialization routine has been corrected, and along with an extension to the libgcrypt's API introduced in the RHEA-2012:0486 advisory, aide now initializes its database as expected if run in a FIPS-compliant way. All users of aide are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/aide |
Chapter 6. Enabling alert data retention | Chapter 6. Enabling alert data retention Learn how to configure a retention period for Red Hat Advanced Cluster Security for Kubernetes alerts. With Red Hat Advanced Cluster Security for Kubernetes, you can configure the time to keep historical alerts stored. Red Hat Advanced Cluster Security for Kubernetes then deletes the older alerts after the specified time. By automatically deleting alerts that are no longer needed, you can save storage costs. The alerts for which you can configure the retention period include: Runtime alerts, both unresolved (active) and resolved. Stale deploy-time alerts that do not apply to the current deployment. Note Data retention settings are enabled by default. You can change these settings after the installation. When you upgrade Red Hat Advanced Cluster Security for Kubernetes, data retention settings are not applied unless you have enabled them before. You can configure alert retention settings by using the RHACS portal or the API. The deletion process runs every hour. Currently, you cannot change this. 6.1. Configuring alert data retention You can configure alert retention settings by using the RHACS portal. Prerequisites You must have the Administration role with read and write permissions to configure data retention. Procedure In the RHACS portal, go to Platform Configuration System Configuration . On the System Configuration view header, click Edit . Under the Data Retention Configuration section, update the number of days for each type of data: All Runtime Violations Resolved Deploy-Phase Violations Runtime Violations For Deleted Deployments Images No Longer Deployed Note To save a type of data forever, set the retention period to 0 days. Click Save . Note To configure alert data retention by using Red Hat Advanced Cluster Security for Kubernetes API, view the PutConfig API and related APIs in the ConfigService group in the API reference documentation. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/configuring/enable-alert-data-retention |
24.7.3. Related Books | 24.7.3. Related Books Apache: The Definitive Guide by Ben Laurie and Peter Laurie; O'Reilly & Associates, Inc. Reference Guide ; Red Hat, Inc - This companion manual includes instructions for migrating from Apache HTTP Server version 1.3 to Apache HTTP Server version 2.0 manually, more details about the Apache HTTP Server directives, and instructions for adding modules to the Apache HTTP Server. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/HTTPD_Configuration_Additional_Resources-Related_Books |
Chapter 2. Certification prerequisites for Red Hat OpenStack Platform Application | Chapter 2. Certification prerequisites for Red Hat OpenStack Platform Application Companies must be Partners in Red Hat Connect for Technology Partners . This program enables an ecosystem for commercial OpenStack deployments and includes numerous technology companies. You must have a support relationship with Red Hat. This can be fulfilled through the multi-vendor support network of TSANet, or through a custom support agreement. You must have a good working knowledge of Red Hat OpenStack Platform (RHOSP) including installation and configuration of the product. You must have a tested application on a supported RHOSP release. Note The RHOSP application certification does not verify if your application's intended behavior matches the application's actual behavior. This responsibility remains under your full control. Additional resources For more information about the product, see detailed product documentation on Red Hat Customer Portal To undertake the product training or certification, see Red Hat Training Page | null | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_application_and_vnf_policy_guide/con-rhosp-application-certification-prerequisites_rhosp-vnf-pol-overview-introduction |
Providing feedback on JBoss EAP documentation | Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_the_jboss_server_migration_tool/proc_providing-feedback-on-red-hat-documentation_server-migration-tool |
4.24. IPMI over LAN | 4.24. IPMI over LAN The fence agents for IPMI over LAN ( fence_ipmilan ,) Dell iDRAC ( fence_idrac ), IBM Integrated Management Module ( fence_imm ), HP iLO3 devices ( fence_ilo3 ), and HP iLO4 devices ( fence_ilo4 ) share the same implementation. Table 4.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" lists the fence device parameters used by these agents. Table 4.25. IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4 luci Field cluster.conf Attribute Description Name name A name for the fence device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. Login login The login name of a user capable of issuing power on/off commands to the given port. Password passwd The password used to authenticate the connection to the port. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Authentication Type auth Authentication type: none , password , or MD5 . Use Lanplus lanplus True or 1 . If blank, then value is False . It is recommended that you enable Lanplus to improve the security of your connection if your hardware supports it. Ciphersuite to use cipher The remote server authentication, integrity, and encryption algorithms to use for IPMIv2 lanplus connections. Privilege level privlvl The privilege level on the device. IPMI Operation Timeout timeout Timeout in seconds for IPMI operation. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. The default value is 2 seconds for fence_ipmilan , fence_idrac , fence_imm , and fence_ilo4 . The default value is 4 seconds for fence_ilo3 . Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Method to Fence method The method to fence: on/off or cycle Figure 4.19, "IPMI over LAN" shows the configuration screen for adding an IPMI over LAN device Figure 4.19. IPMI over LAN The following command creates a fence device instance for an IPMI over LAN device: The following is the cluster.conf entry for the fence_ipmilan device: | [
"ccs -f cluster.conf --addfencedev ipmitest1 agent=fence_ipmilan auth=password cipher=3 ipaddr=192.168.0.1 lanplus=on login=root passwd=password123",
"<fencedevices> <fencedevice agent=\"fence_ipmilan\" auth=\"password\" cipher=\"3\" ipaddr=\"192.168.0.1\" lanplus=\"on\" login=\"root\" name=\"ipmitest1\" passwd=\"password123\"/> </fencedevices>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-ipmi-CA |
4.2. abrt and libreport | 4.2.1. RHBA-2011:1598 - abrt and libreport bug fix and enhancement update Updated abrt and libreport packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The abrt packages contain the Automatic Bug Reporting Tool (ABRT) version 2. In comparison with ABRT version 1, this version provides more flexible configuration, which covers a variety of customer use cases that the previous version was unable to cover. It also moves a lot of data processing from the daemon to separate tools that run without root privileges, which makes the daemon less error prone and the whole processing more secure. Note: This update obsoletes the former report tool and replaces the report library to unify the reporting process in all Red Hat applications (Anaconda, setroubleshoot, ABRT). The most interesting feature for end-users is the problem solution searching: when ABRT is configured to report to the Red Hat Customer Portal, it tries to search Red Hat problem databases (such as Knowledge Base or Bugzilla) for possible solutions and refers the user to these resources if the solution is found. Bug Fixes BZ# 610603 The abrt-gui application used to list plug-ins multiple times if they were configured in the configuration file. This is now fixed. BZ# 627621 In the previous version of ABRT, a daemon restart was required for any changes in the configuration to take effect. In the new version, most of the options in the configuration file no longer require a restart. BZ# 653872 Support for retrace server has been added. Refer to https://fedorahosted.org/abrt/wiki/AbrtRetraceServer for more information about this new feature. BZ# 671354 By default, ABRT stores all problem information in the /var/spool/abrt/ directory. Previously, this path was hard coded and could not be changed in the configuration. With this update, this path can be changed in the /etc/abrt/abrt.conf configuration file. BZ# 671359 The documentation failed to cover some customer use cases. This error has been fixed, and all of these use cases are now covered in the Red Hat Enterprise Linux 6 Deployment Guide. BZ# 673173 In ABRT version 1, it was not possible to use wildcards to specify that some action should happen for any user. ABRT version 2 adds support for this functionality. BZ# 695416 The previously missing information about configuring a proxy has been added to the Red Hat Enterprise Linux 6 Deployment Guide. BZ# 707950 Previously, a bug in ABRT version 1 was preventing a local Python build from finishing. This is now fixed. BZ# 725660 The report tool and report library have been obsoleted by abrt and libreport. Users can notice the change in the problem reporting user interface of Anaconda, setroubleshoot, and ABRT. All users of ABRT are advised to upgrade to these updated packages, which provide numerous bug fixes and enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/abrt_and_libreport
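As an illustration of the configurable dump location mentioned for BZ# 671354, the relevant entry in /etc/abrt/abrt.conf might look like the line below; the option name DumpLocation reflects the ABRT 2 default configuration and should be verified against the comments shipped in your own abrt.conf before relying on it:

DumpLocation = /var/spool/abrt

Changing this value redirects where ABRT stores newly collected problem data.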
Chapter 76. Crypto (Java Cryptographic Extension) DataFormat | Chapter 76. Crypto (Java Cryptographic Extension) DataFormat Available as of Camel version 2.3 The Crypto Data Format integrates the Java Cryptographic Extension into Camel, allowing simple and flexible encryption and decryption of messages using Camel's familiar marshall and unmarshal formatting mechanism. It assumes marshalling to mean encryption to cyphertext and unmarshalling to mean decryption back to the original plaintext. This data format implements only symmetric (shared-key) encryption and decyption. 76.1. CryptoDataFormat Options The Crypto (Java Cryptographic Extension) dataformat supports 10 options, which are listed below. Name Default Java Type Description algorithm DES/CBC/PKCS5Padding String The JCE algorithm name indicating the cryptographic algorithm that will be used. Is by default DES/CBC/PKCS5Padding. cryptoProvider String The name of the JCE Security Provider that should be used. keyRef String Refers to the secret key to lookup from the register to use. initVectorRef String Refers to a byte array containing the Initialization Vector that will be used to initialize the Cipher. algorithmParameterRef String A JCE AlgorithmParameterSpec used to initialize the Cipher. Will lookup the type using the given name as a java.security.spec.AlgorithmParameterSpec type. buffersize Integer The size of the buffer used in the signature process. macAlgorithm HmacSHA1 String The JCE algorithm name indicating the Message Authentication algorithm. shouldAppendHMAC false Boolean Flag indicating that a Message Authentication Code should be calculated and appended to the encrypted data. inline false Boolean Flag indicating that the configured IV should be inlined into the encrypted data stream. Is by default false. contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 76.2. Spring Boot Auto-Configuration The component supports 33 options, which are listed below. Name Description Default Type camel.component.crypto.configuration.algorithm Sets the JCE name of the Algorithm that should be used for the signer. SHA1WithDSA String camel.component.crypto.configuration.alias Sets the alias used to query the KeyStore for keys and link java.security.cert.Certificate Certificates to be used in signing and verifying exchanges. This value can be provided at runtime via the message header org.apache.camel.component.crypto.DigitalSignatureConstants #KEYSTORE_ALIAS String camel.component.crypto.configuration.buffer-size Set the size of the buffer used to read in the Exchange payload data. 2048 Integer camel.component.crypto.configuration.certificate Set the Certificate that should be used to verify the signature in the exchange based on its payload. Certificate camel.component.crypto.configuration.certificate-name Sets the reference name for a PrivateKey that can be fond in the registry. String camel.component.crypto.configuration.clear-headers Determines if the Signature specific headers be cleared after signing and verification. Defaults to true, and should only be made otherwise at your extreme peril as vital private information such as Keys and passwords may escape if unset. 
true Boolean camel.component.crypto.configuration.crypto-operation Set the Crypto operation from that supplied after the crypto scheme in the endpoint uri e.g. crypto:sign sets sign as the operation. CryptoOperation camel.component.crypto.configuration.key-store-parameters Sets the KeyStore that can contain keys and Certficates for use in signing and verifying exchanges based on the given KeyStoreParameters. A KeyStore is typically used with an alias, either one supplied in the Route definition or dynamically via the message header CamelSignatureKeyStoreAlias. If no alias is supplied and there is only a single entry in the Keystore, then this single entry will be used. KeyStoreParameters camel.component.crypto.configuration.keystore Sets the KeyStore that can contain keys and Certficates for use in signing and verifying exchanges. A KeyStore is typically used with an alias, either one supplied in the Route definition or dynamically via the message header CamelSignatureKeyStoreAlias. If no alias is supplied and there is only a single entry in the Keystore, then this single entry will be used. KeyStore camel.component.crypto.configuration.keystore-name Sets the reference name for a Keystore that can be fond in the registry. String camel.component.crypto.configuration.name The logical name of this operation. String camel.component.crypto.configuration.password Sets the password used to access an aliased PrivateKey in the KeyStore. Character[] camel.component.crypto.configuration.private-key Set the PrivateKey that should be used to sign the exchange PrivateKey camel.component.crypto.configuration.private-key-name Sets the reference name for a PrivateKey that can be fond in the registry. String camel.component.crypto.configuration.provider Set the id of the security provider that provides the configured Signature algorithm. String camel.component.crypto.configuration.public-key Set the PublicKey that should be used to verify the signature in the exchange. PublicKey camel.component.crypto.configuration.public-key-name references that should be resolved when the context changes String camel.component.crypto.configuration.secure-random Set the SecureRandom used to initialize the Signature service SecureRandom camel.component.crypto.configuration.secure-random-name Sets the reference name for a SecureRandom that can be fond in the registry. String camel.component.crypto.configuration.signature-header-name Set the name of the message header that should be used to store the base64 encoded signature. This defaults to 'CamelDigitalSignature' String camel.component.crypto.enabled Enable crypto component true Boolean camel.component.crypto.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.dataformat.crypto.algorithm The JCE algorithm name indicating the cryptographic algorithm that will be used. Is by default DES/CBC/PKCS5Padding. DES/CBC/PKCS5Padding String camel.dataformat.crypto.algorithm-parameter-ref A JCE AlgorithmParameterSpec used to initialize the Cipher. Will lookup the type using the given name as a java.security.spec.AlgorithmParameterSpec type. String camel.dataformat.crypto.buffersize The size of the buffer used in the signature process. Integer camel.dataformat.crypto.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. 
For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.crypto.crypto-provider The name of the JCE Security Provider that should be used. String camel.dataformat.crypto.enabled Enable crypto dataformat true Boolean camel.dataformat.crypto.init-vector-ref Refers to a byte array containing the Initialization Vector that will be used to initialize the Cipher. String camel.dataformat.crypto.inline Flag indicating that the configured IV should be inlined into the encrypted data stream. Is by default false. false Boolean camel.dataformat.crypto.key-ref Refers to the secret key to lookup from the register to use. String camel.dataformat.crypto.mac-algorithm The JCE algorithm name indicating the Message Authentication algorithm. HmacSHA1 String camel.dataformat.crypto.should-append-h-m-a-c Flag indicating that a Message Authentication Code should be calculated and appended to the encrypted data. false Boolean ND 76.3. Basic Usage At its most basic all that is required to encrypt/decrypt an exchange is a shared secret key. If one or more instances of the Crypto data format are configured with this key the format can be used to encrypt the payload in one route (or part of one) and decrypted in another. For example, using the Java DSL as follows: KeyGenerator generator = KeyGenerator.getInstance("DES"); CryptoDataFormat cryptoFormat = new CryptoDataFormat("DES", generator.generateKey()); from("direct:basic-encryption") .marshal(cryptoFormat) .to("mock:encrypted") .unmarshal(cryptoFormat) .to("mock:unencrypted"); In Spring the dataformat is configured first and then used in routes <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <dataFormats> <crypto id="basic" algorithm="DES" keyRef="desKey" /> </dataFormats> ... <route> <from uri="direct:basic-encryption" /> <marshal ref="basic" /> <to uri="mock:encrypted" /> <unmarshal ref="basic" /> <to uri="mock:unencrypted" /> </route> </camelContext> 76.4. Specifying the Encryption Algorithm Changing the algorithm is a matter of supplying the JCE algorithm name. If you change the algorithm you will need to use a compatible key. KeyGenerator generator = KeyGenerator.getInstance("DES"); CryptoDataFormat cryptoFormat = new CryptoDataFormat("DES", generator.generateKey()); cryptoFormat.setShouldAppendHMAC(true); cryptoFormat.setMacAlgorithm("HmacMD5"); from("direct:hmac-algorithm") .marshal(cryptoFormat) .to("mock:encrypted") .unmarshal(cryptoFormat) .to("mock:unencrypted"); A list of the available algorithms in Java 7 is available via the Java Cryptography Architecture Standard Algorithm Name Documentation. 76.5. Specifying an Initialization Vector Some crypto algorithms, particularly block algorithms, require configuration with an initial block of data known as an Initialization Vector. In the JCE this is passed as an AlgorithmParameterSpec when the Cipher is initialized. To use such a vector with the CryptoDataFormat you can configure it with a byte[] containing the required data e.g. 
KeyGenerator generator = KeyGenerator.getInstance("DES"); byte[] initializationVector = new byte[] {0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07}; CryptoDataFormat cryptoFormat = new CryptoDataFormat("DES/CBC/PKCS5Padding", generator.generateKey()); cryptoFormat.setInitializationVector(initializationVector); from("direct:init-vector") .marshal(cryptoFormat) .to("mock:encrypted") .unmarshal(cryptoFormat) .to("mock:unencrypted"); or with spring, supplying a reference to a byte[] <crypto id="initvector" algorithm="DES/CBC/PKCS5Padding" keyRef="desKey" initVectorRef="initializationVector" /> The same vector is required in both the encryption and decryption phases. As it is not necessary to keep the IV a secret, the DataFormat allows for it to be inlined into the encrypted data and subsequently read out in the decryption phase to initialize the Cipher. To inline the IV, set the inline flag. KeyGenerator generator = KeyGenerator.getInstance("DES"); byte[] initializationVector = new byte[] {0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07}; SecretKey key = generator.generateKey(); CryptoDataFormat cryptoFormat = new CryptoDataFormat("DES/CBC/PKCS5Padding", key); cryptoFormat.setInitializationVector(initializationVector); cryptoFormat.setShouldInlineInitializationVector(true); CryptoDataFormat decryptFormat = new CryptoDataFormat("DES/CBC/PKCS5Padding", key); decryptFormat.setShouldInlineInitializationVector(true); from("direct:inline") .marshal(cryptoFormat) .to("mock:encrypted") .unmarshal(decryptFormat) .to("mock:unencrypted"); or with spring. <crypto id="inline" algorithm="DES/CBC/PKCS5Padding" keyRef="desKey" initVectorRef="initializationVector" inline="true" /> <crypto id="inline-decrypt" algorithm="DES/CBC/PKCS5Padding" keyRef="desKey" inline="true" /> For more information on the use of Initialization Vectors, consult http://en.wikipedia.org/wiki/Initialization_vector http://www.herongyang.com/Cryptography/ http://en.wikipedia.org/wiki/Block_cipher_modes_of_operation 76.6. Hashed Message Authentication Codes (HMAC) To avoid attacks against the encrypted data while it is in transit the CryptoDataFormat can also calculate a Message Authentication Code for the encrypted exchange contents based on a configurable MAC algorithm. The calculated HMAC is appended to the stream after encryption. It is separated from the stream in the decryption phase. The MAC is recalculated and verified against the transmitted version to ensure nothing was tampered with in transit. For more information on Message Authentication Codes see http://en.wikipedia.org/wiki/HMAC KeyGenerator generator = KeyGenerator.getInstance("DES"); CryptoDataFormat cryptoFormat = new CryptoDataFormat("DES", generator.generateKey()); cryptoFormat.setShouldAppendHMAC(true); from("direct:hmac") .marshal(cryptoFormat) .to("mock:encrypted") .unmarshal(cryptoFormat) .to("mock:unencrypted"); or with spring. <crypto id="hmac" algorithm="DES" keyRef="desKey" shouldAppendHMAC="true" /> By default the HMAC is calculated using the HmacSHA1 mac algorithm though this can be easily changed by supplying a different algorithm name.
See here for how to check what algorithms are available through the configured security providers KeyGenerator generator = KeyGenerator.getInstance("DES"); CryptoDataFormat cryptoFormat = new CryptoDataFormat("DES", generator.generateKey()); cryptoFormat.setShouldAppendHMAC(true); cryptoFormat.setMacAlgorithm("HmacMD5"); from("direct:hmac-algorithm") .marshal(cryptoFormat) .to("mock:encrypted") .unmarshal(cryptoFormat) .to("mock:unencrypted"); or with spring. <crypto id="hmac-algorithm" algorithm="DES" keyRef="desKey" macAlgorithm="HmacMD5" shouldAppendHMAC="true" /> 76.7. Supplying Keys Dynamically When using a Recipient list or similar EIP the recipient of an exchange can vary dynamically. Using the same key across all recipients may be neither feasible nor desirable. It would be useful to be able to specify keys dynamically on a per exchange basis. The exchange could then be dynamically enriched with the key of its target recipient before being processed by the data format. To facilitate this the DataFormat allows for keys to be supplied dynamically via the message headers below CryptoDataFormat.KEY "CamelCryptoKey" CryptoDataFormat cryptoFormat = new CryptoDataFormat("DES", null); /** * Note: the header containing the key should be cleared after * marshalling to stop it from leaking by accident and * potentially being compromised. The processor version below is * arguably better as the key is left in the header when you use * the DSL leaks the fact that camel encryption was used. */ from("direct:key-in-header-encrypt") .marshal(cryptoFormat) .removeHeader(CryptoDataFormat.KEY) .to("mock:encrypted"); from("direct:key-in-header-decrypt").unmarshal(cryptoFormat).process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().getHeaders().remove(CryptoDataFormat.KEY); exchange.getOut().copyFrom(exchange.getIn()); } }).to("mock:unencrypted"); or with spring. <crypto id="nokey" algorithm="DES" /> 76.8. Dependencies To use the Crypto dataformat in your camel routes you need to add the following dependency to your pom. <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-crypto</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 76.9. See Also Data Format Crypto (Digital Signatures) http://www.bouncycastle.org/java.html | [
"KeyGenerator generator = KeyGenerator.getInstance(\"DES\"); CryptoDataFormat cryptoFormat = new CryptoDataFormat(\"DES\", generator.generateKey()); from(\"direct:basic-encryption\") .marshal(cryptoFormat) .to(\"mock:encrypted\") .unmarshal(cryptoFormat) .to(\"mock:unencrypted\");",
"<camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <dataFormats> <crypto id=\"basic\" algorithm=\"DES\" keyRef=\"desKey\" /> </dataFormats> <route> <from uri=\"direct:basic-encryption\" /> <marshal ref=\"basic\" /> <to uri=\"mock:encrypted\" /> <unmarshal ref=\"basic\" /> <to uri=\"mock:unencrypted\" /> </route> </camelContext>",
"KeyGenerator generator = KeyGenerator.getInstance(\"DES\"); CryptoDataFormat cryptoFormat = new CryptoDataFormat(\"DES\", generator.generateKey()); cryptoFormat.setShouldAppendHMAC(true); cryptoFormat.setMacAlgorithm(\"HmacMD5\"); from(\"direct:hmac-algorithm\") .marshal(cryptoFormat) .to(\"mock:encrypted\") .unmarshal(cryptoFormat) .to(\"mock:unencrypted\");",
"KeyGenerator generator = KeyGenerator.getInstance(\"DES\"); byte[] initializationVector = new byte[] {0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07}; CryptoDataFormat cryptoFormat = new CryptoDataFormat(\"DES/CBC/PKCS5Padding\", generator.generateKey()); cryptoFormat.setInitializationVector(initializationVector); from(\"direct:init-vector\") .marshal(cryptoFormat) .to(\"mock:encrypted\") .unmarshal(cryptoFormat) .to(\"mock:unencrypted\");",
"<crypto id=\"initvector\" algorithm=\"DES/CBC/PKCS5Padding\" keyRef=\"desKey\" initVectorRef=\"initializationVector\" />",
"KeyGenerator generator = KeyGenerator.getInstance(\"DES\"); byte[] initializationVector = new byte[] {0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07}; SecretKey key = generator.generateKey(); CryptoDataFormat cryptoFormat = new CryptoDataFormat(\"DES/CBC/PKCS5Padding\", key); cryptoFormat.setInitializationVector(initializationVector); cryptoFormat.setShouldInlineInitializationVector(true); CryptoDataFormat decryptFormat = new CryptoDataFormat(\"DES/CBC/PKCS5Padding\", key); decryptFormat.setShouldInlineInitializationVector(true); from(\"direct:inline\") .marshal(cryptoFormat) .to(\"mock:encrypted\") .unmarshal(decryptFormat) .to(\"mock:unencrypted\");",
"<crypto id=\"inline\" algorithm=\"DES/CBC/PKCS5Padding\" keyRef=\"desKey\" initVectorRef=\"initializationVector\" inline=\"true\" /> <crypto id=\"inline-decrypt\" algorithm=\"DES/CBC/PKCS5Padding\" keyRef=\"desKey\" inline=\"true\" />",
"KeyGenerator generator = KeyGenerator.getInstance(\"DES\"); CryptoDataFormat cryptoFormat = new CryptoDataFormat(\"DES\", generator.generateKey()); cryptoFormat.setShouldAppendHMAC(true); from(\"direct:hmac\") .marshal(cryptoFormat) .to(\"mock:encrypted\") .unmarshal(cryptoFormat) .to(\"mock:unencrypted\");",
"<crypto id=\"hmac\" algorithm=\"DES\" keyRef=\"desKey\" shouldAppendHMAC=\"true\" />",
"KeyGenerator generator = KeyGenerator.getInstance(\"DES\"); CryptoDataFormat cryptoFormat = new CryptoDataFormat(\"DES\", generator.generateKey()); cryptoFormat.setShouldAppendHMAC(true); cryptoFormat.setMacAlgorithm(\"HmacMD5\"); from(\"direct:hmac-algorithm\") .marshal(cryptoFormat) .to(\"mock:encrypted\") .unmarshal(cryptoFormat) .to(\"mock:unencrypted\");",
"<crypto id=\"hmac-algorithm\" algorithm=\"DES\" keyRef=\"desKey\" macAlgorithm=\"HmacMD5\" shouldAppendHMAC=\"true\" />",
"CryptoDataFormat cryptoFormat = new CryptoDataFormat(\"DES\", null); /** * Note: the header containing the key should be cleared after * marshalling to stop it from leaking by accident and * potentially being compromised. The processor version below is * arguably better as the key is left in the header when you use * the DSL leaks the fact that camel encryption was used. */ from(\"direct:key-in-header-encrypt\") .marshal(cryptoFormat) .removeHeader(CryptoDataFormat.KEY) .to(\"mock:encrypted\"); from(\"direct:key-in-header-decrypt\").unmarshal(cryptoFormat).process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().getHeaders().remove(CryptoDataFormat.KEY); exchange.getOut().copyFrom(exchange.getIn()); } }).to(\"mock:unencrypted\");",
"<crypto id=\"nokey\" algorithm=\"DES\" />",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-crypto</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/crypto-dataformat |
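Since the Spring Boot auto-configuration options listed above map one-to-one onto configuration properties, a minimal application.properties sketch for this data format could look as follows; the desKey reference is a placeholder for a SecretKey bean that you would register in the Camel registry yourself:

camel.dataformat.crypto.algorithm=DES/CBC/PKCS5Padding
camel.dataformat.crypto.key-ref=desKey
camel.dataformat.crypto.should-append-h-m-a-c=true
camel.dataformat.crypto.mac-algorithm=HmacMD5

These entries correspond to the algorithm, keyRef, shouldAppendHMAC, and macAlgorithm settings shown in the XML and Java examples earlier in the section.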
Chapter 5. ClusterOperator [config.openshift.io/v1] | Chapter 5. ClusterOperator [config.openshift.io/v1] Description ClusterOperator is the Custom Resource object which holds the current state of an operator. This object is used by operators to convey their state to the rest of the cluster. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds configuration that could apply to any operator. status object status holds the information about the state of an operator. It is consistent with status information across the Kubernetes ecosystem. 5.1.1. .spec Description spec holds configuration that could apply to any operator. Type object 5.1.2. .status Description status holds the information about the state of an operator. It is consistent with status information across the Kubernetes ecosystem. Type object Property Type Description conditions array conditions describes the state of the operator's managed and monitored components. conditions[] object ClusterOperatorStatusCondition represents the state of the operator's managed and monitored components. extension `` extension contains any additional status information specific to the operator which owns this status object. relatedObjects array relatedObjects is a list of objects that are "interesting" or related to this operator. Common uses are: 1. the detailed resource driving the operator 2. operator namespaces 3. operand namespaces relatedObjects[] object ObjectReference contains enough information to let you inspect or modify the referred object. versions array versions is a slice of operator and operand version tuples. Operators which manage multiple operands will have multiple operand entries in the array. Available operators must report the version of the operator itself with the name "operator". An operator reports a new "operator" version when it has rolled out the new version to all of its operands. versions[] object 5.1.3. .status.conditions Description conditions describes the state of the operator's managed and monitored components. Type array 5.1.4. .status.conditions[] Description ClusterOperatorStatusCondition represents the state of the operator's managed and monitored components. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string message provides additional information about the current condition. This is only to be consumed by humans. It may contain Line Feed characters (U+000A), which should be rendered as new lines. 
reason string reason is the CamelCase reason for the condition's current status. status string status of the condition, one of True, False, Unknown. type string type specifies the aspect reported by this condition. 5.1.5. .status.relatedObjects Description relatedObjects is a list of objects that are "interesting" or related to this operator. Common uses are: 1. the detailed resource driving the operator 2. operator namespaces 3. operand namespaces Type array 5.1.6. .status.relatedObjects[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Required group name resource Property Type Description group string group of the referent. name string name of the referent. namespace string namespace of the referent. resource string resource of the referent. 5.1.7. .status.versions Description versions is a slice of operator and operand version tuples. Operators which manage multiple operands will have multiple operand entries in the array. Available operators must report the version of the operator itself with the name "operator". An operator reports a new "operator" version when it has rolled out the new version to all of its operands. Type array 5.1.8. .status.versions[] Description Type object Required name version Property Type Description name string name is the name of the particular operand this version is for. It usually matches container images, not operators. version string version indicates which version of a particular operand is currently being managed. It must always match the Available operand. If 1.0.0 is Available, then this must indicate 1.0.0 even if the operator is trying to rollout 1.1.0 5.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/clusteroperators DELETE : delete collection of ClusterOperator GET : list objects of kind ClusterOperator POST : create a ClusterOperator /apis/config.openshift.io/v1/clusteroperators/{name} DELETE : delete a ClusterOperator GET : read the specified ClusterOperator PATCH : partially update the specified ClusterOperator PUT : replace the specified ClusterOperator /apis/config.openshift.io/v1/clusteroperators/{name}/status GET : read status of the specified ClusterOperator PATCH : partially update status of the specified ClusterOperator PUT : replace status of the specified ClusterOperator 5.2.1. /apis/config.openshift.io/v1/clusteroperators Table 5.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ClusterOperator Table 5.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
5.2. API endpoints
The following API endpoints are available:
/apis/config.openshift.io/v1/clusteroperators
DELETE : delete collection of ClusterOperator
GET : list objects of kind ClusterOperator
POST : create a ClusterOperator
/apis/config.openshift.io/v1/clusteroperators/{name}
DELETE : delete a ClusterOperator
GET : read the specified ClusterOperator
PATCH : partially update the specified ClusterOperator
PUT : replace the specified ClusterOperator
/apis/config.openshift.io/v1/clusteroperators/{name}/status
GET : read status of the specified ClusterOperator
PATCH : partially update status of the specified ClusterOperator
PUT : replace status of the specified ClusterOperator

5.2.1. /apis/config.openshift.io/v1/clusteroperators

Table 5.1. Global query parameters
pretty (string): If 'true', then the output is pretty printed.

HTTP method: DELETE
Description: delete collection of ClusterOperator

Table 5.2. Query parameters
allowWatchBookmarks (boolean): allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.
continue (string): The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, and the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
fieldSelector (string): A selector to restrict the list of returned objects by their fields. Defaults to everything.
labelSelector (string): A selector to restrict the list of returned objects by their labels. Defaults to everything.
limit (integer): limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
resourceVersion (string): resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
resourceVersionMatch (string): resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
timeoutSeconds (integer): Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
watch (boolean): Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.

Table 5.3. HTTP responses
200 - OK: Status schema
401 - Unauthorized: Empty

HTTP method: GET
Description: list objects of kind ClusterOperator
Table 5.4. Query parameters
allowWatchBookmarks (boolean): allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.
continue (string): The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, and the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
fieldSelector (string): A selector to restrict the list of returned objects by their fields. Defaults to everything.
labelSelector (string): A selector to restrict the list of returned objects by their labels. Defaults to everything.
limit (integer): limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
resourceVersion (string): resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
resourceVersionMatch (string): resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
timeoutSeconds (integer): Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
watch (boolean): Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.

Table 5.5. HTTP responses
200 - OK: ClusterOperatorList schema
401 - Unauthorized: Empty

HTTP method: POST
Description: create a ClusterOperator

Table 5.6. Query parameters
dryRun (string): When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed.
fieldManager (string): fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
fieldValidation (string): fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.

Table 5.7. Body parameters
body: ClusterOperator schema

Table 5.8. HTTP responses
200 - OK: ClusterOperator schema
201 - Created: ClusterOperator schema
202 - Accepted: ClusterOperator schema
401 - Unauthorized: Empty
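ClusterOperator resources are normally created by the cluster's own operators rather than by administrators, but as an illustration of the collection endpoint above, the POST method accepts a manifest as small as the following; the name is hypothetical, and spec may simply be an empty object. The same collection can be listed with the GET method, for example with oc get clusteroperators or oc get --raw /apis/config.openshift.io/v1/clusteroperators.

apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  name: example-operator   # hypothetical name
spec: {}                   # spec is required; an empty object should satisfy the schema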
5.2.2. /apis/config.openshift.io/v1/clusteroperators/{name}

Table 5.9. Global path parameters
name (string): name of the ClusterOperator

Table 5.10. Global query parameters
pretty (string): If 'true', then the output is pretty printed.

HTTP method: DELETE
Description: delete a ClusterOperator

Table 5.11. Query parameters
dryRun (string): When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed.
gracePeriodSeconds (integer): The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. Zero means delete immediately.
orphanDependents (boolean): Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
propagationPolicy (string): Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.

Table 5.12. Body parameters
body: DeleteOptions schema

Table 5.13. HTTP responses
200 - OK: Status schema
202 - Accepted: Status schema
401 - Unauthorized: Empty

HTTP method: GET
Description: read the specified ClusterOperator

Table 5.14. Query parameters
resourceVersion (string): resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.

Table 5.15. HTTP responses
200 - OK: ClusterOperator schema
401 - Unauthorized: Empty

HTTP method: PATCH
Description: partially update the specified ClusterOperator

Table 5.16. Query parameters
dryRun (string): When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed.
fieldManager (string): fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
fieldValidation (string): fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
Table 5.17. Body parameters
body: Patch schema

Table 5.18. HTTP responses
200 - OK: ClusterOperator schema
401 - Unauthorized: Empty

HTTP method: PUT
Description: replace the specified ClusterOperator

Table 5.19. Query parameters
dryRun (string): When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed.
fieldManager (string): fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
fieldValidation (string): fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.

Table 5.20. Body parameters
body: ClusterOperator schema

Table 5.21. HTTP responses
200 - OK: ClusterOperator schema
201 - Created: ClusterOperator schema
401 - Unauthorized: Empty
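As a sketch of the PATCH method for a named ClusterOperator, the following merge-patch body adds an annotation to the object's metadata; the annotation key and value are hypothetical. The body is shown as YAML for readability, and on the wire it is sent as JSON with the matching Content-Type, for example by a client such as oc patch clusteroperator <name> --type=merge --patch-file=<file>.

metadata:
  annotations:
    example.com/triage-note: "investigating the Degraded condition"   # hypothetical annotation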
5.2.3. /apis/config.openshift.io/v1/clusteroperators/{name}/status

Table 5.22. Global path parameters
name (string): name of the ClusterOperator

Table 5.23. Global query parameters
pretty (string): If 'true', then the output is pretty printed.

HTTP method: GET
Description: read status of the specified ClusterOperator

Table 5.24. Query parameters
resourceVersion (string): resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.

Table 5.25. HTTP responses
200 - OK: ClusterOperator schema
401 - Unauthorized: Empty

HTTP method: PATCH
Description: partially update status of the specified ClusterOperator

Table 5.26. Query parameters
dryRun (string): When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed.
fieldManager (string): fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
fieldValidation (string): fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.

Table 5.27. Body parameters
body: Patch schema

Table 5.28. HTTP responses
200 - OK: ClusterOperator schema
401 - Unauthorized: Empty

HTTP method: PUT
Description: replace status of the specified ClusterOperator

Table 5.29. Query parameters
dryRun (string): When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed.
fieldManager (string): fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
fieldValidation (string): fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.

Table 5.30. Body parameters
body: ClusterOperator schema
Table 5.31. HTTP responses
200 - OK: ClusterOperator schema
201 - Created: ClusterOperator schema
401 - Unauthorized: Empty
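The status endpoints operate on the same ClusterOperator schema, but only the status stanza is meaningful to them, and in practice status is written by the operator that owns the object, so manual updates are rarely appropriate. For illustration, the current status can be read with a raw request such as oc get --raw /apis/config.openshift.io/v1/clusteroperators/<name>/status, and a PUT to the same path would carry a full object in which only the status portion is applied, for example:

apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  name: example-operator    # hypothetical name
spec: {}
status:
  versions:
  - name: operator
    version: 1.0.1          # illustrative: only the status stanza is applied by the /status endpoints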