Chapter 3. Installing Data Grid Operator
Chapter 3. Installing Data Grid Operator Install Data Grid Operator into an OpenShift namespace to create and manage Data Grid clusters. 3.1. Installing Data Grid Operator on Red Hat OpenShift Create subscriptions to Data Grid Operator on OpenShift so you can install different Data Grid versions and receive automatic updates. Automatic updates apply to Data Grid Operator first and then to each Data Grid node. Data Grid Operator updates clusters one node at a time, gracefully shutting down each node and then bringing it back online with the updated version before going on to the next node. Prerequisites Access to OperatorHub running on OpenShift. Some OpenShift environments, such as OpenShift Container Platform, can require administrator credentials. Have an OpenShift project for Data Grid Operator if you plan to install it into a specific namespace. Procedure Log in to the OpenShift Web Console. Navigate to OperatorHub . Find and select Data Grid Operator. Select Install and continue to Create Operator Subscription . Specify options for your subscription. Installation Mode You can install Data Grid Operator into a Specific namespace or All namespaces. Update Channel Get updates for Data Grid Operator 8.4.x. Approval Strategies Automatically install updates from the 8.4.x channel or require approval before installation. Select Subscribe to install Data Grid Operator. Navigate to Installed Operators to verify the Data Grid Operator installation. 3.2. Installing Data Grid Operator with the native CLI plugin Install Data Grid Operator with the native Data Grid CLI plugin, kubectl-infinispan . Prerequisites Have kubectl-infinispan on your PATH . Procedure Run the oc infinispan install command to create Data Grid Operator subscriptions, for example: Verify the installation. Tip Use oc infinispan install --help for command options and descriptions. 3.3. Installing Data Grid Operator with an OpenShift client You can use the oc client to create Data Grid Operator subscriptions as an alternative to installing through the OperatorHub or with the native Data Grid CLI. Prerequisites Have an oc client. Procedure Set up projects. Create a project for Data Grid Operator. If you want Data Grid Operator to control a specific Data Grid cluster only, create a project for that cluster. 1 Creates a project into which you install Data Grid Operator. 2 Optionally creates a project for a specific Data Grid cluster if you do not want Data Grid Operator to watch all projects. Create an OperatorGroup resource. Control all Data Grid clusters Control a specific Data Grid cluster Create a subscription for Data Grid Operator. Note If you want to manually approve updates from the 8.4.x channel, change the value of the spec.installPlanApproval field to Manual . Verify the installation.
[ "infinispan install --channel=8.4.x --source=redhat-operators --source-namespace=openshift-marketplace", "get pods -n openshift-operators | grep infinispan-operator NAME READY STATUS infinispan-operator-<id> 1/1 Running", "new-project USD{INSTALL_NAMESPACE} 1 new-project USD{WATCH_NAMESPACE} 2", "apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: datagrid namespace: USD{INSTALL_NAMESPACE} EOF", "apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: datagrid namespace: USD{INSTALL_NAMESPACE} spec: targetNamespaces: - USD{WATCH_NAMESPACE} EOF", "apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: datagrid-operator namespace: USD{INSTALL_NAMESPACE} spec: channel: 8.4.x installPlanApproval: Automatic name: datagrid source: redhat-operators sourceNamespace: openshift-marketplace EOF", "get pods -n USD{INSTALL_NAMESPACE} NAME READY STATUS infinispan-operator-<id> 1/1 Running" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_guide/installation
Installing on a single node
Installing on a single node OpenShift Container Platform 4.15 Installing OpenShift Container Platform on a single node Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_a_single_node/index
Chapter 4. Configuring the instrumentation
Chapter 4. Configuring the instrumentation The Red Hat build of OpenTelemetry Operator uses an Instrumentation custom resource that defines the configuration of the instrumentation. 4.1. Auto-instrumentation in the Red Hat build of OpenTelemetry Operator Auto-instrumentation in the Red Hat build of OpenTelemetry Operator can automatically instrument an application without manual code changes. Developers and administrators can monitor applications with minimal effort and changes to the existing codebase. Auto-instrumentation runs as follows: The Red Hat build of OpenTelemetry Operator injects an init-container, or a sidecar container for Go, to add the instrumentation libraries for the programming language of the instrumented application. The Red Hat build of OpenTelemetry Operator sets the required environment variables in the application's runtime environment. These variables configure the auto-instrumentation libraries to collect traces, metrics, and logs and send them to the appropriate OpenTelemetry Collector or another telemetry backend. The injected libraries automatically instrument your application by connecting to known frameworks and libraries, such as web servers or database clients, to collect telemetry data. The source code of the instrumented application is not modified. Once the application is running with the injected instrumentation, the application automatically generates telemetry data, which is sent to a designated OpenTelemetry Collector or an external OTLP endpoint for further processing. Auto-instrumentation enables you to start collecting telemetry data quickly without having to manually integrate the OpenTelemetry SDK into your application code. However, some applications might require specific configurations or custom manual instrumentation. 4.2. OpenTelemetry instrumentation configuration options The Red Hat build of OpenTelemetry can inject and configure the OpenTelemetry auto-instrumentation libraries into your workloads. Currently, the project supports injection of the instrumentation libraries from Go, Java, Node.js, Python, .NET, and the Apache HTTP Server ( httpd ). Important The Red Hat build of OpenTelemetry Operator only supports the injection mechanism of the instrumentation libraries but does not support instrumentation libraries or upstream images. Customers can build their own instrumentation images or use community images. 4.2.1. Instrumentation options Instrumentation options are specified in an Instrumentation custom resource (CR). Sample Instrumentation CR apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: "20" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: "0.25" java: env: - name: OTEL_JAVAAGENT_DEBUG value: "true" Table 4.1. Parameters used by the Operator to define the Instrumentation Parameter Description Values env Common environment variables to define across all the instrumentations. exporter Exporter configuration. propagators Propagators defines inter-process context propagation configuration. tracecontext , baggage , b3 , b3multi , jaeger , ottrace , none resource Resource attributes configuration. sampler Sampling configuration. apacheHttpd Configuration for the Apache HTTP Server instrumentation. dotnet Configuration for the .NET instrumentation. go Configuration for the Go instrumentation. 
java Configuration for the Java instrumentation. nodejs Configuration for the Node.js instrumentation. python Configuration for the Python instrumentation. Table 4.2. Default protocol for auto-instrumentation Auto-instrumentation Default protocol Java 1.x otlp/grpc Java 2.x otlp/http Python otlp/http .NET otlp/http Go otlp/http Apache HTTP Server otlp/grpc 4.2.2. Configuration of the OpenTelemetry SDK variables You can use the instrumentation.opentelemetry.io/inject-sdk annotation in the OpenTelemetry Collector custom resource to instruct the Red Hat build of OpenTelemetry Operator to inject some of the following OpenTelemetry SDK environment variables, depending on the Instrumentation CR, into your pod: OTEL_SERVICE_NAME OTEL_TRACES_SAMPLER OTEL_TRACES_SAMPLER_ARG OTEL_PROPAGATORS OTEL_RESOURCE_ATTRIBUTES OTEL_EXPORTER_OTLP_ENDPOINT OTEL_EXPORTER_OTLP_CERTIFICATE OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE OTEL_EXPORTER_OTLP_CLIENT_KEY Table 4.3. Values for the instrumentation.opentelemetry.io/inject-sdk annotation Value Description "true" Injects the Instrumentation resource with the default name from the current namespace. "false" Injects no Instrumentation resource. "<instrumentation_name>" Specifies the name of the Instrumentation resource to inject from the current namespace. "<namespace>/<instrumentation_name>" Specifies the name of the Instrumentation resource to inject from another namespace. 4.2.3. Exporter configuration Although the Instrumentation custom resource supports setting up one or more exporters per signal, auto-instrumentation configures only the OTLP Exporter. So you must configure the endpoint to point to the OTLP Receiver on the Collector. Sample exporter TLS CA configuration using a config map apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation # ... spec # ... exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: configMapName: ca-bundle 2 ca_file: service-ca.crt 3 # ... 1 Specifies the OTLP endpoint using the HTTPS scheme and TLS. 2 Specifies the name of the config map. The config map must already exist in the namespace of the pod injecting the auto-instrumentation. 3 Points to the CA certificate in the config map or the absolute path to the certificate if the certificate is already present in the workload file system. Sample exporter mTLS configuration using a Secret apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation # ... spec # ... exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: secretName: serving-certs 2 ca_file: service-ca.crt 3 cert_file: tls.crt 4 key_file: tls.key 5 # ... 1 Specifies the OTLP endpoint using the HTTPS scheme and TLS. 2 Specifies the name of the Secret for the ca_file , cert_file , and key_file values. The Secret must already exist in the namespace of the pod injecting the auto-instrumentation. 3 Points to the CA certificate in the Secret or the absolute path to the certificate if the certificate is already present in the workload file system. 4 Points to the client certificate in the Secret or the absolute path to the certificate if the certificate is already present in the workload file system. 5 Points to the client key in the Secret or the absolute path to a key if the key is already present in the workload file system. Note You can provide the CA certificate in a config map or Secret. If you provide it in both, the config map takes higher precedence than the Secret. 
Example configuration for CA bundle injection by using a config map and Instrumentation CR apiVersion: v1 kind: ConfigMap metadata: name: otelcol-cabundle namespace: tutorial-application annotations: service.beta.openshift.io/inject-cabundle: "true" # ... --- apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: my-instrumentation spec: exporter: endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317 tls: configMapName: otelcol-cabundle ca: service-ca.crt # ... 4.2.4. Configuration of the Apache HTTP Server auto-instrumentation Important The Apache HTTP Server auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Table 4.4. Parameters for the .spec.apacheHttpd field Name Description Default attrs Attributes specific to the Apache HTTP Server. configPath Location of the Apache HTTP Server configuration. /usr/local/apache2/conf env Environment variables specific to the Apache HTTP Server. image Container image with the Apache SDK and auto-instrumentation. resourceRequirements The compute resource requirements. version Apache HTTP Server version. 2.4 The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-apache-httpd: "true" 4.2.5. Configuration of the .NET auto-instrumentation Important The .NET auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to .NET. image Container image with the .NET SDK and auto-instrumentation. resourceRequirements The compute resource requirements. For the .NET auto-instrumentation, the required OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to 4317 . The .NET autoinstrumentation uses http/proto by default, and the telemetry data must be set to the 4318 port. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-dotnet: "true" 4.2.6. Configuration of the Go auto-instrumentation Important The Go auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Go. image Container image with the Go SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-go: "true" Additional permissions required for the Go auto-instrumentation in the OpenShift cluster apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - "SYS_PTRACE" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny Tip The CLI command for applying the permissions for the Go auto-instrumentation in the OpenShift cluster is as follows: USD oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account> 4.2.7. Configuration of the Java auto-instrumentation Important The Java auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Java. image Container image with the Java SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-java: "true" 4.2.8. Configuration of the Node.js auto-instrumentation Important The Node.js auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Node.js. image Container image with the Node.js SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotations to enable injection instrumentation.opentelemetry.io/inject-nodejs: "true" instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/path/to/container/executable" The instrumentation.opentelemetry.io/otel-go-auto-target-exe annotation sets the value for the required OTEL_GO_AUTO_TARGET_EXE environment variable. 4.2.9. Configuration of the Python auto-instrumentation Important The Python auto-instrumentation is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Python. image Container image with the Python SDK and auto-instrumentation. resourceRequirements The compute resource requirements. For Python auto-instrumentation, the OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to 4317 . Python auto-instrumentation uses http/proto by default, and the telemetry data must be set to the 4318 port. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-python: "true" 4.2.10. Multi-container pods The instrumentation is run on the first container that is available by default according to the pod specification. In some cases, you can also specify target containers for injection. Pod annotation instrumentation.opentelemetry.io/container-names: "<container_1>,<container_2>" Note The Go auto-instrumentation does not support multi-container auto-instrumentation injection. 4.2.11. Multi-container pods with multiple instrumentations Injecting instrumentation for an application language to one or more containers in a multi-container pod requires the following annotation: instrumentation.opentelemetry.io/<application_language>-container-names: "<container_1>,<container_2>" 1 1 You can inject instrumentation for only one language per container. For the list of supported <application_language> values, see the following table. Table 4.5. Supported values for the <application_language> Language Value for <application_language> ApacheHTTPD apache DotNet dotnet Java java NGINX inject-nginx NodeJS nodejs Python python SDK sdk 4.2.12. Using the instrumentation CR with Service Mesh When using the instrumentation custom resource (CR) with Red Hat OpenShift Service Mesh, you must use the b3multi propagator.
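Section 4.2.12 states that the b3multi propagator is required when the Instrumentation custom resource is used with Red Hat OpenShift Service Mesh, and Table 4.1 lists b3multi among the supported propagator values. A hedged sketch of such a CR, reusing the fields from the sample CR earlier in this chapter; the name and endpoint are placeholders:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: mesh-instrumentation        # placeholder name
spec:
  exporter:
    endpoint: http://otel-collector.observability.svc.cluster.local:4317  # placeholder OTLP endpoint
  propagators:
    - b3multi                       # required when used with Service Mesh
  sampler:
    type: parentbased_traceidratio
    argument: "0.25"
```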
[ "apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: \"20\" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: \"0.25\" java: env: - name: OTEL_JAVAAGENT_DEBUG value: \"true\"", "apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: configMapName: ca-bundle 2 ca_file: service-ca.crt 3", "apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: secretName: serving-certs 2 ca_file: service-ca.crt 3 cert_file: tls.crt 4 key_file: tls.key 5", "apiVersion: v1 kind: ConfigMap metadata: name: otelcol-cabundle namespace: tutorial-application annotations: service.beta.openshift.io/inject-cabundle: \"true\" --- apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: my-instrumentation spec: exporter: endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317 tls: configMapName: otelcol-cabundle ca: service-ca.crt", "instrumentation.opentelemetry.io/inject-apache-httpd: \"true\"", "instrumentation.opentelemetry.io/inject-dotnet: \"true\"", "instrumentation.opentelemetry.io/inject-go: \"true\"", "apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - \"SYS_PTRACE\" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny", "oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account>", "instrumentation.opentelemetry.io/inject-java: \"true\"", "instrumentation.opentelemetry.io/inject-nodejs: \"true\" instrumentation.opentelemetry.io/otel-go-auto-target-exe: \"/path/to/container/executable\"", "instrumentation.opentelemetry.io/inject-python: \"true\"", "instrumentation.opentelemetry.io/container-names: \"<container_1>,<container_2>\"", "instrumentation.opentelemetry.io/<application_language>-container-names: \"<container_1>,<container_2>\" 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/red_hat_build_of_opentelemetry/otel-configuration-of-instrumentation
8.79. java-1.6.0-openjdk
8.79. java-1.6.0-openjdk 8.79.1. RHBA-2013:1741 - java-1.6.0-openjdk bug fix and enhancement update Updated java-1.6.0-openjdk packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The java-1.6.0-openjdk packages provide the OpenJDK 6 Java Runtime Environment and the OpenJDK 6 Java Software Development Kit. Note The java-1.6.0-openjdk packages have been upgraded to upstream IcedTea version 1.13.0, which provides a number of bug fixes and enhancements over the previous version. (BZ# 983411 ) Bug Fix BZ# 976897 Previously, int[] objects allocated by instances of the com.sun.imageio.plugins.jpeg.JPEGImageWriter class were consuming extensive amounts of memory, which was consequently not released. With this update, the underlying stream processing logic has been modified to ensure correct releasing of such memory, and extensive memory consumption no longer occurs. Users of java-1.6.0-openjdk are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. All running instances of OpenJDK Java must be restarted for the update to take effect.
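A hedged example of applying this advisory on a Red Hat Enterprise Linux 6 host; the exact packages pulled in depend on the system's enabled repositories, and the service name is only a placeholder:

```bash
# Update the OpenJDK 6 packages provided by the RHBA-2013:1741 advisory
yum update java-1.6.0-openjdk

# Restart any running OpenJDK Java applications so they pick up the update,
# for example (service name is a placeholder):
service tomcat6 restart
```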
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/java-1.6.0-openjdk
7.289. xorg-x11-xkb-utils
7.289. xorg-x11-xkb-utils 7.289.1. RHBA-2013:0305 - xorg-x11-xkb-utils bug fix and enhancement update Updated xorg-x11-xkb-utils packages that fix several bugs and add various enhancements are now available. The x11-xkb-utils packages provide a set of client-side utilities for XKB, the X11 keyboard extension. Note The x11-xkb-utils packages have been upgraded to upstream version 7.7, which provides a number of bug fixes and enhancements over the previous version. (BZ# 835282 , BZ# 872057 ) All users of x11-xkb-utils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/xorg-x11-xkb-utils
Chapter 6. Installing and preparing the Operators
Chapter 6. Installing and preparing the Operators You install the Red Hat OpenStack Services on OpenShift (RHOSO) OpenStack Operator ( openstack-operator ) and create the RHOSO control plane on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. You install the OpenStack Operator by using the RHOCP web console. You perform the control plane installation tasks and all data plane creation tasks on a workstation that has access to the RHOCP cluster. 6.1. Prerequisites An operational RHOCP cluster, version 4.16. For the RHOCP system requirements, see Red Hat OpenShift Container Platform cluster requirements in Planning your deployment . The oc command line tool is installed on your workstation. You are logged in to the RHOCP cluster as a user with cluster-admin privileges. 6.2. Installing the OpenStack Operator You use OperatorHub on the Red Hat OpenShift Container Platform (RHOCP) web console to install the OpenStack Operator ( openstack-operator ) on your RHOCP cluster. Procedure Log in to the RHOCP web console as a user with cluster-admin permissions. Select Operators OperatorHub . In the Filter by keyword field, type OpenStack . Click the OpenStack Operator tile with the Red Hat source label. Read the information about the Operator and click Install . On the Install Operator page, select "Operator recommended Namespace: openstack-operators" from the Installed Namespace list. Click Install to make the Operator available to the openstack-operators namespace. The Operators are deployed and ready when the Status of the OpenStack Operator is Succeeded .
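Because the prerequisites include the oc command-line tool, the installation state can also be checked from a workstation rather than the web console. A hedged example; the ClusterServiceVersion name varies by Operator version and the output is abbreviated:

```bash
# The PHASE column of the OpenStack Operator CSV should report Succeeded
oc get csv -n openstack-operators

# Confirm that the Operator pods are running
oc get pods -n openstack-operators
```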
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_network_functions_virtualization_environment/assembly_installing-and-preparing-the-operators
Chapter 6. Network Configuration
Chapter 6. Network Configuration This chapter provides an introduction to the common networking configurations used by libvirt-based guest virtual machines. Red Hat Enterprise Linux 7 supports the following networking setups for virtualization: virtual networks using Network Address Translation ( NAT ) directly allocated physical devices using PCI device assignment directly allocated virtual functions using PCIe SR-IOV bridged networks You must enable NAT, network bridging, or directly assign a PCI device to allow external hosts access to network services on guest virtual machines. 6.1. Network Address Translation (NAT) with libvirt One of the most common methods for sharing network connections is to use Network Address Translation (NAT) forwarding (also known as virtual networks). Host Configuration Every standard libvirt installation provides NAT-based connectivity to virtual machines as the default virtual network. Verify that it is available with the virsh net-list --all command. If it is missing, the following can be used in the XML configuration file (such as /etc/libvirt/qemu/myguest.xml) for the guest: The default network is defined in /etc/libvirt/qemu/networks/default.xml Mark the default network to automatically start: Start the default network: Once the libvirt default network is running, you will see an isolated bridge device. This device does not have any physical interfaces added. The new device uses NAT and IP forwarding to connect to the physical network. Do not add new interfaces. libvirt adds iptables rules which allow traffic to and from guest virtual machines attached to the virbr0 device in the INPUT , FORWARD , OUTPUT and POSTROUTING chains. libvirt then attempts to enable the ip_forward parameter. Some other applications may disable ip_forward , so the best option is to add the following to /etc/sysctl.conf . Guest Virtual Machine Configuration Once the host configuration is complete, a guest virtual machine can be connected to the virtual network based on its name. To connect a guest to the 'default' virtual network, the following can be used in the XML configuration file (such as /etc/libvirt/qemu/myguest.xml ) for the guest: Note Defining a MAC address is optional. If you do not define one, a MAC address is automatically generated and used as the MAC address of the bridge device used by the network. Manually setting the MAC address may be useful to maintain consistency or easy reference throughout your environment, or to avoid the very small chance of a conflict.
[ "virsh net-list --all Name State Autostart ----------------------------------------- default active yes", "ll /etc/libvirt/qemu/ total 12 drwx------. 3 root root 4096 Nov 7 23:02 networks -rw-------. 1 root root 2205 Nov 20 01:20 r6.4.xml -rw-------. 1 root root 2208 Nov 8 03:19 r6.xml", "virsh net-autostart default Network default marked as autostarted", "virsh net-start default Network default started", "brctl show bridge name bridge id STP enabled interfaces virbr0 8000.000000000000 yes", "net.ipv4.ip_forward = 1", "<interface type='network'> <source network='default'/> </interface>", "<interface type='network'> <source network='default'/> <mac address='00:16:3e:1a:b3:4a'/> </interface>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-Network_configuration
3.0 Release Notes
3.0 Release Notes Red Hat Software Collections 3.0 Release Notes for Red Hat Software Collections 3.0 Lenka Spackova Red Hat Customer Content Services [email protected] Jaromir Hradilek Red Hat Customer Content Services [email protected] Eliska Slobodova Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.0_release_notes/index
Chapter 24. Storage
Chapter 24. Storage e2fsprogs component The e4defrag utility in the e2fsprogs package is not supported in Red Hat Enterprise Linux 7.0, and is scheduled to be removed in Red Hat Enterprise Linux 7.1. xfsprogs component If XFS metadata checksums are enabled by specifying the -m crc=1 option to the mkfs.xfs command, the kernel prints the following warning message when the file system is mounted: Note that this CRC functionality is available for testing in Red Hat Enterprise Linux 7.0 and is planned to be fully supported in Red Hat Enterprise Linux 7.1. cryptsetup component, BZ#883941 When the systemd daemon parses the crypttab file in case of a non-LUKS device for swap with a random key, it uses the ripemd160 hash by default, which is not allowed in FIPS mode. To work around this problem, add the hash= setting with an algorithm approved for FIPS to the particular crypttab line. For example: swap /dev/sda7 /dev/urandom swap,hash=sha1 . lvm2 component, BZ# 1083633 If the kernel thin provisioning code detects a device failure or runs out of metadata space, it sets a flag on the device to indicate that it needs to be checked. Currently, LVM tools do not perform this check automatically. To work around this problem, execute the thin_check --clear-needs-check-flag command to perform the check and remove the flag. Then run the thin_repair command if necessary. Alternatively, you can add --clear-needs-check-flag to thin_check_options in the global section of the /etc/lvm/lvm.conf configuration file to run the check automatically. device-mapper-multipath component, BZ#1066264 In Red Hat Enterprise Linux 7, the behavior of the kpartx utility has been changed, so it no longer adds the letter p as a delimiter between a device name and a partition suffix unless the device name ends with a digit. When a device is renamed using the multipath utility, the device name is simply replaced by a new name while the suffix stays unchanged, regardless of the delimiter. Consequently, the kpartx behavior is not followed, and changing device names ending in a digit to names ending in a letter, or the other way round, works incorrectly when using multipath to rename devices. To work around this problem, choose one of the following three options: Remove the multipathd daemon and add the devices again; Remove the devices manually by using the kpartx -d command and then add them by running the partx -a command; Rename the devices by using the kpartx -p p command for device names that are supposed to contain the delimiter and they do not, and the kpartx -p "" command in cases when the delimiter is used redundantly. snapper component, BZ# 1071973 The empty-pre-post cleanup algorithm, which is used for deleting pre-post pairs of file system snapshots with empty diffs, does not work in Red Hat Enterprise Linux 7. To work around this problem, remove empty pre-post snapshot pairs manually by using the delete command. kernel component, BZ#1084859 The bigalloc feature for the ext4 file systems, which enables ext4 to use clustered allocation, is not supported in Red Hat Enterprise Linux 7. lvm2 component, BZ#1083835 Placing the /boot partition on an LVM volume is not supported. Placing the /boot partition on a Btrfs subvolume is not supported either. nfs-utils component, BZ#1082746 While the rpc.svcgssd binary is included in the nfs-utils package in Red Hat Enterprise Linux 7.0, its use in new deployments is discouraged in favor of gssproxy . The rpc.svcgssd binary may be removed in later Red Hat Enterprise Linux 7 releases.
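For the lvm2 (BZ# 1083633) item above, the automatic check can be enabled with a snippet like the following in /etc/lvm/lvm.conf. This is a hedged sketch that assumes the common default of "-q" in thin_check_options; keep whatever options your existing configuration already sets:

```
global {
    # Run thin_check with --clear-needs-check-flag so the needs-check flag is cleared automatically
    thin_check_options = [ "-q", "--clear-needs-check-flag" ]
}
```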
kernel component, BZ#1061871, BZ#1201247 When a storage array returns a CHECK CONDITION status but the sense data is invalid, the Small Computer Systems Interface (SCSI) mid-layer code retries the I/O operation. If subsequent I/O operations receive the same result, I/O operations are retried indefinitely. For this bug, no workaround is currently available.
[ "Version 5 superblock detected. This kernel has EXPERIMENTAL support enabled! Use of these features in this kernel is at your own risk!" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/known-issues-storage
Chapter 7. Configure Network Bonding
Chapter 7. Configure Network Bonding Red Hat Enterprise Linux 7 allows administrators to bind multiple network interfaces together into a single, bonded channel. Channel bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy. Warning The use of direct cable connections without network switches is not supported for bonding. The failover mechanisms described here will not work as expected without the presence of network switches. See the Red Hat Knowledgebase article Why is bonding not supported with direct connection using crossover cables? for more information. Note The active-backup, balance-tlb and balance-alb modes do not require any specific configuration of the switch. Other bonding modes require configuring the switch to aggregate the links. For example, a Cisco switch requires EtherChannel for Modes 0, 2, and 3, but for Mode 4 LACP and EtherChannel are required. See the documentation supplied with your switch and see https://www.kernel.org/doc/Documentation/networking/bonding.txt for more information. 7.1. Understanding the Default Behavior of Controller and Port Interfaces When controlling bonded port interfaces using the NetworkManager daemon, and especially when fault finding, keep the following in mind: Starting the controller interface does not automatically start the port interfaces. Starting a port interface always starts the controller interface. Stopping the controller interface also stops the port interfaces. A controller without ports can start static IP connections. A controller without ports waits for ports when starting DHCP connections. A controller with a DHCP connection waiting for ports completes when a port with a carrier is added. A controller with a DHCP connection waiting for ports continues waiting when a port without a carrier is added.
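As a hedged illustration of the controller and port terminology in section 7.1, the following NetworkManager commands create a bond controller and attach two ports. The connection and interface names (bond0, ens3, ens7) are placeholders, the active-backup mode is only an example, and the exact nmcli syntax may vary between minor releases:

```bash
# Create the bond controller interface
nmcli con add type bond con-name bond0 ifname bond0 mode active-backup

# Attach two port (slave) interfaces to the controller
nmcli con add type bond-slave con-name bond0-port1 ifname ens3 master bond0
nmcli con add type bond-slave con-name bond0-port2 ifname ens7 master bond0

# Bringing up a port also brings up the controller
nmcli con up bond0-port1
```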
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/ch-configure_network_bonding
Part I. Viewing and managing your subscription inventory
Part I. Viewing and managing your subscription inventory The Subscription Inventory page on the Red Hat Hybrid Cloud Console provides information and resources to help you manage the account-level subscriptions in your inventory. Specifically, you can view details and status information about each subscription, learn more about subscriptions and how to manage them, and explore purchasing and no-cost trial options. Subscription-level details (name, SKU, quantity, and service level) about each subscription in your inventory are presented in the All subscriptions table. These details, plus the account support type and subscription capacity, are also presented in a list on the subscription details page. The account number is displayed in the table title. The subscription status types (Active, Expired, Expiring soon, and Future dated) and the number of subscriptions in your inventory with each status are displayed in a set of tiles on the Subscription inventory page. There are multiple ways to view and organize the subscription information in the All subscriptions table. You can use the table to sort the subscriptions alphabetically or numerically by name, SKU, quantity, or service level. You can use the tiles to filter your subscriptions by status (active, expired, expiring soon, or future dated). You can use the search bar to filter your subscriptions by name or SKU. Authorized users can use the Subscription Inventory page to manage your subscriptions and interact with your Red Hat account team, as needed, to maintain adequate subscription quantities and types for your account.
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/viewing_and_managing_your_subscription_inventory_on_the_hybrid_cloud_console/assembly-viewing-managing-sub-inventory
Removing OpenShift Serverless
Removing OpenShift Serverless Red Hat OpenShift Serverless 1.35 Removing Serverless from your cluster Red Hat OpenShift Documentation Team
[ "oc delete knativeeventings.operator.knative.dev knative-eventing -n knative-eventing", "oc delete namespace knative-eventing", "oc delete knativeservings.operator.knative.dev knative-serving -n knative-serving", "oc delete namespace knative-serving", "oc get crd -oname | grep 'knative.dev' | xargs oc delete" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html-single/removing_openshift_serverless/index
Chapter 9. Server Message Block (SMB)
Chapter 9. Server Message Block (SMB) The Server Message Block (SMB) protocol is an application-layer network protocol used to access resources on a server, such as file shares and shared printers. On Microsoft Windows, SMB is implemented by default. If you run Red Hat Enterprise Linux, use Samba to provide SMB shares and the cifs-utils utility to mount SMB shares from a remote server. Note In the context of SMB, you sometimes read about the Common Internet File System (CIFS) protocol, which is a dialect of SMB. Both the SMB and CIFS protocols are supported, and the kernel module and utilities involved in mounting SMB and CIFS shares both use the name cifs . 9.1. Providing SMB Shares See the Samba section in the Red Hat System Administrator's Guide .
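A hedged example of mounting a remote SMB share with the mount.cifs helper provided by cifs-utils; the server, share, mount point, and user names are placeholders:

```bash
# Mount an SMB/CIFS share from a remote server (prompts for the password)
mkdir -p /mnt/share
mount -t cifs //server.example.com/share /mnt/share -o username=exampleuser
```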
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-server_message_block-smb
Chapter 4. Configuring persistent storage
Chapter 4. Configuring persistent storage 4.1. Persistent storage using AWS Elastic Block Store OpenShift Container Platform supports AWS Elastic Block Store volumes (EBS). You can provision your OpenShift Container Platform cluster with persistent storage by using Amazon EC2 . Some familiarity with Kubernetes and AWS is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. AWS Elastic Block Store volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important High-availability of storage in the infrastructure is left to the underlying storage provider. 4.1.1. Additional resources See AWS Elastic Block Store CSI Driver Operator for information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins. 4.1.2. Creating the EBS storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. Procedure In the OpenShift Container Platform console, click Storage Storage Classes . In the storage class overview, click Create Storage Class . Define the desired options on the page that appears. Enter a name to reference the storage class. Enter an optional description. Select the reclaim policy. Select kubernetes.io/aws-ebs from the drop-down list. Note To create the storage class with the equivalent CSI driver, select ebs.csi.aws.com from the drop-down list. For more details, see AWS Elastic Block Store CSI Driver Operator . Enter additional parameters for the storage class as desired. Click Create to create the storage class. 4.1.3. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the storage class created previously from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This determines the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.1.4. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted AWS volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.1.5. Maximum number of EBS volumes on a node By default, OpenShift Container Platform supports a maximum of 39 EBS volumes attached to one node. This limit is consistent with the AWS volume limits . The volume limit depends on the instance type. 
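The storage class created through the console in section 4.1.2 corresponds to a StorageClass object using the in-tree kubernetes.io/aws-ebs provisioner named in that procedure. A minimal hedged sketch; the class name, gp2 volume type, and reclaim policy are assumptions:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-standard            # placeholder name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2                     # assumed EBS volume type
reclaimPolicy: Delete           # assumed reclaim policy
```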
Important As a cluster administrator, you must use either in-tree or Container Storage Interface (CSI) volumes and their respective storage classes, but never both volume types at the same time. The maximum attached EBS volume number is counted separately for in-tree and CSI volumes. 4.1.6. Additional resources See AWS Elastic Block Store CSI Driver Operator for information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins. 4.2. Persistent storage using Azure OpenShift Container Platform supports Microsoft Azure Disk volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Azure Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources Microsoft Azure Disk 4.2.1. Creating the Azure storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. Procedure In the OpenShift Container Platform console, click Storage Storage Classes . In the storage class overview, click Create Storage Class . Define the desired options on the page that appears. Enter a name to reference the storage class. Enter an optional description. Select the reclaim policy. Select kubernetes.io/azure-disk from the drop down list. Enter the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Premium_LRS , Standard_LRS , StandardSSD_LRS , and UltraSSD_LRS . Enter the kind of account. Valid options are shared , dedicated, and managed . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. Enter additional parameters for the storage class as desired. Click Create to create the storage class. Additional resources Azure Disk Storage Class 4.2.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the storage class created previously from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This determines the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 
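Similarly, the Azure storage class from section 4.2.1 can be expressed as a StorageClass object with the kubernetes.io/azure-disk provisioner and the supported kind: Managed. A hedged sketch; the class name and the SKU tier value are assumptions:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-managed-premium       # placeholder name
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed                     # only Managed is supported by Red Hat
  storageaccounttype: Premium_LRS   # assumed storage account SKU tier
```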
4.2.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted Azure volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.3. Persistent storage using Azure File OpenShift Container Platform supports Microsoft Azure File volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can provision Azure File volumes dynamically. Persistent volumes are not bound to a single project or namespace, and you can share them across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace, and can be requested by users for use in applications. Important High availability of storage in the infrastructure is left to the underlying storage provider. Important Azure File volumes use Server Message Block. Additional resources Azure Files 4.3.1. Create the Azure File share persistent volume claim To create the persistent volume claim, you must first define a Secret object that contains the Azure account and key. This secret is used in the PersistentVolume definition, and will be referenced by the persistent volume claim for use in applications. Prerequisites An Azure File share exists. The credentials to access this share, specifically the storage account and key, are available. Procedure Create a Secret object that contains the Azure File credentials: USD oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2 1 The Azure File storage account name. 2 The Azure File storage account key. Create a PersistentVolume object that references the Secret object you created: apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false 1 The name of the persistent volume. 2 The size of this persistent volume. 3 The name of the secret that contains the Azure File share credentials. 4 The name of the Azure File share. Create a PersistentVolumeClaim object that maps to the persistent volume you created: apiVersion: "v1" kind: "PersistentVolumeClaim" metadata: name: "claim1" 1 spec: accessModes: - "ReadWriteOnce" resources: requests: storage: "5Gi" 2 storageClassName: azure-file-sc 3 volumeName: "pv0001" 4 1 The name of the persistent volume claim. 2 The size of this persistent volume claim. 3 The name of the storage class that is used to provision the persistent volume. Specify the storage class used in the PersistentVolume definition. 4 The name of the existing PersistentVolume object that references the Azure File share. 4.3.2. Mount the Azure File share in a pod After the persistent volume claim has been created, it can be used inside by an application. 
The following example demonstrates mounting this share inside of a pod. Prerequisites A persistent volume claim exists that is mapped to the underlying Azure File share. Procedure Create a pod that mounts the existing persistent volume claim: apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: ... volumeMounts: - mountPath: "/data" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3 1 The name of the pod. 2 The path to mount the Azure File share inside the pod. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 The name of the PersistentVolumeClaim object that has been previously created. 4.4. Persistent storage using Cinder OpenShift Container Platform supports OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed. Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Additional resources For more information about how OpenStack Block Storage provides persistent block storage management for virtual hard drives, see OpenStack Cinder . 4.4.1. Manual provisioning with Cinder Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Prerequisites OpenShift Container Platform configured for Red Hat OpenStack Platform (RHOSP) Cinder volume ID 4.4.1.1. Creating the persistent volume You must define your persistent volume (PV) in an object definition before creating it in OpenShift Container Platform: Procedure Save your object definition to a file. cinder-persistentvolume.yaml apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" cinder: 3 fsType: "ext3" 4 volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180" 5 1 The name of the volume that is used by persistent volume claims or pods. 2 The amount of storage allocated to this volume. 3 Indicates cinder for Red Hat OpenStack Platform (RHOSP) Cinder volumes. 4 The file system that is created when the volume is mounted for the first time. 5 The Cinder volume to use. Important Do not change the fstype parameter value after the volume is formatted and provisioned. Changing this value can result in data loss and pod failure. Create the object definition file you saved in the step. USD oc create -f cinder-persistentvolume.yaml 4.4.1.2. Persistent volume formatting You can use unformatted Cinder volumes as PVs because OpenShift Container Platform formats them before the first use. Before OpenShift Container Platform mounts the volume and passes it to a container, the system checks that it contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. 4.4.1.3. Cinder volume security If you use Cinder PVs in your application, configure security for their deployment configurations. Prerequisites An SCC must be created that uses the appropriate fsGroup strategy. 
Procedure Create a service account and add it to the SCC: USD oc create serviceaccount <service_account> USD oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project> In your application's deployment configuration, provide the service account name and securityContext : apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7 1 The number of copies of the pod to run. 2 The label selector of the pod to run. 3 A template for the pod that the controller creates. 4 The labels on the pod. They must include labels from the label selector. 5 The maximum name length after expanding any parameters is 63 characters. 6 Specifies the service account you created. 7 Specifies an fsGroup for the pods. 4.5. Persistent storage using Fibre Channel OpenShift Container Platform supports Fibre Channel, allowing you to provision your OpenShift Container Platform cluster with persistent storage using Fibre channel volumes. Some familiarity with Kubernetes and Fibre Channel is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources Using Fibre Channel devices 4.5.1. Provisioning To provision Fibre Channel volumes using the PersistentVolume API the following must be available: The targetWWNs (array of Fibre Channel target's World Wide Names). A valid LUN number. The filesystem type. A persistent volume and a LUN have a one-to-one mapping between them. Prerequisites Fibre Channel LUNs must exist in the underlying infrastructure. PersistentVolume object definition apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4 1 World wide identifiers (WWIDs). Either FC wwids or a combination of FC targetWWNs and lun must be set, but not both simultaneously. The FC WWID identifier is recommended over the WWNs target because it is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. The WWID identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data ( page 0x83 ) or Unit Serial Number ( page 0x80 ). FC WWIDs are identified as /dev/disk/by-id/ to reference the data on the disk, even if the path to the device changes and even when accessing the device from different systems. 2 3 Fibre Channel WWNs are identified as /dev/disk/by-path/pci-<IDENTIFIER>-fc-0x<WWN>-lun-<LUN#> , but you do not need to provide any part of the path leading up to the WWN , including the 0x , and anything after, including the - (hyphen). 
Important Changing the value of the fstype parameter after the volume has been formatted and provisioned can result in data loss and pod failure. 4.5.1.1. Enforcing disk quotas Use LUN partitions to enforce disk quotas and size constraints. Each LUN is mapped to a single persistent volume, and unique names must be used for persistent volumes. Enforcing quotas in this way allows the end user to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity. 4.5.1.2. Fibre Channel volume security Users request storage with a persistent volume claim. This claim only lives in the user's namespace, and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume across a namespace causes the pod to fail. Each Fibre Channel LUN must be accessible by all nodes in the cluster. 4.6. Persistent storage using FlexVolume OpenShift Container Platform supports FlexVolume, an out-of-tree plug-in that uses an executable model to interface with drivers. To use storage from a back-end that does not have a built-in plug-in, you can extend OpenShift Container Platform through FlexVolume drivers and provide persistent storage to applications. Pods interact with FlexVolume drivers through the flexvolume in-tree plugin. Additional resources Expanding persistent volumes 4.6.1. About FlexVolume drivers A FlexVolume driver is an executable file that resides in a well-defined directory on all nodes in the cluster. OpenShift Container Platform calls the FlexVolume driver whenever it needs to mount or unmount a volume represented by a PersistentVolume object with flexVolume as the source. Important Attach and detach operations are not supported in OpenShift Container Platform for FlexVolume. 4.6.2. FlexVolume driver example The first command-line argument of the FlexVolume driver is always an operation name. Other parameters are specific to each operation. Most of the operations take a JavaScript Object Notation (JSON) string as a parameter. This parameter is a complete JSON string, and not the name of a file with the JSON data. The FlexVolume driver contains: All flexVolume.options . Some options from flexVolume prefixed by kubernetes.io/ , such as fsType and readwrite . The content of the referenced secret, if specified, prefixed by kubernetes.io/secret/ . FlexVolume driver JSON input example { "fooServer": "192.168.0.1:1234", 1 "fooVolumeName": "bar", "kubernetes.io/fsType": "ext4", 2 "kubernetes.io/readwrite": "ro", 3 "kubernetes.io/secret/<key name>": "<key value>", 4 "kubernetes.io/secret/<another key name>": "<another key value>", } 1 All options from flexVolume.options . 2 The value of flexVolume.fsType . 3 ro / rw based on flexVolume.readOnly . 4 All keys and their values from the secret referenced by flexVolume.secretRef . OpenShift Container Platform expects JSON data on standard output of the driver. When not specified, the output describes the result of the operation. FlexVolume driver default output example { "status": "<Success/Failure/Not supported>", "message": "<Reason for success/failure>" } Exit code of the driver should be 0 for success and 1 for error. Operations should be idempotent, which means that the mounting of an already mounted volume should result in a successful operation. 4.6.3. Installing FlexVolume drivers FlexVolume drivers that are used to extend OpenShift Container Platform are executed only on the node. 
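To make this calling convention concrete, the following is a minimal shell sketch of a driver that answers the basic operations and prints the default JSON output described above. It is illustrative only, not a supported or production-ready driver; device handling is omitted and the JSON parsing hinted at in the comments (for example with a tool such as jq) is an assumption:

#!/bin/bash
# The first argument is always the operation name.
op=$1
shift
case "$op" in
  init)
    # Report success and state that attach and detach are not implemented.
    echo '{"status": "Success", "capabilities": {"attach": false}}'
    exit 0
    ;;
  mount)
    MNTPATH=$1
    JSON_OPTS=$2
    # A real driver would parse ${JSON_OPTS} and mount the backing device here.
    mkdir -p "${MNTPATH}"
    echo '{"status": "Success"}'
    exit 0
    ;;
  unmount)
    MNTPATH=$1
    umount "${MNTPATH}" > /dev/null 2>&1
    echo '{"status": "Success"}'
    exit 0
    ;;
  *)
    echo '{"status": "Not supported"}'
    exit 1
    ;;
esac

The script writes its JSON result to standard output and uses exit code 0 for success and 1 otherwise, matching the contract described above.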
To implement FlexVolumes, a list of operations to call and the installation path are all that is required. Prerequisites FlexVolume drivers must implement these operations: init Initializes the driver. It is called during initialization of all nodes. Arguments: none Executed on: node Expected output: default JSON mount Mounts a volume to directory. This can include anything that is necessary to mount the volume, including finding the device and then mounting the device. Arguments: <mount-dir> <json> Executed on: node Expected output: default JSON unmount Unmounts a volume from a directory. This can include anything that is necessary to clean up the volume after unmounting. Arguments: <mount-dir> Executed on: node Expected output: default JSON mountdevice Mounts a volume's device to a directory where individual pods can then bind mount. This call-out does not pass "secrets" specified in the FlexVolume spec. If your driver requires secrets, do not implement this call-out. Arguments: <mount-dir> <json> Executed on: node Expected output: default JSON unmountdevice Unmounts a volume's device from a directory. Arguments: <mount-dir> Executed on: node Expected output: default JSON All other operations should return JSON with {"status": "Not supported"} and exit code 1 . Procedure To install the FlexVolume driver: Ensure that the executable file exists on all nodes in the cluster. Place the executable file at the volume plug-in path: /etc/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver> . For example, to install the FlexVolume driver for the storage foo , place the executable file at: /etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo . 4.6.4. Consuming storage using FlexVolume drivers Each PersistentVolume object in OpenShift Container Platform represents one storage asset in the storage back-end, such as a volume. Procedure Use the PersistentVolume object to reference the installed storage. Persistent volume object definition using FlexVolume drivers example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: "ext4" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar 1 The name of the volume. This is how it is identified through persistent volume claims or from pods. This name can be different from the name of the volume on back-end storage. 2 The amount of storage allocated to this volume. 3 The name of the driver. This field is mandatory. 4 The file system that is present on the volume. This field is optional. 5 The reference to a secret. Keys and values from this secret are provided to the FlexVolume driver on invocation. This field is optional. 6 The read-only flag. This field is optional. 7 The additional options for the FlexVolume driver. In addition to the flags specified by the user in the options field, the following flags are also passed to the executable: Note Secrets are passed only to mount or unmount call-outs. 4.7. Persistent storage using GCE Persistent Disk OpenShift Container Platform supports GCE Persistent Disk volumes (gcePD). You can provision your OpenShift Container Platform cluster with persistent storage using GCE. Some familiarity with Kubernetes and GCE is assumed. 
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. GCE Persistent Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources GCE Persistent Disk 4.7.1. Creating the GCE storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. Procedure In the OpenShift Container Platform console, click Storage Storage Classes . In the storage class overview, click Create Storage Class . Define the desired options on the page that appears. Enter a name to reference the storage class. Enter an optional description. Select the reclaim policy. Select kubernetes.io/gce-pd from the drop-down list. Enter additional parameters for the storage class as desired. Click Create to create the storage class. 4.7.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the storage class created previously from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This determines the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.7.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted GCE volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.8. Persistent storage using hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it. Important The cluster administrator must configure pods to run as privileged. This grants access to pods in the same node. 4.8.1. Overview OpenShift Container Platform supports hostPath mounting for development and testing on a single-node cluster. In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning. A hostPath volume must be provisioned statically. 
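Because hostPath pods typically run as privileged, the service account that runs them usually needs access to the privileged SCC. As an illustrative sketch only, assuming the default service account in a test project, a cluster administrator could grant that access with:

USD oc adm policy add-scc-to-user privileged -z default -n <project>

This is a broad grant and is only appropriate for the single-node development and testing scenario described here.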
Important Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged. It is safe to mount the host by using /host . The following example shows the / directory from the host being mounted into the container at /host . apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi8/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: '' 4.8.2. Statically provisioning hostPath volumes A pod that uses a hostPath volume must be referenced by manual (static) provisioning. Procedure Define the persistent volume (PV). Create a file, pv.yaml , with the PersistentVolume object definition: apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: "/mnt/data" 4 1 The name of the volume. This name is how it is identified by persistent volume claims or pods. 2 Used to bind persistent volume claim requests to this persistent volume. 3 The volume can be mounted as read-write by a single node. 4 The configuration file specifies that the volume is at /mnt/data on the cluster's node. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system. It is safe to mount the host by using /host . Create the PV from the file: USD oc create -f pv.yaml Define the persistent volume claim (PVC). Create a file, pvc.yaml , with the PersistentVolumeClaim object definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual Create the PVC from the file: USD oc create -f pvc.yaml 4.8.3. Mounting the hostPath share in a privileged pod After the persistent volume claim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. Prerequisites A persistent volume claim exists that is mapped to the underlying hostPath share. Procedure Create a privileged pod that mounts the existing persistent volume claim: apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: ... securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged ... securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4 1 The name of the pod. 2 The pod must run as privileged to access the node's storage. 3 The path to mount the host path share inside the privileged pod. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 4 The name of the PersistentVolumeClaim object that has been previously created. 4.9. Persistent storage using iSCSI You can provision your OpenShift Container Platform cluster with persistent storage using iSCSI . Some familiarity with Kubernetes and iSCSI is assumed. 
The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Important High-availability of storage in the infrastructure is left to the underlying storage provider. Important When you use iSCSI on Amazon Web Services, you must update the default security policy to include TCP traffic between nodes on the iSCSI ports. By default, they are ports 860 and 3260 . Important Users must ensure that the iSCSI initiator is already configured on all OpenShift Container Platform nodes by installing the iscsi-initiator-utils package and configuring their initiator name in /etc/iscsi/initiatorname.iscsi . The iscsi-initiator-utils package is already installed on deployments that use Red Hat Enterprise Linux CoreOS (RHCOS). For more information, see Managing Storage Devices . 4.9.1. Provisioning Verify that the storage exists in the underlying infrastructure before mounting it as a volume in OpenShift Container Platform. All that is required for the iSCSI is the iSCSI target portal, a valid iSCSI Qualified Name (IQN), a valid LUN number, the filesystem type, and the PersistentVolume API. PersistentVolume object definition apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4' 4.9.2. Enforcing disk quotas Use LUN partitions to enforce disk quotas and size constraints. Each LUN is one persistent volume. Kubernetes enforces unique names for persistent volumes. Enforcing quotas in this way allows the end user to request persistent storage by a specific amount (for example, 10Gi ) and be matched with a corresponding volume of equal or greater capacity. 4.9.3. iSCSI volume security Users request storage with a PersistentVolumeClaim object. This claim only lives in the user's namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume claim across a namespace causes the pod to fail. Each iSCSI LUN must be accessible by all nodes in the cluster. 4.9.3.1. Challenge Handshake Authentication Protocol (CHAP) configuration Optionally, OpenShift can use CHAP to authenticate itself to iSCSI targets: apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3 1 Enable CHAP authentication of iSCSI discovery. 2 Enable CHAP authentication of iSCSI session. 3 Specify name of Secrets object with user name + password. This Secret object must be available in all namespaces that can use the referenced volume. 4.9.4. iSCSI multipathing For iSCSI-based storage, you can configure multiple paths by using the same IQN for more than one target portal IP address. Multipathing ensures access to the persistent volume when one or more of the components in a path fail. To specify multi-paths in the pod specification, use the portals field. 
For example: apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false 1 Add additional target portals using the portals field. 4.9.5. iSCSI custom initiator IQN Configure the custom initiator iSCSI Qualified Name (IQN) if the iSCSI targets are restricted to certain IQNs, but the nodes that the iSCSI PVs are attached to are not guaranteed to have these IQNs. To specify a custom initiator IQN, use initiatorName field. apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false 1 Specify the name of the initiator. 4.10. Persistent storage using local volumes OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface. Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications. Note Local volumes can only be used as a statically created persistent volume. 4.10.1. Installing the Local Storage Operator The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster. Prerequisites Access to the OpenShift Container Platform web console or command-line interface (CLI). Procedure Create the openshift-local-storage project: USD oc adm new-project openshift-local-storage Optional: Allow local storage creation on infrastructure nodes. You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring. You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes. To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command: USD oc annotate project openshift-local-storage openshift.io/node-selector='' From the UI To install the Local Storage Operator from the web console, follow these steps: Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Type Local Storage into the filter box to locate the Local Storage Operator. Click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-local-storage from the drop-down menu. Adjust the values for Update Channel and Approval Strategy to the values that you want. Click Install . Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console. From the CLI Install the Local Storage Operator from the CLI. Run the following command to get the OpenShift Container Platform major and minor version. It is required for the channel value in the step. 
USD OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | \ grep -o '[0-9]*[.][0-9]*' | head -1) Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml : Example openshift-local-storage.yaml apiVersion: operators.coreos.com/v1alpha2 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: "USD{OC_VERSION}" installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace 1 The user approval policy for an install plan. Create the Local Storage Operator object by entering the following command: USD oc apply -f openshift-local-storage.yaml At this point, the Operator Lifecycle Manager (OLM) is now aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Verify local storage installation by checking that all pods and the Local Storage Operator have been created: Check that all the required pods have been created: USD oc -n openshift-local-storage get pods Example output NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project: USD oc get csvs -n openshift-local-storage Example output NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded After all checks have passed, the Local Storage Operator is installed successfully. 4.10.2. Provisioning local volumes by using the Local Storage Operator Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource. Prerequisites The Local Storage Operator is installed. You have a local disk that meets the following conditions: It is attached to a node. It is not mounted. It does not contain partitions. Procedure Create the local volume resource. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs). Example: Filesystem apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: "local-sc" 3 volumeMode: Filesystem 4 fsType: xfs 5 devicePaths: 6 - /path/to/device 7 1 The namespace where the Local Storage Operator is installed. 2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 3 The name of the storage class to use when creating persistent volume objects. 
The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes. 4 The volume mode, either Filesystem or Block , that defines the type of local volumes. 5 The file system that is created when the local volume is mounted for the first time. 6 The path containing a list of local storage devices to choose from. 7 Replace this value with your actual local disks filepath to the LocalVolume resource by-id , such as /dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Note A raw block volume ( volumeMode: block ) is not formatted with a file system. You should use this mode only if any application running on the pod can use raw block devices. Example: Block apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: "localblock-sc" 3 volumeMode: Block 4 devicePaths: 5 - /path/to/device 6 1 The namespace where the Local Storage Operator is installed. 2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 3 The name of the storage class to use when creating persistent volume objects. 4 The volume mode, either Filesystem or Block , that defines the type of local volumes. 5 The path containing a list of local storage devices to choose from. 6 Replace this value with your actual local disks filepath to the LocalVolume resource by-id , such as dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created: USD oc create -f <local-volume>.yaml Verify that the provisioner was created and that the corresponding daemon sets were created: USD oc get all -n openshift-local-storage Example output NAME READY STATUS RESTARTS AGE pod/local-disks-local-provisioner-h97hj 1/1 Running 0 46m pod/local-disks-local-provisioner-j4mnn 1/1 Running 0 46m pod/local-disks-local-provisioner-kbdnx 1/1 Running 0 46m pod/local-disks-local-diskmaker-ldldw 1/1 Running 0 46m pod/local-disks-local-diskmaker-lvrv4 1/1 Running 0 46m pod/local-disks-local-diskmaker-phxdq 1/1 Running 0 46m pod/local-storage-operator-54564d9988-vxvhx 1/1 Running 0 47m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator ClusterIP 172.30.49.90 <none> 60000/TCP 47m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/local-disks-local-provisioner 3 3 3 3 3 <none> 46m daemonset.apps/local-disks-local-diskmaker 3 3 3 3 3 <none> 46m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 47m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-54564d9988 1 1 1 47m Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid. 
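If the desired count is lower than expected, you can check which nodes the diskmaker and provisioner pods were scheduled to, which helps confirm whether the node selector matched the intended nodes. For example:

USD oc -n openshift-local-storage get pods -o wide

The -o wide output includes the node name for each pod.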
Verify that the persistent volumes were created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m Important Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation. 4.10.3. Provisioning local volumes without the Local Storage Operator Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource. Important Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs. Prerequisites Local disks are attached to the OpenShift Container Platform nodes. Procedure Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml , with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so will create multiple PVs. example-pv-filesystem.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node 1 The volume mode, either Filesystem or Block , that defines the type of PVs. 2 The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs. 3 The path containing a list of local storage devices to choose from. Note A raw block volume ( volumeMode: block ) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices. example-pv-block.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node 1 The volume mode, either Filesystem or Block , that defines the type of PVs. 2 The name of the storage class to use when creating PV resources. Be sure to use a storage class that uniquely identifies this set of PVs. 3 The path containing a list of local storage devices to choose from. Create the PV resource in your OpenShift Container Platform cluster. 
Specify the file you just created: USD oc create -f <example-pv>.yaml Verify that the local PV was created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-storage 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-storage 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-storage 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-storage 12h 4.10.4. Creating the local volume persistent volume claim Local volumes must be statically created as a persistent volume claim (PVC) before they can be accessed by a pod. Prerequisites Persistent volumes have been created using the local volume provisioner. Procedure Create the PVC using the corresponding storage class: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4 1 Name of the PVC. 2 The type of the PVC. Defaults to Filesystem . 3 The amount of storage available to the PVC. 4 Name of the storage class required by the claim. Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created: USD oc create -f <local-pvc>.yaml 4.10.5. Attach the local claim After a local volume has been mapped to a persistent volume claim, it can be specified inside a resource. Prerequisites A persistent volume claim exists in the same namespace. Procedure Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod: apiVersion: v1 kind: Pod spec: ... containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: local-disks persistentVolumeClaim: claimName: local-pvc-name 3 1 The name of the volume to mount. It must match the name of a volume declared in the pod's volumes list. 2 The path inside the pod where the volume is mounted. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 The name of the existing persistent volume claim to use. Create the resource in the OpenShift Container Platform cluster, specifying the file you just created: USD oc create -f <local-pod>.yaml 4.10.6. Automating discovery and provisioning for local storage devices The Local Storage Operator automates local storage discovery and provisioning. With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices. Important Automatic discovery and provisioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices. Warning Use the LocalVolumeSet object with caution.
When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all devices that match. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node. Prerequisites You have cluster administrator permissions. You have installed the Local Storage Operator. You have attached local disks to OpenShift Container Platform nodes. You have access to the OpenShift Container Platform web console and the oc command-line interface (CLI). Procedure To enable automatic discovery of local devices from the web console: In the Administrator perspective, navigate to Operators Installed Operators and click on the Local Volume Discovery tab. Click Create Local Volume Discovery . Select either All nodes or Select nodes , depending on whether you want to discover available disks on all or specific nodes. Note Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes . Click Create . A local volume discovery instance named auto-discover-devices is displayed. To display a continuous list of available devices on a node: Log in to the OpenShift Container Platform web console. Navigate to Compute Nodes . Click the node name that you want to open. The "Node Details" page is displayed. Select the Disks tab to display the list of the selected devices. The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode. To automatically provision local volumes for the discovered devices from the web console: Navigate to Operators Installed Operators and select Local Storage from the list of Operators. Select Local Volume Set Create Local Volume Set . Enter a volume set name and a storage class name. Choose All nodes or Select nodes to apply filters accordingly. Note Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes . Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create . A message displays after several minutes, indicating that the "Operator reconciled successfully." Alternatively, to provision local volumes for the discovered devices from the CLI: Create an object YAML file to define the local volume set, such as local-volume-set.yaml , as shown in the following example: apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: example-storageclass 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM 1 Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes. 2 When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices. 
Create the local volume set object: USD oc apply -f local-volume-set.yaml Verify that the local persistent volumes were dynamically provisioned based on the storage class: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available example-storageclass 88m local-pv-2ef7cd2a 100Gi RWO Delete Available example-storageclass 82m local-pv-3fa1c73 100Gi RWO Delete Available example-storageclass 48m Note Results are deleted after they are removed from the node. Symlinks must be manually removed. 4.10.7. Using tolerations with Local Storage Operator pods Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes. You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node. Important Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed as key=value:effect . An operator allows you to leave one of these parameters empty. Prerequisites The Local Storage Operator is installed. Local disks are attached to OpenShift Container Platform nodes with a taint. Tainted nodes are expected to provision local storage. Procedure To configure local volumes for scheduling on tainted nodes: Modify the YAML file that defines the Pod and add the LocalVolume spec, as shown in the following example: apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: "localstorage" 3 storageClassDevices: - storageClassName: "localblock-sc" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg 1 Specify the key that you added to the node. 2 Specify the Equal operator to require the key / value parameters to match. If operator is Exists , the system checks that the key exists and ignores the value. If operator is Equal , then the key and value must match. 3 Specify the value local of the tainted node. 4 The volume mode, either Filesystem or Block , defining the type of the local volumes. 5 The path containing a list of local storage devices to choose from. Optional: To create local persistent volumes on only tainted nodes, modify the YAML file and add the LocalVolume spec, as shown in the following example: spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints. 4.10.8. Deleting the Local Storage Operator resources 4.10.8.1. Removing a local volume or local volume set Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed. Note The following procedure outlines an example for removing a local volume. 
The same procedure can also be used to remove symlinks for a local volume set custom resource. Prerequisites The persistent volume must be in a Released or Available state. Warning Deleting a persistent volume that is still in use can result in data loss or corruption. Procedure Edit the previously created local volume to remove any unwanted disks. Edit the cluster resource: USD oc edit localvolume <name> -n openshift-local-storage Navigate to the lines under devicePaths , and delete any representing unwanted disks. Delete any persistent volumes created. USD oc delete pv <pv-name> Delete any symlinks on the node. Warning The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability. Create a debug pod on the node: USD oc debug node/<node-name> Change your root directory to the host: USD chroot /host Navigate to the directory containing the local volume symlinks. USD cd /mnt/openshift-local-storage/<sc-name> 1 1 The name of the storage class used to create the local volumes. Delete the symlink belonging to the removed device. USD rm <symlink> 4.10.8.2. Uninstalling the Local Storage Operator To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project. Warning Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator's removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources. Prerequisites Access to the OpenShift Container Platform web console. Procedure Delete any local volume resources installed in the project, such as localvolume , localvolumeset , and localvolumediscovery : USD oc delete localvolume --all --all-namespaces USD oc delete localvolumeset --all --all-namespaces USD oc delete localvolumediscovery --all --all-namespaces Uninstall the Local Storage Operator from the web console. Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Type Local Storage into the filter box to locate the Local Storage Operator. Click the Options menu at the end of the Local Storage Operator. Click Uninstall Operator . Click Remove in the window that appears. The PVs created by the Local Storage Operator will remain in the cluster until deleted. Once these volumes are no longer in use, delete them by running the following command: USD oc delete pv <pv-name> Delete the openshift-local-storage project: USD oc delete project openshift-local-storage 4.11. Persistent storage using NFS OpenShift Container Platform clusters can be provisioned with persistent storage using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project. While the NFS-specific information contained in a PV definition could also be defined directly in a Pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts. Additional resources Network File System (NFS) 4.11.1. Provisioning Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. To provision NFS volumes, a list of NFS servers and export paths are all that is required. 
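Before defining the PV, you can optionally confirm that the export is visible from a cluster node. This is an illustrative check only and assumes that the showmount client from the NFS utilities is available where you run it:

USD showmount -e 172.17.0.2

The server address and export path in the output should match the values that you place in the PV definition in the following procedure.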
Procedure Create an object definition for the PV: apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7 1 The name of the volume. This is the PV identity in various oc <command> pod commands. 2 The amount of storage allocated to this volume. 3 Though this appears to be related to controlling access to the volume, it is actually used similarly to labels and used to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes . 4 The volume type being used, in this case the nfs plug-in. 5 The path that is exported by the NFS server. 6 The hostname or IP address of the NFS server. 7 The reclaim policy for the PV. This defines what happens to a volume when released. Note Each NFS volume must be mountable by all schedulable nodes in the cluster. Verify that the PV was created: USD oc get pv Example output NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s Create a persistent volume claim that binds to the new PV: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: "" 1 The access modes do not enforce security, but rather act as labels to match a PV to a PVC. 2 This claim looks for PVs offering 5Gi or greater capacity. Verify that the persistent volume claim was created: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m 4.11.2. Enforcing disk quotas You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OpenShift Container Platform enforces unique names for PVs, but the uniqueness of the NFS volume's server and path is up to the administrator. Enforcing quotas in this way allows the developer to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity. 4.11.3. NFS volume security This section covers NFS volume security, including matching permissions and SELinux considerations. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux. Developers request NFS storage by referencing either a PVC by name or the NFS volume plug-in directly in the volumes section of their Pod definition. The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs. The OpenShift Container Platform NFS plug-in mounts the container's NFS directory with the same POSIX ownership and permissions found on the exported NFS directory. However, the container is not run with its effective UID equal to the owner of the NFS mount, which is the desired behavior. As an example, if the target NFS directory appears on the NFS server as: USD ls -lZ /opt/nfs -d Example output drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs USD id nfsnobody Example output uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody) Then the container must match SELinux labels, and either run with a UID of 65534 , the nfsnobody owner, or with 5555 in its supplemental groups to access the directory. Note The owner ID of 65534 is used as an example. 
Even though NFS's root_squash maps root , uid 0 , to nfsnobody , uid 65534 , NFS exports can have arbitrary owner IDs. Owner 65534 is not required for NFS exports. 4.11.3.1. Group IDs The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. In contrast, block storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup value in the securityContext of the pod. Note To gain access to persistent storage, it is generally preferable to use supplemental group IDs versus user IDs. Because the group ID on the example target NFS directory is 5555 , the pod can define that group ID using supplementalGroups under the securityContext definition of the pod. For example: spec: containers: - name: ... securityContext: 1 supplementalGroups: [5555] 2 1 securityContext must be defined at the pod level, not under a specific container. 2 An array of GIDs defined for the pod. In this case, there is one element in the array. Additional GIDs would be comma-separated. Assuming there are no custom SCCs that might satisfy the pod requirements, the pod likely matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny , meaning that any supplied group ID is accepted without range checking. As a result, the above pod passes admissions and is launched. However, if group ID range checking is desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed. Note To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification. 4.11.3.2. User IDs User IDs can be defined in the container image or in the Pod definition. Note It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs. In the example target NFS directory shown above, the container needs its UID set to 65534 , ignoring group IDs for the moment, so the following can be added to the Pod definition: spec: containers: 1 - name: ... securityContext: runAsUser: 65534 2 1 Pods contain a securityContext definition specific to each container and a pod's securityContext which applies to all containers defined in the pod. 2 65534 is the nfsnobody user. Assuming that the project is default and the SCC is restricted , the user ID of 65534 as requested by the pod is not allowed. Therefore, the pod fails for the following reasons: It requests 65534 as its user ID. All SCCs available to the pod are examined to see which SCC allows a user ID of 65534 . While all policies of the SCCs are checked, the focus here is on user ID. Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required. 65534 is not included in the SCC or project's user ID range. It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed. Note To use a custom SCC, you must first add it to the appropriate service account. 
For example, use the default service account in the given project unless another has been specified on the Pod specification. 4.11.3.3. SELinux Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems are configured to use SELinux on remote NFS servers by default. For non-RHEL and non-RHCOS systems, SELinux does not allow writing from a pod to a remote NFS server. The NFS volume mounts correctly but it is read-only. You will need to enable the correct SELinux permissions by using the following procedure. Prerequisites The container-selinux package must be installed. This package provides the virt_use_nfs SELinux boolean. Procedure Enable the virt_use_nfs boolean using the following command. The -P option makes this boolean persistent across reboots. # setsebool -P virt_use_nfs 1 4.11.3.4. Export settings To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions: Every export must be exported using the following format: /<example_fs> *(rw,root_squash) The firewall must be configured to allow traffic to the mount point. For NFSv4, configure the default port 2049 ( nfs ). NFSv4 # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT For NFSv3, there are three ports to configure: 2049 ( nfs ), 20048 ( mountd ), and 111 ( portmapper ). NFSv3 # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT # iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT # iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container's primary UID, or supply the pod group access using supplementalGroups , as shown in the group IDs above. 4.11.4. Reclaiming resources NFS implements the OpenShift Container Platform Recyclable plug-in interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume. By default, PVs are set to Retain . Once claim to a PVC is deleted, and the PV is released, the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original. For example, the administrator creates a PV named nfs1 : apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: "/" The user creates PVC1 , which binds to nfs1 . The user then deletes PVC1 , releasing claim to nfs1 . This results in nfs1 being Released . If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name: apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: "/" Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss. 4.11.5. Additional configuration and troubleshooting Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply: NFSv4 mount incorrectly shows all files with ownership of nobody:nobody Could be attributed to the ID mapping settings, found in /etc/idmapd.conf on your NFS. See this Red Hat Solution . 
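For example, the setting that most commonly needs attention is the Domain entry in the [General] section of /etc/idmapd.conf, which must be identical on the NFS server and on the clients. The domain value shown here is an illustrative assumption:

[General]
Domain = example.com

After changing the file, restart the ID mapping service on the affected hosts so that the new domain takes effect.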
Disabling ID mapping on NFSv4 On both the NFS client and server, run: # echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping 4.12. Red Hat OpenShift Container Storage Red Hat OpenShift Container Storage is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Container Storage is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. Red Hat OpenShift Container Storage provides its own documentation library. The complete set of Red Hat OpenShift Container Storage documentation identified below is available at https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.7/ Important OpenShift Container Storage on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Container Storage Supportability and Interoperability Guide . If you are looking for Red Hat OpenShift Container Storage information about... See the following Red Hat OpenShift Container Storage documentation: Planning What's new, known issues, notable bug fixes, and Technology Previews Red Hat OpenShift Container Storage 4.7 Release Notes Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations Planning your Red Hat OpenShift Container Storage 4.7 deployment Deploying Deploying Red Hat OpenShift Container Storage using Amazon Web Services for local or cloud storage Deploying OpenShift Container Storage 4.7 using Amazon Web Services Deploying Red Hat OpenShift Container Storage to local storage on bare metal infrastructure Deploying OpenShift Container Storage 4.7 using bare metal infrastructure Deploying Red Hat OpenShift Container Storage to use an external Red Hat Ceph Storage cluster Deploying OpenShift Container Storage 4.7 in external mode Deploying and managing Red Hat OpenShift Container Storage on existing Google Cloud clusters Deploying and managing OpenShift Container Storage 4.7 using Google Cloud Deploying Red Hat OpenShift Container Storage to use local storage on IBM Z infrastructure Deploying OpenShift Container Storage using IBM Z Deploying Red Hat OpenShift Container Storage on IBM Power Systems Deploying OpenShift Container Storage using IBM Power Systems Deploying Red Hat OpenShift Container Storage on IBM Cloud Deploying OpenShift Container Storage using IBM Cloud Deploying and managing Red Hat OpenShift Container Storage on Red Hat OpenStack Platform (RHOSP) Deploying and managing OpenShift Container Storage 4.7 using Red Hat OpenStack Platform Deploying and managing Red Hat OpenShift Container Storage on Red Hat Virtualization (RHV) Deploying and managing OpenShift Container Storage 4.7 using Red Hat Virtualization Platform Deploying Red Hat OpenShift Container Storage on VMware vSphere clusters Deploying OpenShift Container Storage 4.7 on VMware vSphere Updating Red Hat OpenShift Container Storage to the latest version Updating OpenShift Container Storage Managing Allocating storage to core services and hosted applications in Red Hat OpenShift Container Storage, including snapshot and clone Managing and allocating resources Managing storage resources across a hybrid cloud or multicloud environment using the 
Multicloud Object Gateway (NooBaa) Managing hybrid and multicloud resources Safely replacing storage devices for Red Hat OpenShift Container Storage Replacing devices Safely replacing a node in a Red Hat OpenShift Container Storage cluster Replacing nodes Scaling operations in Red Hat OpenShift Container Storage Scaling storage Monitoring a Red Hat OpenShift Container Storage 4.7 cluster Monitoring OpenShift Container Storage 4.7 Troubleshooting errors and issues Troubleshooting OpenShift Container Storage 4.7 Migrating your OpenShift Container Platform cluster from version 3 to version 4 Migration Toolkit for Containers 4.13. Persistent storage using VMware vSphere volumes OpenShift Container Platform allows use of VMware vSphere's Virtual Machine Disk (VMDK) volumes. You can provision your OpenShift Container Platform cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed. VMware vSphere volumes can be provisioned dynamically. OpenShift Container Platform creates the disk in vSphere and attaches this disk to the correct image. Note OpenShift Container Platform provisions new volumes as independent persistent disks that can freely attach and detach the volume on any node in the cluster. Consequently, you cannot back up volumes that use snapshots, or restore volumes from snapshots. See Snapshot Limitations for more information. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Additional resources VMware vSphere 4.13.1. Dynamically provisioning VMware vSphere volumes Dynamically provisioning VMware vSphere volumes is the recommended method. 4.13.2. Prerequisites An OpenShift Container Platform cluster installed on a VMware vSphere version that meets the requirements for the components that you use. See Installing a cluster on vSphere for information about vSphere version support. You can use either of the following procedures to dynamically provision these volumes using the default storage class. 4.13.2.1. Dynamically provisioning VMware vSphere volumes using the UI OpenShift Container Platform installs a default storage class, named thin , that uses the thin disk format for provisioning volumes. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the required options on the resulting page. Select the thin storage class. Enter a unique name for the storage claim. Select the access mode to determine the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.13.2.2. Dynamically provisioning VMware vSphere volumes using the CLI OpenShift Container Platform installs a default StorageClass, named thin , that uses the thin disk format for provisioning volumes. 
Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure (CLI) You can define a VMware vSphere PersistentVolumeClaim by creating a file, pvc.yaml , with the following contents: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3 1 A unique name that represents the persistent volume claim. 2 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 3 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml 4.13.3. Statically provisioning VMware vSphere volumes To statically provision VMware vSphere volumes you must create the virtual machine disks for reference by the persistent volume framework. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure Create the virtual machine disks. Virtual machine disks (VMDKs) must be created manually before statically provisioning VMware vSphere volumes. Use either of the following methods: Create using vmkfstools . Access ESX through Secure Shell (SSH) and then use following command to create a VMDK volume: USD vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk Create using vmware-diskmanager : USD shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk Create a persistent volume that references the VMDKs. Create a file, pv1.yaml , with the PersistentVolume object definition: apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: "[datastore1] volumes/myDisk" 4 fsType: ext4 5 1 The name of the volume. This name is how it is identified by persistent volume claims or pods. 2 The amount of storage allocated to this volume. 3 The volume type used, with vsphereVolume for vSphere volumes. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastore. 4 The existing VMDK volume to use. If you used vmkfstools , you must enclose the datastore name in square brackets, [] , in the volume definition, as shown previously. 5 The file system type to mount. For example, ext4, xfs, or other file systems. Important Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure. Create the PersistentVolume object from the file: USD oc create -f pv1.yaml Create a persistent volume claim that maps to the persistent volume you created in the step. Create a file, pvc1.yaml , with the PersistentVolumeClaim object definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: "1Gi" 3 volumeName: pv1 4 1 A unique name that represents the persistent volume claim. 2 The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node. 3 The size of the persistent volume claim. 4 The name of the existing persistent volume. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc1.yaml 4.13.3.1. 
Formatting VMware vSphere volumes Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system that is specified by the fsType parameter value in the PersistentVolume (PV) definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system. Because OpenShift Container Platform formats them before the first use, you can use unformatted vSphere volumes as PVs.
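To illustrate how the claim is consumed, the following is a minimal sketch of a pod that mounts the pvc1 claim created in the static provisioning procedure; the pod name, container name, and mount path are placeholder values and are not part of the procedure above. The image matches the UBI image used elsewhere in this guide.

apiVersion: v1
kind: Pod
metadata:
  name: vsphere-app            # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - mountPath: /data          # where the vSphere-backed volume appears inside the container
      name: vsphere-volume
  volumes:
  - name: vsphere-volume
    persistentVolumeClaim:
      claimName: pvc1           # the claim that binds to the vSphere persistent volume

Create the pod with oc create -f <pod-file>.yaml. When the pod is scheduled, OpenShift Container Platform attaches the VMDK to the node that runs the pod and mounts it at the path given in mountPath.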
[ "oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \\ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2", "apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false", "apiVersion: \"v1\" kind: \"PersistentVolumeClaim\" metadata: name: \"claim1\" 1 spec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: \"5Gi\" 2 storageClassName: azure-file-sc 3 volumeName: \"pv0001\" 4", "apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: volumeMounts: - mountPath: \"/data\" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3", "apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" cinder: 3 fsType: \"ext3\" 4 volumeID: \"f37a03aa-6212-4c62-a805-9ce139fab180\" 5", "oc create -f cinder-persistentvolume.yaml", "oc create serviceaccount <service_account>", "oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>", "apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4", "{ \"fooServer\": \"192.168.0.1:1234\", 1 \"fooVolumeName\": \"bar\", \"kubernetes.io/fsType\": \"ext4\", 2 \"kubernetes.io/readwrite\": \"ro\", 3 \"kubernetes.io/secret/<key name>\": \"<key value>\", 4 \"kubernetes.io/secret/<another key name>\": \"<another key value>\", }", "{ \"status\": \"<Success/Failure/Not supported>\", \"message\": \"<Reason for success/failure>\" }", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: \"ext4\" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar", "\"fsType\":\"<FS type>\", \"readwrite\":\"<rw>\", \"secret/key1\":\"<secret1>\" \"secret/keyN\":\"<secretN>\"", "apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi8/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: ''", "apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: \"/mnt/data\" 4", "oc create -f pv.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual", "oc create -f pvc.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: securityContext: privileged: true 2 volumeMounts: - mountPath: /data 
3 name: hostpath-privileged securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4'", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false", "oc adm new-project openshift-local-storage", "oc annotate project openshift-local-storage openshift.io/node-selector=''", "OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)", "apiVersion: operators.coreos.com/v1alpha2 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: \"USD{OC_VERSION}\" installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc apply -f openshift-local-storage.yaml", "oc -n openshift-local-storage get pods", "NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m", "oc get csvs -n openshift-local-storage", "NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: \"local-sc\" 3 volumeMode: Filesystem 4 fsType: xfs 5 devicePaths: 6 - /path/to/device 7", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: \"localblock-sc\" 3 volumeMode: Block 4 devicePaths: 5 - /path/to/device 6", "oc create -f <local-volume>.yaml", "oc get all -n openshift-local-storage", "NAME READY STATUS RESTARTS AGE pod/local-disks-local-provisioner-h97hj 1/1 Running 0 46m pod/local-disks-local-provisioner-j4mnn 1/1 Running 0 46m pod/local-disks-local-provisioner-kbdnx 1/1 Running 0 46m 
pod/local-disks-local-diskmaker-ldldw 1/1 Running 0 46m pod/local-disks-local-diskmaker-lvrv4 1/1 Running 0 46m pod/local-disks-local-diskmaker-phxdq 1/1 Running 0 46m pod/local-storage-operator-54564d9988-vxvhx 1/1 Running 0 47m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator ClusterIP 172.30.49.90 <none> 60000/TCP 47m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/local-disks-local-provisioner 3 3 3 3 3 <none> 46m daemonset.apps/local-disks-local-diskmaker 3 3 3 3 3 <none> 46m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 47m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-54564d9988 1 1 1 47m", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m", "apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node", "apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node", "oc create -f <example-pv>.yaml", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-storage 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-storage 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-storage 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-storage 12h", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4", "oc create -f <local-pvc>.yaml", "apiVersion: v1 kind: Pod spec: containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: localpvc persistentVolumeClaim: claimName: local-pvc-name 3", "oc create -f <local-pod>.yaml", "apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: example-storageclass 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM", "oc apply -f local-volume-set.yaml", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available example-storageclass 88m local-pv-2ef7cd2a 100Gi RWO Delete Available example-storageclass 82m local-pv-3fa1c73 100Gi RWO Delete Available example-storageclass 48m", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" 
metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: \"localstorage\" 3 storageClassDevices: - storageClassName: \"localblock-sc\" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg", "spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists", "oc edit localvolume <name> -n openshift-local-storage", "oc delete pv <pv-name>", "oc debug node/<node-name>", "chroot /host", "cd /mnt/openshift-local-storage/<sc-name> 1", "rm <symlink>", "oc delete localvolume --all --all-namespaces oc delete localvolumeset --all --all-namespaces oc delete localvolumediscovery --all --all-namespaces", "oc delete pv <pv-name>", "oc delete project openshift-local-storage", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7", "oc get pv", "NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: \"\"", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m", "ls -lZ /opt/nfs -d", "drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs", "id nfsnobody", "uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)", "spec: containers: - name: securityContext: 1 supplementalGroups: [5555] 2", "spec: containers: 1 - name: securityContext: runAsUser: 65534 2", "setsebool -P virt_use_nfs 1", "/<example_fs> *(rw,root_squash)", "iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT", "apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"", "apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"", "echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3", "oc create -f pvc.yaml", "vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk", "shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk", "apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: \"[datastore1] volumes/myDisk\" 4 fsType: ext4 5", "oc create -f pv1.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: \"1Gi\" 3 volumeName: pv1 4", "oc create -f pvc1.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/storage/configuring-persistent-storage
2.2.8.4. Disable Sendmail Network Listening
2.2.8.4. Disable Sendmail Network Listening By default, Sendmail is configured to listen only on the local loopback address. You can verify this by viewing the /etc/mail/sendmail.mc file and ensuring that the following line appears: This ensures that Sendmail accepts mail messages (such as cron job reports) only from the local system and not from the network. This is the default setting and protects Sendmail from network attacks. To remove the localhost restriction, delete the Addr=127.0.0.1 string from that line. Changing Sendmail's configuration requires installing the sendmail-cf package, editing the .mc file, running /etc/mail/make, and finally restarting the sendmail service; this regenerates the .cf configuration file. Note that the system clock must be correct and must not shift between these steps; otherwise, make does not detect that the .mc file is newer than the existing .cf file and the configuration file is not regenerated.
[ "DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-disable_sendmail_network_listening
3.3. Converting a virtual machine
3.3. Converting a virtual machine Once you have prepared to convert the virtual machines, use virt-v2v to perform the actual conversions. This section provides the steps to convert the virtual machines and the command syntax for virt-v2v . Note that conversions are resource-intensive processes that require copying the whole disk image for a virtual machine. In typical environments, converting a single virtual machine takes approximately 5-10 minutes. In Example 3.4, "Typical virt-v2v conversion time", a virtual machine with a single 8GB disk is copied over SSH on a 1GigE network on three-year-old consumer hardware: Example 3.4. Typical virt-v2v conversion time The size of the disk to be copied is the major factor in determining conversion time. For a virtual machine on average hardware with a single disk of 20GB or less, a conversion usually takes less than 10 minutes. 3.3.1. Converting a local virtual machine using virt-v2v virt-v2v converts virtual machines from a foreign hypervisor to run on KVM, managed by libvirt. The general command syntax for converting machines to run on KVM, managed by libvirt, is: For a list of virt-v2v parameters, refer to Chapter 7, References .
[ "win2k3r2-pv-32.img: 100% [===========================================]D 0h02m57s virt-v2v: win2k3r2-pv-32 configured with virtio drivers.", "virt-v2v -i libvirtxml -op pool --bridge bridge_name guest_name.xml virt-v2v -op pool --network netname guest_name virt-v2v -ic esx://esx.example.com/?no_verify=1 -op pool --bridge bridge_name guest_name" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sect-converting-a-virtual-machine
Chapter 1. Enhancements
Chapter 1. Enhancements ENTMQIC-2455 - Allow AMQP open properties to be supplemented from connector configuration With this release, you can add metadata when connecting routers, as described in Adding metadata to connections . ENTMQIC-2448 - Allow defining address prefix shared by different multitenant listeners With this release, a multitenant prefix specified by a vhost policy can be shared among several hostnames by specifying a pattern in the aliases parameter, as described in Creating vhost policies .
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_amq_interconnect_1.9/enhancements
14.9.5. Forcing a Guest Virtual Machine to Stop
14.9.5. Forcing a Guest Virtual Machine to Stop Force a guest virtual machine to stop with the virsh destroy command: This command does an immediate ungraceful shutdown and stops the specified guest virtual machine. Using virsh destroy can corrupt guest virtual machine file systems. Use the destroy option only when the guest virtual machine is unresponsive. If you want to initiate a graceful shutdown, use the virsh destroy --graceful command.
[ "virsh destroy {domain-id, domain-name or domain-uuid} [--graceful]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-forcing_a_guest_virtual_machine_to_stop
A.3. Capturing Trace Data on a Constant Basis Using the Systemtap Flight Recorder
A.3. Capturing Trace Data on a Constant Basis Using the Systemtap Flight Recorder You can capture QEMU trace data at all times using a systemtap initscript provided in the qemu-kvm package. This setup uses SystemTap's flight recorder mode to trace all running guest virtual machines and to save the results to a fixed-size buffer on the host. Old trace entries are overwritten by new entries when the buffer is filled. Procedure A.1. Configuring and running systemtap Install the package Install the systemtap-initscript package by running the following command: Copy the configuration file Copy the systemtap scripts and the configuration files to the systemtap directory by running the following commands: The set of trace events to enable is given in qemu_kvm.stp. This SystemTap script can be customized to add or remove trace events provided in /usr/share/systemtap/tapset/qemu-kvm-simpletrace.stp . Customizations can be made in qemu_kvm.conf to control the flight recorder buffer size and whether to store traces in memory only or also on disk. Start the service Start the systemtap service by running the following command: Enable systemtap to run at boot time Enable the systemtap service to run at boot time by running the following command: Confirm that the service is running Confirm that the service is working by running the following command: Procedure A.2. Inspecting the trace buffer Create a trace buffer dump file Create a trace buffer dump file called trace.log and place it in the /tmp directory by running the following command: You can change the file name and location to something else. Start the service Because the previous step stops the service, start it again by running the following command: Convert the trace contents into a readable format To convert the trace file contents into a more readable format, enter the following command: Note The following limitations apply: The systemtap service is disabled by default. There is a small performance penalty when this service is enabled, but it depends on which events are enabled in total. There is a README file located in /usr/share/doc/qemu-kvm-*/README.systemtap .
[ "yum install systemtap-initscript", "cp /usr/share/qemu-kvm/systemtap/script.d/qemu_kvm.stp /etc/systemtap/script.d/ cp /usr/share/qemu-kvm/systemtap/conf.d/qemu_kvm.conf /etc/systemtap/conf.d/", "systemctl start systemtap qemu_kvm", "systemctl enable systemtap qemu_kvm", "systemctl status systemtap qemu_kvm qemu_kvm is running", "staprun -A qemu_kvm >/tmp/trace.log", "systemctl start systemtap qemu_kvm", "/usr/share/qemu-kvm/simpletrace.py --no-header /usr/share/qemu-kvm/trace-events /tmp/trace.log" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-systemtaptrace
Manage Secrets with OpenStack Key Manager
Manage Secrets with OpenStack Key Manager Red Hat OpenStack Platform 16.2 How to integrate OpenStack Key Manager (barbican) with your OpenStack deployment. OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/manage_secrets_with_openstack_key_manager/index
Chapter 14. Configuring audit logging
Chapter 14. Configuring audit logging Red Hat Advanced Cluster Security for Kubernetes provides audit logging features that you can use to check all the changes made in Red Hat Advanced Cluster Security for Kubernetes. The audit log captures all the PUT and POST events, which are modifications to Red Hat Advanced Cluster Security for Kubernetes. Use this information to troubleshoot a problem or to keep a record of important events, such as changes to roles and permissions. With audit logging you get a complete picture of all normal and abnormal events that happened on Red Hat Advanced Cluster Security for Kubernetes. Note Audit logging is not enabled by default. You must enable audit logging manually. Warning Currently there is no message delivery guarantee for audit log messages. 14.1. Enabling audit logging When you enable audit logging, every time there is a modification, Red Hat Advanced Cluster Security for Kubernetes sends an HTTP POST message (in JSON format) to the configured system. Prerequisites Configure Splunk or another webhook receiver to handle Red Hat Advanced Cluster Security for Kubernetes log messages. You must have write permission enabled on the Notifiers resource for your role. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select Generic Webhook or Splunk . Fill in the required information and turn on the Enable Audit Logging toggle. 14.2. Sample audit log message The log message has the following format: { "headers": { "Accept-Encoding": [ "gzip" ], "Content-Length": [ "586" ], "Content-Type": [ "application/json" ], "User-Agent": [ "Go-http-client/1.1" ] }, "data": { "audit": { "interaction": "CREATE", "method": "UI", "request": { "endpoint": "/v1/notifiers", "method": "POST", "source": { "requestAddr": "10.131.0.7:58276", "xForwardedFor": "8.8.8.8", }, "sourceIp": "8.8.8.8", "payload": { "@type": "storage.Notifier", "enabled": true, "generic": { "auditLoggingEnabled": true, "endpoint": "http://samplewebhookserver.com:8080" }, "id": "b53232ee-b13e-47e0-b077-1e383c84aa07", "name": "Webhook", "type": "generic", "uiEndpoint": "https://localhost:8000" } }, "status": "REQUEST_SUCCEEDED", "time": "2019-05-28T16:07:05.500171300Z", "user": { "friendlyName": "John Doe", "role": { "globalAccess": "READ_WRITE_ACCESS", "name": "Admin" }, "username": "[email protected]" } } } } The source IP address of the request is displayed in the source parameters, which makes it easier for you to investigate audit log requests and identify their origin. To determine the source IP address of a request, RHACS uses the following parameters: xForwardedFor : The X-Forwarded-For header. requestAddr : The remote address header. sourceIp : The IP address of the HTTP request. Important The determination of the source IP address depends on how you expose Central externally. You can consider the following options: If you expose Central behind a load balancer, for example, if you are running Central on Google Kubernetes Engine (GKE) or Amazon Elastic Kubernetes Service (Amazon EKS) by using the Kubernetes External Load Balancer service type, see Preserving the client source IP . If you expose Central behind an Ingress Controller that forwards requests by using the X-Forwarded-For header , you do not need to make any configuration changes. If you expose Central with a TLS passthrough route, you cannot determine the source IP address of the client. 
A cluster-internal IP address is displayed in the source parameters as the source IP address of the client.
[ "{ \"headers\": { \"Accept-Encoding\": [ \"gzip\" ], \"Content-Length\": [ \"586\" ], \"Content-Type\": [ \"application/json\" ], \"User-Agent\": [ \"Go-http-client/1.1\" ] }, \"data\": { \"audit\": { \"interaction\": \"CREATE\", \"method\": \"UI\", \"request\": { \"endpoint\": \"/v1/notifiers\", \"method\": \"POST\", \"source\": { \"requestAddr\": \"10.131.0.7:58276\", \"xForwardedFor\": \"8.8.8.8\", }, \"sourceIp\": \"8.8.8.8\", \"payload\": { \"@type\": \"storage.Notifier\", \"enabled\": true, \"generic\": { \"auditLoggingEnabled\": true, \"endpoint\": \"http://samplewebhookserver.com:8080\" }, \"id\": \"b53232ee-b13e-47e0-b077-1e383c84aa07\", \"name\": \"Webhook\", \"type\": \"generic\", \"uiEndpoint\": \"https://localhost:8000\" } }, \"status\": \"REQUEST_SUCCEEDED\", \"time\": \"2019-05-28T16:07:05.500171300Z\", \"user\": { \"friendlyName\": \"John Doe\", \"role\": { \"globalAccess\": \"READ_WRITE_ACCESS\", \"name\": \"Admin\" }, \"username\": \"[email protected]\" } } } }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/configuring/configure-audit-logging
Chapter 6. Managing Satellite with Ansible collections
Chapter 6. Managing Satellite with Ansible collections Satellite Ansible Collections is a set of Ansible modules that interact with the Satellite API. You can manage and automate many aspects of Satellite with Satellite Ansible collections. 6.1. Installing the Satellite Ansible modules Use this procedure to install the Satellite Ansible modules. Procedure Install the package using the following command: 6.2. Viewing the Satellite Ansible modules You can view the installed Satellite Ansible modules by running: Alternatively, you can see the complete list of Satellite Ansible modules and other related information at Red Hat Ansible Automation Platform . All modules are in the redhat.satellite namespace and can be referred to in the format redhat.satellite.<module_name> . For example, to display information about the activation_key module, enter the following command:
[ "satellite-maintain packages install ansible-collection-redhat-satellite", "ansible-doc -l redhat.satellite", "ansible-doc redhat.satellite.activation_key" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/managing_project_with_ansible_collections_admin
Part I. Preparing the RHEL installation
Part I. Preparing the RHEL installation This part covers the essential steps for preparing a RHEL installation environment: it addresses system requirements and supported architectures, and offers customization options for installation media. Additionally, it covers methods for creating bootable installation media, setting up network-based repositories, and configuring UEFI HTTP or PXE installation sources. Guidance is also included for systems that use UEFI Secure Boot and for installing RHEL on the 64-bit IBM Z architecture.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automatically_installing_rhel/preparing-the-rhel-installation
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you are prompted to create one. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates the issue and routes it to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.18/proc-providing-feedback-on-redhat-documentation
Chapter 91. workflow
Chapter 91. workflow This chapter describes the commands under the workflow command. 91.1. workflow create Create new workflow. Usage: Table 91.1. Positional arguments Value Summary definition Workflow definition file. Table 91.2. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --namespace [NAMESPACE] Namespace to create the workflow within. --public With this flag workflow will be marked as "public". Table 91.3. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 91.4. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 91.5. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 91.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.2. workflow definition show Show workflow definition. Usage: Table 91.7. Positional arguments Value Summary identifier Workflow id or name. Table 91.8. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to get the workflow from. 91.3. workflow delete Delete workflow. Usage: Table 91.9. Positional arguments Value Summary workflow Name or id of workflow(s). Table 91.10. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to delete the workflow from. 91.4. workflow engine service list List all services. Usage: Table 91.11. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 91.12. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 91.13. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 91.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 91.15. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.5. workflow env create Create new environment. Usage: Table 91.16. Positional arguments Value Summary file Environment configuration file in json or yaml Table 91.17. Command arguments Value Summary -h, --help Show this help message and exit Table 91.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 91.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 91.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.6. workflow env delete Delete environment. Usage: Table 91.22. Positional arguments Value Summary environment Name of environment(s). Table 91.23. Command arguments Value Summary -h, --help Show this help message and exit 91.7. workflow env list List all environments. Usage: Table 91.24. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 91.25. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 91.26. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 91.27. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 91.28. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.8. workflow env show Show specific environment. Usage: Table 91.29. Positional arguments Value Summary environment Environment name Table 91.30. Command arguments Value Summary -h, --help Show this help message and exit --export Export the environment suitable for import Table 91.31. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 91.32. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 91.33. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.34. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.9. workflow env update Update environment. Usage: Table 91.35. Positional arguments Value Summary file Environment configuration file in json or yaml Table 91.36. Command arguments Value Summary -h, --help Show this help message and exit Table 91.37. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 91.38. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 91.39. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.40. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.10. workflow execution create Create new execution. Usage: Table 91.41. 
Positional arguments Value Summary workflow_identifier Workflow id or name workflow_input Workflow input params Workflow additional parameters Table 91.42. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Workflow namespace. -d DESCRIPTION, --description DESCRIPTION Execution description -s [SOURCE_EXECUTION_ID] Workflow execution id which will allow operators to create a new workflow execution based on the previously successful executed workflow. Example: mistral execution-create -s 123e4567-e89b-12d3-a456-426655440000 Table 91.43. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 91.44. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 91.45. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.46. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.11. workflow execution delete Delete execution. Usage: Table 91.47. Positional arguments Value Summary execution Id of execution identifier(s). Table 91.48. Command arguments Value Summary -h, --help Show this help message and exit --force Force the deletion of an execution. might cause a cascade of errors if used for running executions. 91.12. workflow execution input show Show execution input data. Usage: Table 91.49. Positional arguments Value Summary id Execution id Table 91.50. Command arguments Value Summary -h, --help Show this help message and exit 91.13. workflow execution list List all executions. Usage: Table 91.51. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --oldest Display the executions starting from the oldest entries instead of the newest --task [TASK] Parent task execution id associated with workflow execution list. --rootsonly Return only root executions Table 91.52. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 91.53. 
CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 91.54. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 91.55. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.14. workflow execution output show Show execution output data. Usage: Table 91.56. Positional arguments Value Summary id Execution id Table 91.57. Command arguments Value Summary -h, --help Show this help message and exit 91.15. workflow execution published show Show workflow global published variables. Usage: Table 91.58. Positional arguments Value Summary id Workflow id Table 91.59. Command arguments Value Summary -h, --help Show this help message and exit 91.16. workflow execution report show Print execution report. Usage: Table 91.60. Positional arguments Value Summary id Execution id Table 91.61. Command arguments Value Summary -h, --help Show this help message and exit --errors-only Only error paths will be included. --statistics-only Only the statistics will be included. --no-errors-only Not only error paths will be included. --max-depth [MAX_DEPTH] Maximum depth of the workflow execution tree. if 0, only the root workflow execution and its tasks will be included 91.17. workflow execution show Show specific execution. Usage: Table 91.62. Positional arguments Value Summary execution Execution identifier Table 91.63. Command arguments Value Summary -h, --help Show this help message and exit Table 91.64. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 91.65. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 91.66. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.67. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.18. workflow execution update Update execution. Usage: Table 91.68. Positional arguments Value Summary id Execution identifier Table 91.69. Command arguments Value Summary -h, --help Show this help message and exit -s {RUNNING,PAUSED,SUCCESS,ERROR,CANCELLED}, --state {RUNNING,PAUSED,SUCCESS,ERROR,CANCELLED} Execution state -e ENV, --env ENV Environment variables -d DESCRIPTION, --description DESCRIPTION Execution description Table 91.70. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 91.71. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 91.72. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.73. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.19. workflow list List all workflows. Usage: Table 91.74. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 91.75. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 91.76. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 91.77. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 91.78. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.20. workflow show Show specific workflow. Usage: Table 91.79. Positional arguments Value Summary workflow Workflow id or name. Table 91.80. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to get the workflow from. Table 91.81. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 91.82. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 91.83. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.84. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.21. workflow update Update workflow. Usage: Table 91.85. Positional arguments Value Summary definition Workflow definition Table 91.86. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --id ID Workflow id. --namespace [NAMESPACE] Namespace of the workflow. --public With this flag workflow will be marked as "public". Table 91.87. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 91.88. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 91.89. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 91.90. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 91.22. workflow validate Validate workflow. Usage: Table 91.91. Positional arguments Value Summary definition Workflow definition file Table 91.92. Command arguments Value Summary -h, --help Show this help message and exit Table 91.93. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 91.94. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 91.95. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 91.96. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
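As a quick illustration of how these listing options combine in practice, the following sketch lists recent executions in the ERROR state and then prints an error-only report for one of them. The execution ID and the filter expression are placeholders for illustration; confirm the exact filter syntax that your client version accepts.

# List up to ten executions that match a state filter, formatted as JSON.
openstack workflow execution list --filter state=ERROR --limit 10 -f json

# Print an error-only report for one execution, using an ID taken from the list above.
openstack workflow execution report show --errors-only <execution_id>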
[ "openstack workflow create [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--namespace [NAMESPACE]] [--public] definition", "openstack workflow definition show [-h] [--namespace [NAMESPACE]] identifier", "openstack workflow delete [-h] [--namespace [NAMESPACE]] workflow [workflow ...]", "openstack workflow engine service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]", "openstack workflow env create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] file", "openstack workflow env delete [-h] environment [environment ...]", "openstack workflow env list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]", "openstack workflow env show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--export] environment", "openstack workflow env update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] file", "openstack workflow execution create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] [-d DESCRIPTION] [-s [SOURCE_EXECUTION_ID]] [workflow_identifier] [workflow_input] [params]", "openstack workflow execution delete [-h] [--force] execution [execution ...]", "openstack workflow execution input show [-h] id", "openstack workflow execution list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--oldest] [--task [TASK]] [--rootsonly]", "openstack workflow execution output show [-h] id", "openstack workflow execution published show [-h] id", "openstack workflow execution report show [-h] [--errors-only] [--statistics-only] [--no-errors-only] [--max-depth [MAX_DEPTH]] id", "openstack workflow execution show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] execution", "openstack workflow execution update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-s {RUNNING,PAUSED,SUCCESS,ERROR,CANCELLED}] [-e ENV] [-d DESCRIPTION] id", "openstack workflow list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] 
[--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]", "openstack workflow show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] workflow", "openstack workflow update [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--id ID] [--namespace [NAMESPACE]] [--public] definition", "openstack workflow validate [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] definition" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/workflow
Chapter 18. Common administrative networking tasks
Chapter 18. Common administrative networking tasks Sometimes you might need to perform administration tasks on the Red Hat OpenStack Platform Networking service (neutron) such as configuring the Layer 2 Population driver or specifying the name assigned to ports by the internal DNS. 18.1. Configuring the L2 population driver The L2 Population driver enables broadcast, multicast, and unicast traffic to scale out on large overlay networks. By default, Open vSwitch GRE and VXLAN replicate broadcasts to every agent, including those that do not host the destination network. This design requires the acceptance of significant network and processing overhead. The alternative design introduced by the L2 Population driver implements a partial mesh for ARP resolution and MAC learning traffic; it also creates tunnels for a particular network only between the nodes that host the network. This traffic is sent only to the necessary agent by encapsulating it as a targeted unicast. To enable the L2 Population driver, complete the following steps: 1. Enable the L2 population driver by adding it to the list of mechanism drivers. You must also enable at least one tunneling driver: either GRE, VXLAN, or both. Add the appropriate configuration options to the ml2_conf.ini file: Note Neutron's Linux Bridge ML2 driver and agent were deprecated in Red Hat OpenStack Platform 11. The Open vSwitch (OVS) plugin is the OpenStack Platform director default, and is recommended by Red Hat for general usage. 2. Enable L2 population in the openvswitch_agent.ini file. Enable it on each node that contains the L2 agent: Note To install ARP reply flows, configure the arp_responder flag: 18.2. Tuning keepalived to avoid VRRP packet loss If the number of highly available (HA) routers on a single host is high, when an HA router failover occurs, the Virtual Router Redundancy Protocol (VRRP) messages might overflow the IRQ queues. This overflow stops Open vSwitch (OVS) from responding and forwarding those VRRP messages. To avoid VRRP packet overload, you must increase the VRRP advertisement interval using the ha_vrrp_advert_int parameter in the ExtraConfig section for the Controller role. Procedure Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools. Example Create a custom YAML environment file. Example Tip The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file , which is a special type of template that provides customization for your heat templates. In the YAML environment file, increase the VRRP advertisement interval using the ha_vrrp_advert_int argument with a value specific to your site. (The default is 2 seconds.) You can also set values for gratuitous ARP messages: ha_vrrp_garp_master_repeat The number of gratuitous ARP messages to send at one time after the transition to the master state. (The default is 5 messages.) ha_vrrp_garp_master_delay The delay for the second set of gratuitous ARP messages after the lower priority advert is received in the master state. (The default is 5 seconds.) Example Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence.
Example Additional resources 2.1.2 Data Forwarding Rules, Subsection 2 in RFC 4541 Environment files in the Director Installation and Usage guide Including environment files in overcloud creation in the Director Installation and Usage guide 18.3. Specifying the name that DNS assigns to ports You can specify the name assigned to ports by the internal DNS when you enable the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) dns_domain for ports extension ( dns_domain_ports ). You enable the dns_domain for ports extension by declaring the RHOSP Orchestration (heat) NeutronPluginExtensions parameter in a YAML-formatted environment file. Using a corresponding parameter, NeutronDnsDomain , you specify your domain name, which overrides the default value, openstacklocal . After redeploying your overcloud, you can use the OpenStack Client port commands, port set or port create , with --dns-name to assign a port name. Also, when the dns_domain for ports extension is enabled, the Compute service automatically populates the dns_name attribute with the hostname attribute of the instance during the boot of VM instances. At the end of the boot process, dnsmasq recognizes the allocated ports by their instance hostname. Procedure Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools. Example Create a custom YAML environment file ( my-neutron-environment.yaml ). Note Values inside parentheses are sample values that are used in the example commands in this procedure. Substitute these sample values with ones that are appropriate for your site. Example Tip The undercloud includes a set of Orchestration service templates that form the plan for your overcloud creation. You can customize aspects of the overcloud with environment files, which are YAML-formatted files that override parameters and resources in the core Orchestration service template collection. You can include as many environment files as necessary. In the environment file, add a parameter_defaults section. Under this section, add the dns_domain for ports extension, dns_domain_ports . Example Note If you set dns_domain_ports , ensure that the deployment does not also use dns_domain , the DNS Integration extension. These extensions are incompatible, and both extensions cannot be defined simultaneously. Also in the parameter_defaults section, add your domain name ( example.com ) using the NeutronDnsDomain parameter. Example Run the openstack overcloud deploy command and include the core Orchestration templates, environment files, and this new environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Verification Log in to the overcloud, and create a new port ( new_port ) on a network ( public ). Assign a DNS name ( my_port ) to the port. Example Display the details for your port ( new_port ). Example Output Under dns_assignment , the fully qualified domain name ( fqdn ) value for the port contains a concatenation of the DNS name ( my_port ) and the domain name ( example.com ) that you set earlier with NeutronDnsDomain . Create a new VM instance ( my_vm ) using the port ( new_port ) that you just created. Example Display the details for your port ( new_port ). Example Output Note that the Compute service changes the dns_name attribute from its original value ( my_port ) to the name of the instance with which the port is associated ( my_vm ). 
Additional resources Environment files in the Director Installation and Usage guide Including environment files in overcloud creation in the Director Installation and Usage guide port in the Command Line Interface Reference server create in the Command Line Interface Reference 18.4. Assigning DHCP attributes to ports You can use Red Hat OpenStack Platform (RHOSP) Networking service (neutron) extensions to add networking functions. You can use the extra DHCP option extension ( extra_dhcp_opt ) to configure ports of DHCP clients with DHCP attributes. For example, you can add a PXE boot option such as tftp-server , server-ip-address , or bootfile-name to a DHCP client port. The value of the extra_dhcp_opt attribute is an array of DHCP option objects, where each object contains an opt_name and an opt_value . IPv4 is the default version, but you can change this to IPv6 by including a third option, ip-version=6 . When a VM instance starts, the RHOSP Networking service supplies port information to the instance using the DHCP protocol. If you add DHCP information to a port already connected to a running instance, the instance only uses the new DHCP port information when the instance is restarted. Some of the more common DHCP port attributes are: bootfile-name , dns-server , domain-name , mtu , server-ip-address , and tftp-server . For the complete set of acceptable values for opt_name , refer to the DHCP specification. Prerequisites You must have RHOSP administrator privileges. Procedure Log in to the undercloud host as the stack user. Source the undercloud credentials file: Create a custom YAML environment file. Example Your environment file must contain the keywords parameter_defaults . Under these keywords, add the extra DHCP option extension, extra_dhcp_opt . Example Run the deployment command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Verification Source your credentials file. Example Create a new port ( new_port ) on a network ( public ). Assign a valid attribute from the DHCP specification to the new port. Example Display the details for your port ( new_port ). Example Sample output Additional resources OVN supported DHCP options Dynamic Host Configuration Protocol (DHCP) and Bootstrap Protocol (BOOTP) Parameters Environment files in the Director Installation and Usage guide Including environment files in overcloud creation in the Director Installation and Usage guide port create in the Command Line Interface Reference port show in the Command Line Interface Reference 18.5. Enabling NUMA affinity on ports To enable users to create instances with NUMA affinity on the port, you must load the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) extension, port_numa_affinity_policy . Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud host as the stack user.
Source the undercloud credentials file: To enable the port_numa_affinity_policy extension, open the environment file where the NeutronPluginExtensions parameter is defined, and add port_numa_affinity_policy to the list: Add the environment file that you modified to the stack with your other environment files, and redeploy the overcloud: Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Verification Source your credentials file. Example Create a new port. When you create a port, use one of the following options to specify the NUMA affinity policy to apply to the port: --numa-policy-required - NUMA affinity policy required to schedule this port. --numa-policy-preferred - NUMA affinity policy preferred to schedule this port. --numa-policy-legacy - NUMA affinity policy using legacy mode to schedule this port. Example Display the details for your port. Example Sample output When the extension is loaded, the Value column should read, legacy , preferred or required . If the extension has failed to load, Value reads None : Additional resources Environment files in the Director Installation and Usage guide Including environment files in overcloud creation in the Director Installation and Usage guide Creating an instance with NUMA affinity on the port in the Creating and Managing Instances guide 18.6. Loading kernel modules Some features in Red Hat OpenStack Platform (RHOSP) require certain kernel modules to be loaded. For example, the OVS firewall driver requires you to load the nf_conntrack_proto_gre kernel module to support GRE tunneling between two VM instances. By using a special Orchestration service (heat) parameter, ExtraKernelModules , you can ensure that heat stores configuration information about the required kernel modules needed for features like GRE tunneling. Later, during normal module management, these required kernel modules are loaded. Procedure On the undercloud host, logged in as the stack user, create a custom YAML environment file. Example Tip Heat uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file , which is a special type of template that provides customization for your heat templates. In the YAML environment file under parameter_defaults , set ExtraKernelModules to the name of the module that you want to load. Example Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence. Example Verification If heat has properly loaded the module, you should see output when you run the lsmod command on the Compute node: Example Additional resources Environment files in the Director Installation and Usage guide Including environment files in overcloud creation in the Director Installation and Usage guide 18.7. Configuring shared security groups When you want one or more Red Hat OpenStack Platform (RHOSP) projects to be able to share data, you can use the RHOSP Networking service (neutron) RBAC policy feature to share a security group. You create security groups and Networking service role-based access control (RBAC) policies using the OpenStack Client. 
You can apply a security group directly to an instance during instance creation, or to a port on the running instance. Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port . Prerequisites You have at least two RHOSP projects that you want to share. In one of the projects, the current project , you have created a security group that you want to share with another project, the target project . In this example, the ping_ssh security group is created: Example Procedure Log in to the overcloud for the current project that contains the security group. Obtain the name or ID of the target project. Obtain the name or ID of the security group that you want to share between RHOSP projects. Using the identifiers from the steps, create an RBAC policy using the openstack network rbac create command. In this example, the ID of the target project is 32016615de5d43bb88de99e7f2e26a1e . The ID of the security group is 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24 : Example --target-project specifies the project that requires access to the security group. Tip You can share data between all projects by using the --target-all-projects argument instead of --target-project <target-project> . By default, only the admin user has this privilege. --action access_as_shared specifies what the project is allowed to do. --type indicates that the target object is a security group. 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24 is the ID of the particular security group which is being granted access to. The target project is able to access the security group when running the OpenStack Client security group commands, in addition to being able to bind to its ports. No other users (other than administrators and the owner) are able to access the security group. Tip To remove access for the target project, delete the RBAC policy that allows it using the openstack network rbac delete command. Additional resources Creating a security group in the Creating and Managing Instances guide security group create in the Command Line Interface Reference network rbac create in the Command Line Interface Reference
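Because an RBAC-shared security group cannot be applied directly during instance creation, a member of the target project typically attaches it to a port first and then boots from that port, as outlined below. The network, port, image, flavor, and instance names are placeholders for illustration.

# In the target project: create a port that uses the shared security group.
openstack port create --network private --security-group ping_ssh shared_sg_port

# Boot an instance from that port so its traffic is filtered by the shared group.
openstack server create --image rhel --flavor m1.small --port shared_sg_port my_instance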
[ "[ml2] type_drivers = local,flat,vlan,gre,vxlan,geneve mechanism_drivers = l2population", "[agent] l2_population = True", "[agent] l2_population = True arp_responder = True", "source ~/stackrc", "vi /home/stack/templates/my-neutron-environment.yaml", "parameter_defaults: ControllerExtraConfig: neutron::agents::l3::ha_vrrp_advert_int: 7 neutron::config::l3_agent_config: DEFAULT/ha_vrrp_garp_master_repeat: value: 5 DEFAULT/ha_vrrp_garp_master_delay: value: 5", "openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-neutron-environment.yaml", "source ~/stackrc", "vi /home/stack/templates/my-neutron-environment.yaml", "parameter_defaults: NeutronPluginExtensions: \"qos,port_security,dns_domain_ports\"", "parameter_defaults: NeutronPluginExtensions: \"qos,port_security,dns_domain_ports\" NeutronDnsDomain: \"example.com\"", "openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-neutron-environment.yaml", "source ~/overcloudrc openstack port create --network public --dns-name my_port new_port", "openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port", "+-------------------------+----------------------------------------------+ | Field | Value | +-------------------------+----------------------------------------------+ | dns_assignment | fqdn='my_port.example.com', | | | hostname='my_port', | | | ip_address='10.65.176.113' | | dns_domain | example.com | | dns_name | my_port | | name | new_port | +-------------------------+----------------------------------------------+", "openstack server create --image rhel --flavor m1.small --port new_port my_vm", "openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port", "+-------------------------+----------------------------------------------+ | Field | Value | +-------------------------+----------------------------------------------+ | dns_assignment | fqdn='my_vm.example.com', | | | hostname='my_vm', | | | ip_address='10.65.176.113' | | dns_domain | example.com | | dns_name | my_vm | | name | new_port | +-------------------------+----------------------------------------------+", "source ~/stackrc", "vi /home/stack/templates/my-octavia-environment.yaml", "parameter_defaults: NeutronPluginExtensions: \"qos,port_security,extra_dhcp_opt\"", "openstack overcloud deploy --templates -e <your_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml -e /home/stack/templates/my-octavia-environment.yaml", "source ~/overcloudrc", "openstack port create --extra-dhcp-option name=domain-name,value=test.domain --extra-dhcp-option name=ntp-server,value=192.0.2.123 --network public new_port", "openstack port show new_port -c extra_dhcp_opts", "+-----------------+-----------------------------------------------------------------+ | Field | Value | +-----------------+-----------------------------------------------------------------+ | extra_dhcp_opts | ip_version='4', opt_name='domain-name', opt_value='test.domain' | | | ip_version='4', opt_name='ntp-server', opt_value='192.0.2.123' | +-----------------+-----------------------------------------------------------------+", "source ~/stackrc", "parameter_defaults: NeutronPluginExtensions: \"qos,port_numa_affinity_policy\"", "openstack overcloud deploy --templates -e <your_environment_files> -e /home/stack/templates/<custom_environment_file>.yaml", "source ~/overcloudrc", 
"openstack port create --network public --numa-policy-legacy myNUMAAffinityPort", "openstack port show myNUMAAffinityPort -c numa_affinity_policy", "+----------------------+--------+ | Field | Value | +----------------------+--------+ | numa_affinity_policy | legacy | +----------------------+--------+", "vi /home/stack/templates/my-modules-environment.yaml", "ComputeParameters: ExtraKernelModules: nf_conntrack_proto_gre: {} ControllerParameters: ExtraKernelModules: nf_conntrack_proto_gre: {}", "openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-modules-environment.yaml", "sudo lsmod | grep nf_conntrack_proto_gre", "openstack security group create ping_ssh", "openstack project list", "openstack security group list", "openstack network rbac create --target-project 32016615de5d43bb88de99e7f2e26a1e --action access_as_shared --type security_group 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/networking_guide/common-network-tasks_rhosp-network
Developing C and C++ applications in RHEL 8
Developing C and C++ applications in RHEL 8 Red Hat Enterprise Linux 8 Setting up a developer workstation, and developing and debugging C and C++ applications in Red Hat Enterprise Linux 8 Red Hat Customer Content Services
[ "subscription-manager repos --enable rhel-8-for-USD(uname -i)-baseos-debug-rpms subscription-manager repos --enable rhel-8-for-USD(uname -i)-baseos-source-rpms subscription-manager repos --enable rhel-8-for-USD(uname -i)-appstream-debug-rpms subscription-manager repos --enable rhel-8-for-USD(uname -i)-appstream-source-rpms", "yum install git", "git config --global user.name \" Full Name \" git config --global user.email \" [email protected] \"", "git config --global core.editor command", "man git man gittutorial man gittutorial-2", "yum group install \"Development Tools\"", "yum install llvm-toolset", "yum install gcc-gfortran", "yum install gdb valgrind systemtap ltrace strace", "yum install yum-utils", "stap-prep", "yum install perf papi pcp-zeroconf valgrind strace sysstat systemtap", "stap-prep", "systemctl enable pmcd && systemctl start pmcd", "man gcc", "gcc -c source.c another_source.c", "gcc ... -g", "man gcc", "man gcc", "gcc ... -O2 -g -Wall -Wl,-z,now,-z,relro -fstack-protector-strong -fstack-clash-protection -D_FORTIFY_SOURCE=2", "gcc ... -Walloc-zero -Walloca-larger-than -Wextra -Wformat-security -Wvla-larger-than", "gcc ... objfile.o another_object.o ... -o executable-file", "mkdir hello-c cd hello-c", "#include <stdio.h> int main() { printf(\"Hello, World!\\n\"); return 0; }", "gcc hello.c -o helloworld", "./helloworld Hello, World!", "mkdir hello-c cd hello-c", "#include <stdio.h> int main() { printf(\"Hello, World!\\n\"); return 0; }", "gcc -c hello.c", "gcc hello.o -o helloworld", "./helloworld Hello, World!", "mkdir hello-cpp cd hello-cpp", "#include <iostream> int main() { std::cout << \"Hello, World!\\n\"; return 0; }", "g++ hello.cpp -o helloworld", "./helloworld Hello, World!", "mkdir hello-cpp cd hello-cpp", "#include <iostream> int main() { std::cout << \"Hello, World!\\n\"; return 0; }", "g++ -c hello.cpp", "g++ hello.o -o helloworld", "./helloworld Hello, World!", "gcc ... -l foo", "gcc ... -I include_path", "gcc ... -L library_path -l foo", "gcc ... -I header_path -c", "gcc ... -L library_path -l foo", "./program", "gcc ... -L library_path -l foo", "gcc ... -L library_path -l foo -Wl,-rpath= library_path", "./program", "export LD_LIBRARY_PATH= library_path :USDLD_LIBRARY_PATH ./program", "man ld.so", "cat /etc/ld.so.conf", "ldconfig -v", "gcc ... path/to/libfoo.a", "gcc ... -Wl,-Bstatic -l first -Wl,-Bdynamic -l second", "gcc ... -l foo", "objdump -p somelibrary | grep SONAME", "gcc ... -c -fPIC some_file.c", "gcc -shared -o libfoo.so.x.y -Wl,-soname, libfoo.so.x some_file.o", "cp libfoo.so.x.y /usr/lib64", "ln -s libfoo.so.x.y libfoo.so.x ln -s libfoo.so.x libfoo.so", "gcc -c source_file.c", "ar rcs lib foo .a source_file.o", "nm libfoo.a", "gcc ... 
-l foo", "man ar", "all: hello hello: hello.o gcc hello.o -o hello hello.o: hello.c gcc -c hello.c -o hello.o", "CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=USD(SOURCE:.c=.o) EXE=hello all: USD(SOURCE) USD(EXE) USD(EXE): USD(OBJ) USD(CC) USD(OBJ) -o USD@ %.o: %.c USD(CC) USD(CFLAGS) USD< -o USD@ clean: rm -rf USD(OBJ) USD(EXE)", "mkdir hellomake cd hellomake", "#include <stdio.h> int main(int argc, char *argv[]) { printf(\"Hello, World!\\n\"); return 0; }", "CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=USD(SOURCE:.c=.o) EXE=hello all: USD(SOURCE) USD(EXE) USD(EXE): USD(OBJ) USD(CC) USD(OBJ) -o USD@ %.o: %.c USD(CC) USD(CFLAGS) USD< -o USD@ clean: rm -rf USD(OBJ) USD(EXE)", "make gcc -c -Wall hello.c -o hello.o gcc hello.o -o hello", "./hello Hello, World!", "make clean rm -rf hello.o hello", "man make info make", "man gcc", "gcc ... -g", "man gcc", "gdb -q /bin/ls Reading symbols from /bin/ls...Reading symbols from .gnu_debugdata for /usr/bin/ls...(no debugging symbols found)...done. (no debugging symbols found)...done. Missing separate debuginfos, use: dnf debuginfo-install coreutils-8.30-6.el8.x86_64 (gdb)", "(gdb) q", "dnf debuginfo-install coreutils-8.30-6.el8.x86_64", "which less /usr/bin/less", "locate libz | grep so /usr/lib64/libz.so.1 /usr/lib64/libz.so.1.2.11", "yum install mlocate updatedb", "rpm -qf /usr/lib64/libz.so.1.2.7 zlib-1.2.11-10.el8.x86_64", "debuginfo-install zlib-1.2.11-10.el8.x86_64", "gdb program", "ps -C program -o pid h pid", "gdb -p pid", "(gdb) shell ps -C program -o pid h pid", "(gdb) attach pid", "(gdb) file path/to/program", "(gdb) help info", "(gdb) br file:line", "(gdb) br line", "(gdb) br function_name", "(gdb) br file:line if condition", "(gdb) info br", "(gdb) delete number", "(gdb) clear file:line", "(gdb) watch expression", "(gdb) rwatch expression", "(gdb) awatch expression", "(gdb) info br", "(gdb) delete num", "strace -fvttTyy -s 256 -e trace= call program", "ps -C program (...) strace -fvttTyy -s 256 -e trace= call -p pid", "strace ... |& tee your_log_file.log", "man strace", "ltrace -f -l library -e function program", "ps -C program (...) ltrace -f -l library -e function program -p pid", "ltrace ... |& tee your_log_file.log", "man ltrace", "ps -aux", "stap /usr/share/systemtap/examples/process/strace.stp -x pid", "(gdb) catch syscall syscall-name", "(gdb) r", "(gdb) c", "(gdb) catch signal signal-type", "(gdb) r", "(gdb) c", "ulimit -a", "DumpCore=yes DefaultLimitCORE=infinity", "systemctl daemon-reexec", "ulimit -c unlimited", "yum install sos", "sosreport", "coredumpctl list executable-name coredumpctl dump executable-name > /path/to/file-for-export", "eu-unstrip -n --core= ./core.9814 0x400000+0x207000 2818b2009547f780a5639c904cded443e564973e@0x400284 /usr/bin/sleep /usr/lib/debug/bin/sleep.debug [exe] 0x7fff26fff000+0x1000 1e2a683b7d877576970e4275d41a6aaec280795e@0x7fff26fff340 . 
- linux-vdso.so.1 0x35e7e00000+0x3b6000 374add1ead31ccb449779bc7ee7877de3377e5ad@0x35e7e00280 /usr/lib64/libc-2.14.90.so /usr/lib/debug/lib64/libc-2.14.90.so.debug libc.so.6 0x35e7a00000+0x224000 3ed9e61c2b7e707ce244816335776afa2ad0307d@0x35e7a001d8 /usr/lib64/ld-2.14.90.so /usr/lib/debug/lib64/ld-2.14.90.so.debug ld-linux-x86-64.so.2", "eu-readelf -n executable_file", "gdb -e executable_file -c core_file", "(gdb) symbol-file program.debug", "sysctl kernel.core_pattern", "kernel.core_pattern = |/usr/lib/systemd/systemd-coredump", "pgrep -a executable-name-fragment", "PID command-line", "pgrep -a bc 5459 bc", "kill -ABRT PID", "coredumpctl list PID", "coredumpctl list 5459 TIME PID UID GID SIG COREFILE EXE Thu 2019-11-07 15:14:46 CET 5459 1000 1000 6 present /usr/bin/bc", "coredumpctl info PID", "coredumpctl debug PID", "Missing separate debuginfos, use: dnf debuginfo-install bc-1.07.1-5.el8.x86_64", "coredumpctl dump PID > /path/to/file_for_export", "ps -C some-program", "gcore -o filename pid", "sosreport", "(gdb) set use-coredump-filter off", "(gdb) set dump-excluded-mappings on", "(gdb) gcore core-file", "gdbserver --multi :1234 gdb -batch -ex 'target extended-remote :1234' -ex 'set remote exec-file /bin/echo' -ex 'file /bin/echo' -ex 'run /*' /*", "gdbserver --multi :1234 gdb -batch -ex 'target extended-remote :1234' -ex 'set remote exec-file /bin/echo' -ex 'file /bin/echo' -ex 'run /*' /bin /boot (...) /tmp /usr /var", "(gdb) maintenance print symbols /tmp/out main.c", "maint print symbols [-pc address ] [--] [ filename ] maint print symbols [-objfile objfile ] [-source source ] [--] [ filename ] maint print psymbols [-objfile objfile ] [-pc address ] [--] [ filename ] maint print psymbols [-objfile objfile ] [-source source ] [--] [ filename ] maint print msymbols [-objfile objfile ] [--] [ filename ]", "debuginfo-install coreutils gdb -batch -ex 'file echo' -ex start -ex 'add-inferior' -ex 'inferior 2' -ex 'file echo' -ex start -ex 'info threads' -ex 'pring USD_thread' -ex 'inferior 1' -ex 'pring USD_thread' (...) Id Target Id Frame * 2 process 203923 \"echo\" main (argc=1, argv=0x7fffffffdb88) at src/echo.c:109 1 process 203914 \"echo\" main (argc=1, argv=0x7fffffffdb88) at src/echo.c:109 USD1 = 2 (...) USD2 = 1", "dnf debuginfo-install coreutils gdb -batch -ex 'file echo' -ex start -ex 'add-inferior' -ex 'inferior 2' -ex 'file echo' -ex start -ex 'info threads' -ex 'pring USD_thread' -ex 'inferior 1' -ex 'pring USD_thread' (...) Id Target Id Frame 1.1 process 4106488 \"echo\" main (argc=1, argv=0x7fffffffce58) at ../src/echo.c:109 * 2.1 process 4106494 \"echo\" main (argc=1, argv=0x7fffffffce58) at ../src/echo.c:109 USD1 = 1 (...) 
USD2 = 1", "yum install gcc-toolset- N", "yum list available gcc-toolset- N -\\*", "yum install package_name", "yum install gcc-toolset-13-annobin-annocheck gcc-toolset-13-binutils-devel", "yum remove gcc-toolset- N \\*", "scl enable gcc-toolset- N tool", "scl enable gcc-toolset- N bash", "scl enable gcc-toolset-9 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-9 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-9 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-9 'ld objfile.o -lsomelib'", "scl enable gcc-toolset-10 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-10 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-10 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-10 'ld objfile.o -lsomelib'", "scl enable gcc-toolset-11 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-11 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-11 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-11 'ld objfile.o -lsomelib'", "scl enable gcc-toolset-12 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-12 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-12 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-12 'ld objfile.o -lsomelib'", "cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory", "cd /opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin ln -s annobin.so gcc-annobin.so", "scl enable gcc-toolset-13 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-13 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-13 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-13 'ld objfile.o -lsomelib'", "cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-13/root/usr/lib/gcc/ architecture -linux-gnu/13/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory", "cd /opt/rh/gcc-toolset-13/root/usr/lib/gcc/ architecture -linux-gnu/13/plugin ln -s annobin.so gcc-annobin.so", "scl enable gcc-toolset-14 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-14 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-14 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-14 'ld objfile.o -lsomelib'", "cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-14/root/usr/lib/gcc/ architecture -linux-gnu/14/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory", "cd /opt/rh/gcc-toolset-14/root/usr/lib/gcc/ architecture -linux-gnu/14/plugin ln -s annobin.so gcc-annobin.so", "podman login registry.redhat.io Username: username Password: ********", "podman pull registry.redhat.io/rhel8/gcc-toolset- <toolset_version> -toolchain", "podman images", "podman run -it image_name /bin/bash", "podman login registry.redhat.io Username: username Password: ********", "podman pull registry.redhat.io/rhel8/gcc-toolset-14-toolchain", "podman run -it registry.redhat.io/rhel8/gcc-toolset-14-toolchain /bin/bash", "bash-4.4USD gcc -v gcc version 14.2.1 20240801 (Red Hat 14.2.1-1) (GCC)", "bash-4.4USD rpm -qa", "gcc -fplugin=annobin", "gcc -iplugindir= /path/to/directory/containing/annobin/", "gcc --print-file-name=plugin", "clang -fplugin= /path/to/directory/containing/annobin/", "gcc -fplugin=annobin -fplugin-arg-annobin- option file-name", "gcc -fplugin=annobin -fplugin-arg-annobin-verbose file-name", "clang -fplugin= /path/to/directory/containing/annobin/ -Xclang -plugin-arg-annobin -Xclang option file-name", "clang -fplugin=/usr/lib64/clang/10/lib/annobin.so -Xclang 
-plugin-arg-annobin -Xclang verbose file-name", "annocheck file-name", "annocheck directory-name", "annocheck rpm-package-name", "annocheck rpm-package-name --debug-rpm debuginfo-rpm", "annocheck --enable-built-by", "annocheck --enable-notes", "annocheck --section-size= name", "annocheck --enable-notes --disable-hardened file-name", "objcopy --merge-notes file-name", "cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory", "cd /opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin ln -s annobin.so gcc-annobin.so" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/developing_c_and_cpp_applications_in_rhel_8/index
Chapter 137. AutoRestartStatus schema reference
Chapter 137. AutoRestartStatus schema reference Used in: KafkaConnectorStatus , KafkaMirrorMaker2Status Property Property type Description count integer The number of times the connector or task is restarted. connectorName string The name of the connector being restarted. lastRestartTimestamp string The last time the automatic restart was attempted. The required format is 'yyyy-MM-ddTHH:mm:ssZ' in the UTC time zone.
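For orientation, the following sketch shows roughly how these properties can surface when you inspect a connector resource; the resource name, timestamp, and field values are illustrative, and the exact nesting is defined by the KafkaConnectorStatus or KafkaMirrorMaker2Status schema.

oc get kafkaconnector my-source-connector -o yaml
...
status:
  autoRestart:
    count: 1
    connectorName: my-source-connector
    lastRestartTimestamp: "2024-05-14T10:15:30Z"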
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-AutoRestartStatus-reference
7.138. man-pages
7.138. man-pages 7.138.1. RHBA-2013:0447 - man-pages bug fix and enhancement update An updated man-pages package that fixes numerous bugs and add two enhancements is now available for Red Hat Enterprise Linux 6. The man-pages package provides man (manual) pages from the Linux Documentation Project (LDP). Bug Fixes BZ# 714073 Prior to this update, a manual page for the fattach() function was missing. This update adds the fattach(2) manual page. BZ# 714074 Prior to this update, a manual page for the recvmmsg() call was missing. This update adds the recvmmsg(2) manual page. BZ# 714075 Prior to this update, manual pages for the cciss and hpsa utilities were missing. This update adds the cciss(4) and hpsa(4) manual pages. BZ# 714078 The host.conf(5) manual page contained a description for the unsupported order keyword. This update removes the incorrect description. BZ# 735789 Prior to this update, the clock_gettime(2) , clock_getres(2) , and clock_nanosleep(2) manual pages did not mention the -lrt option. With this update, the description of the -lrt option has been added to the aforementioned manual pages. BZ# 745152 This update adds the description of the single-request-reopen to the resolv.conf(5) manual page. BZ# 745501 With this update, usage of SSSD in the nsswitch.conf file is now described in the nsswitch.conf(5) manual page. BZ# 745521 With this update, the new UMOUNT_NOFOLLOW flag is described in the umount(2) manual page. BZ#745733 Previously, a manual page for the sendmmsg() function was missing. This update adds the sendmmsg(2) manual page. BZ# 752778 Previously, the db(3) manual page was pointing to the non-existent dbopen(3) manual page. When the man db command was issued, the following error message was returned: With this update, the db(3) manual page is removed. BZ#771540 This update adds the missing description of the TCP_CONGESTION socket option to the tcp(7) manual page. BZ# 804003 Descriptions of some socket options were missing in the ip(7) manual page. This update adds these descriptions to the ip(7) manual page. BZ#809564 Prior to this update, the shmat(2) manual page was missing the description for the EIDRM error code. With this update, this description has been added to the shmat(2) manual page. BZ# 822317 The bdflush(2) system call manual page was missing information that this system call is obsolete. This update adds this information to the bdflush(2) manual page. BZ#835679 The nscd.conf(5) manual page was not listing " services " among valid services. With this update, " services " are listed in the nscd.conf(5) manual page as expected. BZ#840791 Previously, the nsswitch.conf(5) manual page lacked information on the search mechanism, particularly about the notfound status. This update provides an improved manual page with added description of notfound . BZ#840796 Prior to this update, the behavior of the connect() call with the local address set to the INADDR_ANY wildcard address was insufficiently described in the ip(7) manual page. Possible duplication of the local port after the call was not acknowledged. With this update, the documentation has been reworked in order to reflect the behavior of the connect() call correctly. BZ#840798 Due to the vague description of the getdents() function in the getdents(2) manual page, the risk of using this function directly was not clear enough. The description has been extended with a warning to prevent incorrect usage of the getdents() function. 
BZ#840805 The nscd.conf(5) manual page was missing descriptions and contained several duplicate entries. With this update, the text has been clarified and redundant entries have been removed. BZ#857163 Previously, the tzset(3) manual page contained an incorrect interval in the description of the start and end format for Daylight Saving Time. Consequently, users thought the number was 1-based rather than 0-based when not using the J option. With this update, the manual page has been corrected. The Julian day can be specified with an interval of 0 to 365 and February 29 is counted in leap years when the J option is not used. BZ# 857962 The description of the /proc/sys/fs/file-nr file in the proc(5) manual page was outdated. This update adds the current information to this manual page. BZ# 858278 The connect(2) manual page in the Error section listed EAGAIN error code instead of EADDRNOTAVAIL error code. This update amends the manual page with correct information. Enhancements BZ# 857162 An update in the close(2) man page explains the interaction between system calls close() and recv() in different threads. BZ# 858240 This update adds the description of the --version switch to the zdump(8) manual page. All users of man-pages are advised to upgrade to this updated package, which fixes these bugs and add these enhancements.
[ "fopen: No such file or directory." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/man-pages
1.5. Starting and Stopping a Directory Server Instance
1.5. Starting and Stopping a Directory Server Instance 1.5.1. Starting and Stopping a Directory Server Instance Using the Command Line Use the dsctl utility to start, stop, or restart an instance: To start the instance: To stop the instance: To restart the instance: Optionally, you can enable Directory Server instances to automatically start when the system boots: For a single instance: For all instances on a server: For further details, see the Managing System Services section in the Red Hat System Administrator's Guide . 1.5.2. Starting and Stopping a Directory Server Instance Using the Web Console As an alternative to the command line, you can use the web console to start, stop, or restart instances. To start, stop, or restart a Directory Server instance: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Click the Actions button and select the action to execute: Start Instance Stop Instance Restart Instance
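If you want to confirm the result of a start, stop, or restart, a quick status check is usually enough; instance_name below is a placeholder for your instance, and the dsctl status subcommand is shown as a convenience that may vary between versions.

# Check the instance through the Directory Server tooling.
dsctl instance_name status

# Or check the corresponding systemd unit directly.
systemctl status dirsrv@instance_name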
[ "dsctl instance_name start", "dsctl instance_name stop", "dsctl instance_name restart", "systemctl enable dirsrv@ instance_name", "systemctl enable dirsrv.target" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/starting_and_stopping-ds
Providing Feedback on Red Hat Documentation
Providing Feedback on Red Hat Documentation We appreciate your input on our documentation. Please let us know how we could make it better. You can submit feedback by filing a ticket in Bugzilla: Navigate to the Bugzilla website. In the Component field, use Documentation . In the Description field, enter your suggestion for improvement. Include a link to the relevant parts of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/configuring_red_hat_satellite_to_use_ansible/providing-feedback-on-red-hat-documentation_ansible
Monitoring
Monitoring OpenShift Container Platform 4.12 Configuring and using the monitoring stack in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring/index
4.4. Results Translation Extension
4.4. Results Translation Extension The JDBCExecutionFactory provides several methods to modify the java.sql.Statement and java.sql.ResultSet interactions, including: Overriding a createXXXExecution method to subclass the corresponding JDBCXXXExecution. The JDBCBaseExecution has protected methods to get the appropriate statement (getStatement, getPreparedStatement, getCallableStatement) and to bind prepared statement values (bindPreparedStatementValues). Retrieving values from the JDBC ResultSet or CallableStatement; see the retrieveValue methods.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/results_translation_extension
Chapter 21. Ensuring the presence of host-based access control rules in IdM using Ansible playbooks
Chapter 21. Ensuring the presence of host-based access control rules in IdM using Ansible playbooks Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. It includes support for Identity Management (IdM). Learn more about Identity Management (IdM) host-based access policies and how to define them using Ansible . 21.1. Host-based access control rules in IdM Host-based access control (HBAC) rules define which users or user groups can access which hosts or host groups by using which services or services in a service group. As a system administrator, you can use HBAC rules to achieve the following goals: Limit access to a specified system in your domain to members of a specific user group. Allow only a specific service to be used to access systems in your domain. By default, IdM is configured with a default HBAC rule named allow_all , which means universal access to every host for every user via every relevant service in the entire IdM domain. You can fine-tune access to different hosts by replacing the default allow_all rule with your own set of HBAC rules. For centralized and simplified access control management, you can apply HBAC rules to user groups, host groups, or service groups instead of individual users, hosts, or services. 21.2. Ensuring the presence of an HBAC rule in IdM using an Ansible playbook Follow this procedure to ensure the presence of a host-based access control (HBAC) rule in Identity Management (IdM) using an Ansible playbook. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The users and user groups you want to use for your HBAC rule exist in IdM. See Managing user accounts using Ansible playbooks and Ensuring the presence of IdM groups and group members using Ansible playbooks for details. The hosts and host groups to which you want to apply your HBAC rule exist in IdM. See Managing hosts using Ansible playbooks and Managing host groups using Ansible playbooks for details. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create your Ansible playbook file that defines the HBAC policy whose presence you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/hbacrule/ensure-hbacrule-allhosts-present.yml file: Run the playbook: Verification Log in to the IdM Web UI as administrator. Navigate to Policy Host-Based-Access-Control HBAC Test . In the Who tab, select idm_user. In the Accessing tab, select client.idm.example.com . In the Via service tab, select sshd . In the Rules tab, select login . In the Run test tab, click the Run test button. If you see ACCESS GRANTED, the HBAC rule is implemented successfully. Additional resources See the README-hbacsvc.md , README-hbacsvcgroup.md , and README-hbacrule.md files in the /usr/share/doc/ansible-freeipa directory. See the playbooks in the subdirectories of the /usr/share/doc/ansible-freeipa/playbooks directory.
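If you prefer to verify from the command line rather than the Web UI, the ipa hbactest utility runs the same simulation and reports whether access would be granted; the user, host, service, and rule names below mirror the example in this procedure and should be replaced with your own values.

# Simulate idm_user accessing client.idm.example.com via sshd against the login rule.
ipa hbactest --user=idm_user --host=client.idm.example.com --service=sshd --rules=login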
[ "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hbacrules hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure idm_user can access client.idm.example.com via the sshd service - ipahbacrule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: login user: idm_user host: client.idm.example.com hbacsvc: - sshd state: present", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-new-hbacrule-present.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/ensuring-the-presence-of-host-based-access-control-rules-in-idm-using-ansible-playbooks_using-ansible-to-install-and-manage-idm
7.17. bridge-utils
7.17. bridge-utils 7.17.1. RHEA-2013:0322 - bridge-utils enhancement update Updated bridge-utils packages that add two enhancements are now available for Red Hat Enterprise Linux 6. The bridge-utils packages contain utilities for configuration of the Linux Ethernet bridge. The Linux Ethernet bridge can be used to connect multiple Ethernet devices together. This connection is fully transparent: hosts connected to one Ethernet device see hosts connected to the other Ethernet devices directly. Enhancements BZ# 676355 The man page was missing the multicast option descriptions. This update adds that information to the man page. BZ#690529 This enhancement adds the missing feature described in the BRCTL(8) man page, which allows the user to get the bridge information for a simple bridge using the "brctl show $BRIDGE" command. All users of bridge-utils are advised to upgrade to these updated packages, which add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/bridge-utils
Chapter 32. JPA
Chapter 32. JPA Since Camel 1.0 Both producer and consumer are supported. The JPA component enables you to store and retrieve Java objects from persistent storage using EJB 3's Java Persistence Architecture (JPA). Java Persistence Architecture (JPA) is a standard interface layer that wraps Object/Relational Mapping (ORM) products such as OpenJPA, Hibernate, TopLink. Add the following dependency to your pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jpa</artifactId> <version>3.20.1.redhat-00056</version> <!-- use the same version as your Camel core version --> </dependency> 32.1. Sending to the endpoint You can store a Java entity bean in a database by sending it to a JPA producer endpoint. The body of the In message is assumed to be an entity bean (that is, a POJO with an @Entity annotation on it) or a collection or array of entity beans. If the body is a List of entities, use entityType=java.util.List as a configuration passed to the producer endpoint. If the body does not contain one of the listed types, put a Message Translator before the endpoint to perform the necessary conversion first. You can use query , namedQuery or nativeQuery for the producer as well. For the values of the parameters , you can use the Simple expression, which allows you to retrieve parameter values from the Message body, headers, and so on. These queries can be used to retrieve a set of data using a SELECT JPQL/SQL statement, as well as to execute bulk updates or deletes using an UPDATE / DELETE JPQL/SQL statement. Note that you need to set useExecuteUpdate to true if you execute UPDATE / DELETE with namedQuery , because Camel does not inspect the named query as it does for query and nativeQuery . 32.2. Consuming from the endpoint Consuming messages from a JPA consumer endpoint removes (or updates) entity beans in the database. This allows you to use a database table as a logical queue: consumers take messages from the queue and then delete/update them to logically remove them from the queue. If you do not wish to delete the entity bean when it has been processed (and when routing is done), you can specify consumeDelete=false on the URI. This will result in the entity being processed on each poll. If you would rather perform some update on the entity to mark it as processed (such as to exclude it from a future query), you can annotate a method with @Consumed , which will be invoked on your entity bean when it has been processed (and when routing is done). You can use @PreConsumed , which will be invoked on your entity bean before it has been processed (before routing). If you are consuming a large number (100K+) of rows and experience OutOfMemory problems, you should set maximumResults to a sensible value. 32.3. URI format For sending to the endpoint, the entityClassName is optional. If specified, it helps the Type Converter to ensure the body is of the correct type. For consuming, the entityClassName is mandatory. 32.4. Configuring Options Camel components are configured on two separate levels: component level endpoint level 32.4.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connection, and so forth. Some components only have a few options, and others may have many.
Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 32.4.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 32.4.3. Component Options The JPA component supports 9 options, which are listed below. Name Description Default Type aliases (common) Maps an alias to a JPA entity class. The alias can then be used in the endpoint URI (instead of the fully qualified class name). Map entityManagerFactory (common) To use the EntityManagerFactory. This is strongly recommended to configure. EntityManagerFactory joinTransaction (common) The camel-jpa component will join transaction by default. You can use this option to turn this off, for example if you use LOCAL_RESOURCE and join transaction doesn't work with your JPA provider. This option can also be set globally on the JpaComponent, instead of having to set it on all endpoints. true boolean sharedEntityManager (common) Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases joinTransaction should be set to false as this is not an EXTENDED EntityManager. false boolean transactionManager (common) To use the PlatformTransactionManager for managing transactions. PlatformTransactionManager transactionStrategy (common) To use the TransactionStrategy for running the operations in a transaction. TransactionStrategy bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 32.4.4. Endpoint Options The JPA endpoint is configured using URI syntax: with the following path and query parameters: 32.4.4.1. Path Parameters (1 parameters) Name Description Default Type entityType (common) Required Entity class name. Class 32.4.4.2. Query Parameters (44 parameters) Name Description Default Type joinTransaction (common) The camel-jpa component will join transaction by default. You can use this option to turn this off, for example if you use LOCAL_RESOURCE and join transaction doesn't work with your JPA provider. This option can also be set globally on the JpaComponent, instead of having to set it on all endpoints. true boolean maximumResults (common) Set the maximum number of results to retrieve on the Query. -1 int namedQuery (common) To use a named query. String nativeQuery (common) To use a custom native query. You may want to use the option resultClass also when using native queries. String persistenceUnit (common) Required The JPA persistence unit used by default. camel String query (common) To use a custom query. String resultClass (common) Defines the type of the returned payload (we will call entityManager.createNativeQuery(nativeQuery, resultClass) instead of entityManager.createNativeQuery(nativeQuery)). Without this option, we will return an object array. Only has an affect when using in conjunction with native query when consuming data. Class consumeDelete (consumer) If true, the entity is deleted after it is consumed; if false, the entity is not deleted. true boolean consumeLockEntity (consumer) Specifies whether or not to set an exclusive lock on each entity bean while processing the results from polling. true boolean deleteHandler (consumer) To use a custom DeleteHandler to delete the row after the consumer is done processing the exchange. DeleteHandler lockModeType (consumer) To configure the lock mode on the consumer. Enum values: READ WRITE OPTIMISTIC OPTIMISTIC_FORCE_INCREMENT PESSIMISTIC_READ PESSIMISTIC_WRITE PESSIMISTIC_FORCE_INCREMENT NONE PESSIMISTIC_WRITE LockModeType maxMessagesPerPoll (consumer) An integer value to define the maximum number of messages to gather per poll. By default, no maximum is set. Can be used to avoid polling many thousands of messages when starting up the server. Set a value of 0 or negative to disable. int preDeleteHandler (consumer) To use a custom Pre-DeleteHandler to delete the row after the consumer has read the entity. DeleteHandler sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean skipLockedEntity (consumer) To configure whether to use NOWAIT on lock and silently skip the entity. false boolean transacted (consumer) Whether to run the consumer in transacted mode, by which all messages will either commit or rollback, when the entire batch has been processed. The default behavior (false) is to commit all the previously successfully processed messages, and only rollback the last failed message. 
false boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern parameters (consumer (advanced)) This key/value mapping is used for building the query parameters. It is expected to be of the generic type java.util.Map where the keys are the named parameters of a given JPA query and the values are their corresponding effective values you want to select for. When it's used for producer, Simple expression can be used as a parameter value. It allows you to retrieve parameter values from the message body, header and etc. Map pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy findEntity (producer) If enabled then the producer will find a single entity by using the message body as key and entityType as the class type. This can be used instead of a query to find a single entity. false boolean flushOnSend (producer) Flushes the EntityManager after the entity bean has been persisted. true boolean remove (producer) Indicates to use entityManager.remove(entity). false boolean useExecuteUpdate (producer) To configure whether to use executeUpdate() when producer executes a query. When you use INSERT, UPDATE or DELETE statement as a named query, you need to specify this option to 'true'. Boolean usePersist (producer) Indicates to use entityManager.persist(entity) instead of entityManager.merge(entity). Note: entityManager.persist(entity) doesn't work for detached entities (where the EntityManager has to execute an UPDATE instead of an INSERT query)!. false boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean usePassedInEntityManager (producer (advanced)) If set to true, then Camel will use the EntityManager from the header JpaConstants.ENTITY_MANAGER instead of the configured entity manager on the component/endpoint. This allows end users to control which entity manager will be in use. 
false boolean entityManagerProperties (advanced) Additional properties for the entity manager to use. Map sharedEntityManager (advanced) Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases joinTransaction should be set to false as this is not an EXTENDED EntityManager. false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 32.5. Message Headers The JPA component supports 2 message header(s), which are listed below: Name Description Default Type CamelEntityManager (common) Constant: ENTITY_MANAGER The JPA EntityManager object. EntityManager CamelJpaParameters (producer) Constant: link: JPA_PARAMETERS_HEADER Alternative way for passing query parameters as an Exchange header. Map 32.6. Configuring EntityManagerFactory It is recommended to configure the JPA component to use a specific EntityManagerFactory instance. If failed to do so each JpaEndpoint will auto create their own instance of EntityManagerFactory which most often is not what you want. 
For example, you can instantiate a JPA component that references the myEMFactory entity manager factory, as follows: <bean id="jpa" class="org.apache.camel.component.jpa.JpaComponent"> <property name="entityManagerFactory" ref="myEMFactory"/> </bean> The JpaComponent automatically looks up the EntityManagerFactory from the Registry, which means you do not need to configure this on the JpaComponent as shown above. You only need to do so if there is ambiguity, in which case Camel will log a WARN. 32.7. Configuring TransactionManager The JpaComponent automatically looks up the TransactionManager from the Registry. If Camel does not find any TransactionManager instance registered, it also looks up the TransactionTemplate and tries to extract the TransactionManager from it. If no TransactionTemplate is available in the registry, each JpaEndpoint auto-creates its own instance of TransactionManager , which is most often not what you want. If more than a single instance of the TransactionManager is found, Camel logs a WARN. In such cases you might want to instantiate and explicitly configure a JPA component that references the myTransactionManager transaction manager, as follows: <bean id="jpa" class="org.apache.camel.component.jpa.JpaComponent"> <property name="entityManagerFactory" ref="myEMFactory"/> <property name="transactionManager" ref="myTransactionManager"/> </bean> 32.8. Using a consumer with a named query For consuming only selected entities, you can use the namedQuery URI query option. First, you have to define the named query in the JPA Entity class: @Entity @NamedQuery(name = "step1", query = "select x from MultiSteps x where x.step = 1") public class MultiSteps { ... } After that, you can define a consumer URI as shown below: from("jpa://org.apache.camel.examples.MultiSteps?namedQuery=step1") .to("bean:myBusinessLogic"); 32.9. Using a consumer with a query For consuming only selected entities, you can use the query URI query option. You only have to define the query option: from("jpa://org.apache.camel.examples.MultiSteps?query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1") .to("bean:myBusinessLogic"); 32.10. Using a consumer with a native query For consuming only selected entities, you can use the nativeQuery URI query option. You only have to define the native query option: from("jpa://org.apache.camel.examples.MultiSteps?nativeQuery=select * from MultiSteps where step = 1") .to("bean:myBusinessLogic"); If you use the native query option, you will receive an object array in the message body. 32.11. Using a producer with a named query To retrieve selected entities or execute a bulk update/delete, you can use the namedQuery URI query option. First, you have to define the named query in the JPA Entity class: @Entity @NamedQuery(name = "step1", query = "select x from MultiSteps x where x.step = 1") public class MultiSteps { ... } After that, you can define a producer URI as shown below: from("direct:namedQuery") .to("jpa://org.apache.camel.examples.MultiSteps?namedQuery=step1"); Note that you need to set the useExecuteUpdate option to true to execute an UPDATE / DELETE statement as a named query. 32.12. Using a producer with a query To retrieve selected entities or execute a bulk update/delete, you can use the query URI query option. You only have to define the query option: from("direct:query") .to("jpa://org.apache.camel.examples.MultiSteps?query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1"); 32.13. 
Using a producer with a native query To retrieve selected entities or execute a bulk update/delete, you can use the nativeQuery URI query option. You only have to define the native query option: from("direct:nativeQuery") .to("jpa://org.apache.camel.examples.MultiSteps?resultClass=org.apache.camel.examples.MultiSteps&nativeQuery=select * from MultiSteps where step = 1"); If you use the native query option without specifying resultClass , you will receive an object array in the message body. 32.14. Using the JPA-Based Idempotent Repository The Idempotent Consumer from the EIP patterns is used to filter out duplicate messages. A JPA-based idempotent repository is provided. To use the JPA-based idempotent repository, complete the following steps. Procedure Set up a persistence-unit in the persistence.xml file. Set up an org.springframework.orm.jpa.JpaTemplate which is used by the org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository . Configure the idempotent repository as org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository . Create the JPA idempotent repository in the Spring XML file as shown below: <camelContext xmlns="http://camel.apache.org/schema/spring"> <route id="JpaMessageIdRepositoryTest"> <from uri="direct:start" /> <idempotentConsumer idempotentRepository="jpaStore"> <header>messageId</header> <to uri="mock:result" /> </idempotentConsumer> </route> </camelContext> When running this Camel component's tests inside your IDE If you run the tests of this component directly inside your IDE, and not through Maven, then you could see exceptions like these: The problem here is that the source has been compiled or recompiled through your IDE and not through Maven, which would enhance the byte-code at build time . To overcome this, you need to enable dynamic byte-code enhancement of OpenJPA . For example, assuming the current OpenJPA version being used in Camel is 2.2.1, to run the tests inside your IDE you would need to pass the following argument to the JVM: 32.15. Spring Boot Auto-Configuration When using jpa with Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jpa-starter</artifactId> </dependency> The component supports 10 options, which are listed below. Name Description Default Type camel.component.jpa.aliases Maps an alias to a JPA entity class. The alias can then be used in the endpoint URI (instead of the fully qualified class name). Map camel.component.jpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.jpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
false Boolean camel.component.jpa.enabled Whether to enable auto configuration of the jpa component. This is enabled by default. Boolean camel.component.jpa.entity-manager-factory To use the EntityManagerFactory. This is strongly recommended to configure. The option is a javax.persistence.EntityManagerFactory type. EntityManagerFactory camel.component.jpa.join-transaction The camel-jpa component will join transaction by default. You can use this option to turn this off, for example if you use LOCAL_RESOURCE and join transaction doesn't work with your JPA provider. This option can also be set globally on the JpaComponent, instead of having to set it on all endpoints. true Boolean camel.component.jpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.jpa.shared-entity-manager Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases joinTransaction should be set to false as this is not an EXTENDED EntityManager. false Boolean camel.component.jpa.transaction-manager To use the PlatformTransactionManager for managing transactions. The option is a org.springframework.transaction.PlatformTransactionManager type. PlatformTransactionManager camel.component.jpa.transaction-strategy To use the TransactionStrategy for running the operations in a transaction. The option is a org.apache.camel.component.jpa.TransactionStrategy type. TransactionStrategy
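As a closing illustration, the following RouteBuilder sketch combines the producer and consumer patterns described in this chapter. It is a minimal, illustrative example rather than part of the component documentation: the SendEmail entity, the direct:store endpoint, and the bean:myBusinessLogic reference are assumed names, so replace them with your own entity classes, endpoints, and beans.

import org.apache.camel.builder.RouteBuilder;

public class JpaExampleRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Producer: the message body is expected to be an @Entity POJO (or a
        // list/array of them); the endpoint persists it to the database.
        from("direct:store")
            .to("jpa://org.apache.camel.examples.SendEmail?usePersist=true");

        // Consumer: poll only the entities matched by the step1 named query,
        // waiting 5 seconds between polls.
        from("jpa://org.apache.camel.examples.MultiSteps?namedQuery=step1&delay=5000")
            .to("bean:myBusinessLogic");
    }
}

Because consumeDelete defaults to true, each MultiSteps row selected by the step1 named query is removed once routing completes; set consumeDelete=false or add an @Consumed method on the entity if you prefer to keep or update the rows instead.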
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jpa</artifactId> <version>3.20.1.redhat-00056</version> <!-- use the same version as your Camel core version --> </dependency>", "jpa:entityClassName[?options]", "jpa:entityType", "<bean id=\"jpa\" class=\"org.apache.camel.component.jpa.JpaComponent\"> <property name=\"entityManagerFactory\" ref=\"myEMFactory\"/> </bean>", "<bean id=\"jpa\" class=\"org.apache.camel.component.jpa.JpaComponent\"> <property name=\"entityManagerFactory\" ref=\"myEMFactory\"/> <property name=\"transactionManager\" ref=\"myTransactionManager\"/> </bean>", "@Entity @NamedQuery(name = \"step1\", query = \"select x from MultiSteps x where x.step = 1\") public class MultiSteps { }", "from(\"jpa://org.apache.camel.examples.MultiSteps?namedQuery=step1\") .to(\"bean:myBusinessLogic\");", "from(\"jpa://org.apache.camel.examples.MultiSteps?query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1\") .to(\"bean:myBusinessLogic\");", "from(\"jpa://org.apache.camel.examples.MultiSteps?nativeQuery=select * from MultiSteps where step = 1\") .to(\"bean:myBusinessLogic\");", "@Entity @NamedQuery(name = \"step1\", query = \"select x from MultiSteps x where x.step = 1\") public class MultiSteps { }", "from(\"direct:namedQuery\") .to(\"jpa://org.apache.camel.examples.MultiSteps?namedQuery=step1\");", "from(\"direct:query\") .to(\"jpa://org.apache.camel.examples.MultiSteps?query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1\");", "from(\"direct:nativeQuery\") .to(\"jpa://org.apache.camel.examples.MultiSteps?resultClass=org.apache.camel.examples.MultiSteps&nativeQuery=select * from MultiSteps where step = 1\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"JpaMessageIdRepositoryTest\"> <from uri=\"direct:start\" /> <idempotentConsumer idempotentRepository=\"jpaStore\"> <header>messageId</header> <to uri=\"mock:result\" /> </idempotentConsumer> </route> </camelContext>", "org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is <openjpa-2.2.1-r422266:1396819 nonfatal user error> org.apache.openjpa.persistence.ArgumentException: This configuration disallows runtime optimization, but the following listed types were not enhanced at build time or at class load time with a javaagent: \"org.apache.camel.examples.SendEmail\". at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:427) at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:371) at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:127) at org.apache.camel.processor.jpa.JpaRouteTest.cleanupRepository(JpaRouteTest.java:96) at org.apache.camel.processor.jpa.JpaRouteTest.createCamelContext(JpaRouteTest.java:67) at org.apache.camel.test.junit5.CamelTestSupport.doSetUp(CamelTestSupport.java:238) at org.apache.camel.test.junit5.CamelTestSupport.setUp(CamelTestSupport.java:208)", "-javaagent:<path_to_your_local_m2_cache>/org/apache/openjpa/openjpa/2.2.1/openjpa-2.2.1.jar", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jpa-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-jpa-component-starter
Chapter 98. Managing DNS records in IdM
Chapter 98. Managing DNS records in IdM This chapter describes how to manage DNS records in Identity Management (IdM). As an IdM administrator, you can add, modify and delete DNS records in IdM. The chapter contains the following sections: DNS records in IdM Adding DNS resource records from the IdM Web UI Adding DNS resource records from the IdM CLI Common ipa dnsrecord-add options Deleting DNS records in the IdM Web UI Deleting an entire DNS record in the IdM Web UI Deleting DNS records in the IdM CLI Prerequisites Your IdM deployment contains an integrated DNS server. For information how to install IdM with integrated DNS, see one of the following links: Installing an IdM server: With integrated DNS, with an integrated CA as the root CA . Installing an IdM server: With integrated DNS, with an external CA as the root CA . 98.1. DNS records in IdM Identity Management (IdM) supports many different DNS record types. The following four are used most frequently: A This is a basic map for a host name and an IPv4 address. The record name of an A record is a host name, such as www . The IP Address value of an A record is an IPv4 address, such as 192.0.2.1 . For more information about A records, see RFC 1035 . AAAA This is a basic map for a host name and an IPv6 address. The record name of an AAAA record is a host name, such as www . The IP Address value is an IPv6 address, such as 2001:DB8::1111 . For more information about AAAA records, see RFC 3596 . SRV Service (SRV) resource records map service names to the DNS name of the server that is providing that particular service. For example, this record type can map a service like an LDAP directory to the server which manages it. The record name of an SRV record has the format _service . _protocol , such as _ldap._tcp . The configuration options for SRV records include priority, weight, port number, and host name for the target service. For more information about SRV records, see RFC 2782 . PTR A pointer record (PTR) adds a reverse DNS record, which maps an IP address to a domain name. Note All reverse DNS lookups for IPv4 addresses use reverse entries that are defined in the in-addr.arpa. domain. The reverse address, in human-readable form, is the exact reverse of the regular IP address, with the in-addr.arpa. domain appended to it. For example, for the network address 192.0.2.0/24 , the reverse zone is 2.0.192.in-addr.arpa . The record name of a PTR must be in the standard format specified in RFC 1035 , extended in RFC 2317 , and RFC 3596 . The host name value must be a canonical host name of the host for which you want to create the record. Note Reverse zones can also be configured for IPv6 addresses, with zones in the .ip6.arpa. domain. For more information about IPv6 reverse zones, see RFC 3596 . When adding DNS resource records, note that many of the records require different data. For example, a CNAME record requires a host name, while an A record requires an IP address. In the IdM Web UI, the fields in the form for adding a new record are updated automatically to reflect what data is required for the currently selected type of record. 98.2. Adding DNS resource records in the IdM Web UI Follow this procedure to add DNS resource records in the Identity Management (IdM) Web UI. Prerequisites The DNS zone to which you want to add a DNS record exists and is managed by IdM. For more information about creating a DNS zone in IdM DNS, see Managing DNS zones in IdM . You are logged in as IdM administrator. 
Procedure In the IdM Web UI, click Network Services DNS DNS Zones . Click the DNS zone to which you want to add a DNS record. In the DNS Resource Records section, click Add to add a new record. Figure 98.1. Adding a New DNS Resource Record Select the type of record to create and fill out the other fields as required. Figure 98.2. Defining a New DNS Resource Record Click Add to confirm the new record. 98.3. Adding DNS resource records from the IdM CLI Follow this procedure to add a DNS resource record of any type from the command-line interface (CLI). Prerequisites The DNS zone to which you want to add a DNS records exists. For more information about creating a DNS zone in IdM DNS, see Managing DNS zones in IdM . You are logged in as IdM administrator. Procedure To add a DNS resource record, use the ipa dnsrecord-add command. The command follows this syntax: In the command above: The zone_name is the name of the DNS zone to which the record is being added. The record_name is an identifier for the new DNS resource record. For example, to add an A type DNS record of host1 to the idm.example.com zone, enter: 98.4. Common ipa dnsrecord-* options You can use the following options when adding, modifying and deleting the most common DNS resource record types in Identity Management (IdM): A (IPv4) AAAA (IPv6) SRV PTR In Bash , you can define multiple entries by listing the values in a comma-separated list inside curly braces, such as --option={val1,val2,val3} . Table 98.1. General Record Options Option Description --ttl = number Sets the time to live for the record. --structured Parses the raw DNS records and returns them in a structured format. Table 98.2. "A" record options Option Description Examples --a-rec = ARECORD Passes a single A record or a list of A records. ipa dnsrecord-add idm.example.com host1 --a-rec=192.168.122.123 Can create a wildcard A record with a given IP address. ipa dnsrecord-add idm.example.com "*" --a-rec=192.168.122.123 [a] --a-ip-address = string Gives the IP address for the record. When creating a record, the option to specify the A record value is --a-rec . However, when modifying an A record, the --a-rec option is used to specify the current value for the A record. The new value is set with the --a-ip-address option. ipa dnsrecord-mod idm.example.com --a-rec 192.168.122.123 --a-ip-address 192.168.122.124 [a] The example creates a wildcard A record with the IP address of 192.0.2.123. Table 98.3. "AAAA" record options Option Description Example --aaaa-rec = AAAARECORD Passes a single AAAA (IPv6) record or a list of AAAA records. ipa dnsrecord-add idm.example.com www --aaaa-rec 2001:db8::1231:5675 --aaaa-ip-address = string Gives the IPv6 address for the record. When creating a record, the option to specify the A record value is --aaaa-rec . However, when modifying an A record, the --aaaa-rec option is used to specify the current value for the A record. The new value is set with the --a-ip-address option. ipa dnsrecord-mod idm.example.com --aaaa-rec 2001:db8::1231:5675 --aaaa-ip-address 2001:db8::1231:5676 Table 98.4. "PTR" record options Option Description Example --ptr-rec = PTRRECORD Passes a single PTR record or a list of PTR records. When adding the reverse DNS record, the zone name used with the ipa dnsrecord-add command is reversed, compared to the usage for adding other DNS records. Typically, the host IP address is the last octet of the IP address in a given network. 
The first example on the right adds a PTR record for server4.idm.example.com with IPv4 address 192.168.122.4. The second example adds a reverse DNS entry to the 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. IPv6 reverse zone for the host server2.example.com with the IP address 2001:DB8::1111 . ipa dnsrecord-add 122.168.192.in-addr.arpa 4 --ptr-rec server4.idm.example.com. USD ipa dnsrecord-add 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. 1.1.1.0.0.0.0.0.0.0.0.0.0.0.0 --ptr-rec server2.idm.example.com. --ptr-hostname = string Gives the host name for the record. Table 98.5. "SRV" Record Options Option Description Example --srv-rec = SRVRECORD Passes a single SRV record or a list of SRV records. In the examples on the right, _ldap._tcp defines the service type and the connection protocol for the SRV record. The --srv-rec option defines the priority, weight, port, and target values. The weight values of 51 and 49 in the examples add up to 100 and represent the probability, in percentages, that a particular record is used. # ipa dnsrecord-add idm.example.com _ldap._tcp --srv-rec="0 51 389 server1.idm.example.com." # ipa dnsrecord-add server.idm.example.com _ldap._tcp --srv-rec="1 49 389 server2.idm.example.com." --srv-priority = number Sets the priority of the record. There can be multiple SRV records for a service type. The priority (0 - 65535) sets the rank of the record; the lower the number, the higher the priority. A service has to use the record with the highest priority first. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="1 49 389 server2.idm.example.com." --srv-priority=0 --srv-weight = number Sets the weight of the record. This helps determine the order of SRV records with the same priority. The set weights should add up to 100, representing the probability (in percentages) that a particular record is used. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="0 49 389 server2.idm.example.com." --srv-weight=60 --srv-port = number Gives the port for the service on the target host. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="0 60 389 server2.idm.example.com." --srv-port=636 --srv-target = string Gives the domain name of the target host. This can be a single period (.) if the service is not available in the domain. Additional resources Run ipa dnsrecord-add --help . 98.5. Deleting DNS records in the IdM Web UI Follow this procedure to delete DNS records in Identity Management (IdM) using the IdM Web UI. Prerequisites You are logged in as IdM administrator. Procedure In the IdM Web UI, click Network Services DNS DNS Zones . Click the zone from which you want to delete a DNS record, for example example.com. . In the DNS Resource Records section, click the name of the resource record. Figure 98.3. Selecting a DNS Resource Record Select the check box by the name of the record type to delete. Click Delete . Figure 98.4. Deleting a DNS Resource Record The selected record type is now deleted. The other configuration of the resource record is left intact. Additional resources Deleting an entire DNS record in the IdM Web UI 98.6. Deleting an entire DNS record in the IdM Web UI Follow this procedure to delete all the records for a particular resource in a zone using the Identity Management (IdM) Web UI. Prerequisites You are logged in as IdM administrator. Procedure In the IdM Web UI, click Network Services DNS DNS Zones . Click the zone from which you want to delete a DNS record, for example zone.example.com. . 
In the DNS Resource Records section, select the check box of the resource record to delete. Click Delete . Figure 98.5. Deleting an Entire Resource Record The entire resource record is now deleted. 98.7. Deleting DNS records in the IdM CLI Follow this procedure to remove DNS records from a zone managed by the Identity Management (IdM) DNS. Prerequisites You are logged in as IdM administrator. Procedure To remove records from a zone, use the ipa dnsrecord-del command and add the -- recordType -rec option together with the record value. For example, to remove an A type record: If you run ipa dnsrecord-del without any options, the command prompts for information about the record to delete. Note that passing the --del-all option with the command removes all associated records for the zone. Additional resources Run the ipa dnsrecord-del --help command. 98.8. Additional resources See Using Ansible to manage DNS records in IdM .
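The following sequence is an illustrative summary of the CLI workflow in this chapter: it adds forward and reverse records for a new host and then verifies them. The host name, zone names, and IP address are placeholders, and the ipa dnsrecord-find and ipa dnsrecord-show verification commands are shown as one possible way to confirm the result; adjust all values to match your own zones.

# Obtain an administrator ticket before running ipa commands
kinit admin
# Forward (A) record for the new host
ipa dnsrecord-add idm.example.com host1 --a-rec=192.168.122.123
# Reverse (PTR) record in the matching in-addr.arpa zone
ipa dnsrecord-add 122.168.192.in-addr.arpa. 123 --ptr-rec=host1.idm.example.com.
# Verify both entries
ipa dnsrecord-find idm.example.com host1
ipa dnsrecord-show 122.168.192.in-addr.arpa. 123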
[ "ipa dnsrecord-add zone_name record_name -- record_type_option=data", "ipa dnsrecord-add idm.example.com host1 --a-rec=192.168.122.123", "ipa dnsrecord-del example.com www --a-rec 192.0.2.1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/managing-dns-records-in-idm_configuring-and-managing-idm
Chapter 6. Managing image streams
Chapter 6. Managing image streams Image streams provide a means of creating and updating container images in an on-going way. As improvements are made to an image, tags can be used to assign new version numbers and keep track of changes. This document describes how image streams are managed. 6.1. Why use imagestreams An image stream and its associated tags provide an abstraction for referencing container images from within OpenShift Container Platform. The image stream and its tags allow you to see what images are available and ensure that you are using the specific image you need even if the image in the repository changes. Image streams do not contain actual image data, but present a single virtual view of related images, similar to an image repository. You can configure builds and deployments to watch an image stream for notifications when new images are added and react by performing a build or deployment, respectively. For example, if a deployment is using a certain image and a new version of that image is created, a deployment could be automatically performed to pick up the new version of the image. However, if the image stream tag used by the deployment or build is not updated, then even if the container image in the container image registry is updated, the build or deployment continues using the previous, presumably known good image. The source images can be stored in any of the following: OpenShift Container Platform's integrated registry. An external registry, for example registry.redhat.io or quay.io. Other image streams in the OpenShift Container Platform cluster. When you define an object that references an image stream tag, such as a build or deployment configuration, you point to an image stream tag and not the repository. When you build or deploy your application, OpenShift Container Platform queries the repository using the image stream tag to locate the associated ID of the image and uses that exact image. The image stream metadata is stored in the etcd instance along with other cluster information. Using image streams has several significant benefits: You can tag, roll back a tag, and quickly deal with images, without having to re-push using the command line. You can trigger builds and deployments when a new image is pushed to the registry. Also, OpenShift Container Platform has generic triggers for other resources, such as Kubernetes objects. You can mark a tag for periodic re-import. If the source image has changed, that change is picked up and reflected in the image stream, which triggers the build or deployment flow, depending upon the build or deployment configuration. You can share images using fine-grained access control and quickly distribute images across your teams. If the source image changes, the image stream tag still points to a known-good version of the image, ensuring that your application does not break unexpectedly. You can configure security around who can view and use the images through permissions on the image stream objects. Users that lack permission to read or list images on the cluster level can still retrieve the images tagged in a project using image streams. 6.2. Configuring image streams An ImageStream object file contains the following elements. 
Imagestream object definition apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/generated-by: OpenShiftNewApp labels: app: ruby-sample-build template: application-template-stibuild name: origin-ruby-sample 1 namespace: test spec: {} status: dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample 2 tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 3 generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 4 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest 5 1 The name of the image stream. 2 Docker repository path where new images can be pushed to add or update them in this image stream. 3 The SHA identifier that this image stream tag currently references. Resources that reference this image stream tag use this identifier. 4 The SHA identifier that this image stream tag previously referenced. Can be used to rollback to an older image. 5 The image stream tag name. 6.3. Image stream images An image stream image points from within an image stream to a particular image ID. Image stream images allow you to retrieve metadata about an image from a particular image stream where it is tagged. Image stream image objects are automatically created in OpenShift Container Platform whenever you import or tag an image into the image stream. You should never have to explicitly define an image stream image object in any image stream definition that you use to create image streams. The image stream image consists of the image stream name and image ID from the repository, delimited by an @ sign: To refer to the image in the ImageStream object example, the image stream image looks like: 6.4. Image stream tags An image stream tag is a named pointer to an image in an image stream. It is abbreviated as istag . An image stream tag is used to reference or retrieve an image for a given image stream and tag. Image stream tags can reference any local or externally managed image. It contains a history of images represented as a stack of all images the tag ever pointed to. Whenever a new or existing image is tagged under particular image stream tag, it is placed at the first position in the history stack. The image previously occupying the top position is available at the second position. This allows for easy rollbacks to make tags point to historical images again. The following image stream tag is from an ImageStream object: Image stream tag with two images in its history kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: my-image-stream # ... tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest # ... Image stream tags can be permanent tags or tracking tags. 
Permanent tags are version-specific tags that point to a particular version of an image, such as Python 3.5. Tracking tags are reference tags that follow another image stream tag and can be updated to change which image they follow, like a symlink. These new levels are not guaranteed to be backwards-compatible. For example, the latest image stream tags that ship with OpenShift Container Platform are tracking tags. This means consumers of the latest image stream tag are updated to the newest level of the framework provided by the image when a new level becomes available. A latest image stream tag to v3.10 can be changed to v3.11 at any time. It is important to be aware that these latest image stream tags behave differently than the Docker latest tag. The latest image stream tag, in this case, does not point to the latest image in the Docker repository. It points to another image stream tag, which might not be the latest version of an image. For example, if the latest image stream tag points to v3.10 of an image, when the 3.11 version is released, the latest tag is not automatically updated to v3.11 , and remains at v3.10 until it is manually updated to point to a v3.11 image stream tag. Note Tracking tags are limited to a single image stream and cannot reference other image streams. You can create your own image stream tags for your own needs. The image stream tag is composed of the name of the image stream and a tag, separated by a colon: For example, to refer to the sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d image in the ImageStream object example earlier, the image stream tag would be: 6.5. Image stream change triggers Image stream triggers allow your builds and deployments to be automatically invoked when a new version of an upstream image is available. For example, builds and deployments can be automatically started when an image stream tag is modified. This is achieved by monitoring that particular image stream tag and notifying the build or deployment when a change is detected. 6.6. Image stream mapping When the integrated registry receives a new image, it creates and sends an image stream mapping to OpenShift Container Platform, providing the image's project, name, tag, and image metadata. Note Configuring image stream mappings is an advanced feature. This information is used to create a new image, if it does not already exist, and to tag the image into the image stream. OpenShift Container Platform stores complete metadata about each image, such as commands, entry point, and environment variables. Images in OpenShift Container Platform are immutable and the maximum name length is 63 characters. 
The following image stream mapping example results in an image being tagged as test/origin-ruby-sample:latest : Image stream mapping object definition apiVersion: image.openshift.io/v1 kind: ImageStreamMapping metadata: creationTimestamp: null name: origin-ruby-sample namespace: test tag: latest image: dockerImageLayers: - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ee1dd2cb6df21971f4af6de0f1d7782b81fb63156801cfde2bb47b4247c23c29 size: 196634330 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ca062656bff07f18bff46be00f40cfbb069687ec124ac0aa038fd676cfaea092 size: 177723024 - name: sha256:63d529c59c92843c395befd065de516ee9ed4995549f8218eac6ff088bfa6b6e size: 55679776 - name: sha256:92114219a04977b5563d7dff71ec4caa3a37a15b266ce42ee8f43dba9798c966 size: 11939149 dockerImageMetadata: Architecture: amd64 Config: Cmd: - /usr/libexec/s2i/run Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. /opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Labels: build-date: 2015-12-23 io.k8s.description: Platform for building and running Ruby 2.2 applications io.k8s.display-name: 172.30.56.218:5000/test/origin-ruby-sample:latest io.openshift.build.commit.author: Ben Parees <[email protected]> io.openshift.build.commit.date: Wed Jan 20 10:14:27 2016 -0500 io.openshift.build.commit.id: 00cadc392d39d5ef9117cbc8a31db0889eedd442 io.openshift.build.commit.message: 'Merge pull request #51 from php-coder/fix_url_and_sti' io.openshift.build.commit.ref: master io.openshift.build.image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e io.openshift.build.source-location: https://github.com/openshift/ruby-hello-world.git io.openshift.builder-base-version: 8d95148 io.openshift.builder-version: 8847438ba06307f86ac877465eadc835201241df io.openshift.s2i.scripts-url: image:///usr/libexec/s2i io.openshift.tags: builder,ruby,ruby22 io.s2i.scripts-url: image:///usr/libexec/s2i license: GPLv2 name: CentOS Base Image vendor: CentOS User: "1001" WorkingDir: /opt/app-root/src Container: 86e9a4a3c760271671ab913616c51c9f3cea846ca524bf07c04a6f6c9e103a76 ContainerConfig: AttachStdout: true Cmd: - /bin/sh - -c - tar -C /tmp -xf - && /usr/libexec/s2i/assemble Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Hostname: ruby-sample-build-1-build Image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e OpenStdin: true StdinOnce: true User: "1001" WorkingDir: /opt/app-root/src Created: 2016-01-29T13:40:00Z DockerVersion: 1.8.2.fc21 Id: 9d7fd5e2d15495802028c569d544329f4286dcd1c9c085ff5699218dbaa69b43 Parent: 57b08d979c86f4500dc8cad639c9518744c8dd39447c055a3517dc9c18d6fccd Size: 441976279 apiVersion: "1.0" kind: DockerImage dockerImageMetadataVersion: "1.0" dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 6.7. Working with image streams The following sections describe how to use image streams and image stream tags. 6.7.1. Getting information about image streams You can get general information about the image stream and detailed information about all the tags it is pointing to. Procedure Get general information about the image stream and detailed information about all the tags it is pointing to: USD oc describe is/<image-name> For example: USD oc describe is/python Example output Name: python Namespace: default Created: About a minute ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 1 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago Get all the information available about particular image stream tag: USD oc describe istag/<image-stream>:<tag-name> For example: USD oc describe istag/python:latest Example output Image Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Docker Image: centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Created: 2 minutes ago Image Size: 251.2 MB (first layer 2.898 MB, last binary layer 72.26 MB) Image Created: 2 weeks ago Author: <none> Arch: amd64 Entrypoint: container-entrypoint Command: /bin/sh -c USDSTI_SCRIPTS_PATH/usage Working Dir: /opt/app-root/src User: 1001 Exposes Ports: 8080/tcp Docker Labels: build-date=20170801 Note More information is output than shown. 6.7.2. Adding tags to an image stream You can add additional tags to image streams. Procedure Add a tag that points to one of the existing tags by using the `oc tag`command: USD oc tag <image-name:tag1> <image-name:tag2> For example: USD oc tag python:3.5 python:latest Example output Tag python:latest set to python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25. Confirm the image stream has two tags, one, 3.5 , pointing at the external container image and another tag, latest , pointing to the same image because it was created based on the first tag. 
USD oc describe is/python Example output Name: python Namespace: default Created: 5 minutes ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 2 latest tagged from python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 5 minutes ago 6.7.3. Adding tags for an external image You can add tags for external images. Procedure Add tags pointing to internal or external images, by using the oc tag command for all tag-related operations: USD oc tag <repository/image> <image-name:tag> For example, this command maps the docker.io/python:3.6.0 image to the 3.6 tag in the python image stream. USD oc tag docker.io/python:3.6.0 python:3.6 Example output Tag python:3.6 set to docker.io/python:3.6.0. If the external image is secured, you must create a secret with credentials for accessing that registry. 6.7.4. Updating image stream tags You can update a tag to reflect another tag in an image stream. Procedure Update a tag: USD oc tag <image-name:tag> <image-name:latest> For example, the following updates the latest tag to reflect the 3.6 tag in an image stream: USD oc tag python:3.6 python:latest Example output Tag python:latest set to python@sha256:438208801c4806548460b27bd1fbcb7bb188273d13871ab43f. 6.7.5. Removing image stream tags You can remove old tags from an image stream. Procedure Remove old tags from an image stream: USD oc tag -d <image-name:tag> For example: USD oc tag -d python:3.6 Example output Deleted tag default/python:3.6 See Removing deprecated image stream tags from the Cluster Samples Operator for more information on how the Cluster Samples Operator handles deprecated image stream tags. 6.7.6. Configuring periodic importing of image stream tags When working with an external container image registry, to periodically re-import an image, for example to get latest security updates, you can use the --scheduled flag. Procedure Schedule importing images: USD oc tag <repository/image> <image-name:tag> --scheduled For example: USD oc tag docker.io/python:3.6.0 python:3.6 --scheduled Example output Tag python:3.6 set to import docker.io/python:3.6.0 periodically. This command causes OpenShift Container Platform to periodically update this particular image stream tag. This period is a cluster-wide setting set to 15 minutes by default. Remove the periodic check, re-run above command but omit the --scheduled flag. This will reset its behavior to default. USD oc tag <repositiory/image> <image-name:tag> 6.8. Importing images and image streams from private registries An image stream can be configured to import tag and image metadata from private image registries requiring authentication. This procedures applies if you change the registry that the Cluster Samples Operator uses to pull content from to something other than registry.redhat.io . Note When importing from insecure or secure registries, the registry URL defined in the secret must include the :80 port suffix or the secret is not used when attempting to import from the registry. 
Procedure You must create a secret object that is used to store your credentials by entering the following command: USD oc create secret generic <secret_name> --from-file=.dockerconfigjson=<file_absolute_path> --type=kubernetes.io/dockerconfigjson After the secret is configured, create the new image stream or enter the oc import-image command: USD oc import-image <imagestreamtag> --from=<image> --confirm During the import process, OpenShift Container Platform picks up the secrets and provides them to the remote party. 6.8.1. Allowing pods to reference images from other secured registries To pull a secured container from other private or secured registries, you must create a pull secret from your container client credentials, such as Docker or Podman, and add it to your service account. Both Docker and Podman use a configuration file to store authentication details to log in to secured or insecure registry: Docker : By default, Docker uses USDHOME/.docker/config.json . Podman : By default, Podman uses USDHOME/.config/containers/auth.json . These files store your authentication information if you have previously logged in to a secured or insecure registry. Note Both Docker and Podman credential files and the associated pull secret can contain multiple references to the same registry if they have unique paths, for example, quay.io and quay.io/<example_repository> . However, neither Docker nor Podman support multiple entries for the exact same registry path. Example config.json file { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io/repository-main":{ "auth":"b3Blb=", "email":"[email protected]" } } } Example pull secret apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: "2021-09-09T19:10:11Z" name: pull-secret namespace: default resourceVersion: "37676" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque Procedure Create a secret from an existing authentication file: For Docker clients using .docker/config.json , enter the following command: USD oc create secret generic <pull_secret_name> \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson For Podman clients using .config/containers/auth.json , enter the following command: USD oc create secret generic <pull_secret_name> \ --from-file=<path/to/.config/containers/auth.json> \ --type=kubernetes.io/podmanconfigjson If you do not already have a Docker credentials file for the secured registry, you can create a secret by running: USD oc create secret docker-registry <pull_secret_name> \ --docker-server=<registry_server> \ --docker-username=<user_name> \ --docker-password=<password> \ --docker-email=<email> To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match the name of the service account the pod uses. The default service account is default : USD oc secrets link default <pull_secret_name> --for=pull 6.9. Importing a manifest list through ImageStreamImport You can use the ImageStreamImport resource to find and import image manifests from other container image registries into the cluster. Individual images or an entire image repository can be imported. 
Use the following procedure to import a manifest list through the ImageStreamImport object with the importMode value. Procedure Create an ImageStreamImport YAML file and set the importMode parameter to PreserveOriginal on the tags that you will import as a manifest list: apiVersion: image.openshift.io/v1 kind: ImageStreamImport metadata: name: app namespace: myapp spec: import: true images: - from: kind: DockerImage name: <registry>/<user_name>/<image_name> to: name: latest referencePolicy: type: Source importPolicy: importMode: "PreserveOriginal" Create the ImageStreamImport by running the following command: USD oc create -f <your_imagestreamimport.yaml> 6.9.1. importMode configuration fields The following table describes the configuration fields available for the importMode value: Parameter Description Legacy The default value for importMode . When active, the manifest list is discarded, and a single sub-manifest is imported. The platform is chosen in the following order of priority: Tag annotations Control plane architecture Linux/AMD64 The first manifest in the list PreserveOriginal When active, the original manifest is preserved. For manifest lists, the manifest list and all of its sub-manifests are imported.
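After the import completes, you can inspect the resulting image stream to confirm that the tag was populated. The following is a minimal verification sketch; it assumes the app image stream in the myapp namespace from the example above and uses only standard oc queries, so adjust the names to your environment:

$ oc describe is/app -n myapp
$ oc get istag app:latest -n myapp -o yaml

For a tag imported with importMode set to PreserveOriginal, the referenced image is expected to resolve to the manifest-list digest rather than to a single sub-manifest; compare the digest in the output with the one reported by your registry if you are unsure.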
[ "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/generated-by: OpenShiftNewApp labels: app: ruby-sample-build template: application-template-stibuild name: origin-ruby-sample 1 namespace: test spec: {} status: dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample 2 tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 3 generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 4 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest 5", "<image-stream-name>@<image-id>", "origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d", "kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: my-image-stream tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest", "<imagestream name>:<tag>", "origin-ruby-sample:latest", "apiVersion: image.openshift.io/v1 kind: ImageStreamMapping metadata: creationTimestamp: null name: origin-ruby-sample namespace: test tag: latest image: dockerImageLayers: - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ee1dd2cb6df21971f4af6de0f1d7782b81fb63156801cfde2bb47b4247c23c29 size: 196634330 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ca062656bff07f18bff46be00f40cfbb069687ec124ac0aa038fd676cfaea092 size: 177723024 - name: sha256:63d529c59c92843c395befd065de516ee9ed4995549f8218eac6ff088bfa6b6e size: 55679776 - name: sha256:92114219a04977b5563d7dff71ec4caa3a37a15b266ce42ee8f43dba9798c966 size: 11939149 dockerImageMetadata: Architecture: amd64 Config: Cmd: - /usr/libexec/s2i/run Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Labels: build-date: 2015-12-23 io.k8s.description: Platform for building and running Ruby 2.2 applications io.k8s.display-name: 172.30.56.218:5000/test/origin-ruby-sample:latest io.openshift.build.commit.author: Ben Parees <[email protected]> io.openshift.build.commit.date: Wed Jan 20 10:14:27 2016 -0500 io.openshift.build.commit.id: 00cadc392d39d5ef9117cbc8a31db0889eedd442 io.openshift.build.commit.message: 'Merge pull request #51 from php-coder/fix_url_and_sti' io.openshift.build.commit.ref: master io.openshift.build.image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e io.openshift.build.source-location: https://github.com/openshift/ruby-hello-world.git io.openshift.builder-base-version: 8d95148 io.openshift.builder-version: 8847438ba06307f86ac877465eadc835201241df io.openshift.s2i.scripts-url: image:///usr/libexec/s2i io.openshift.tags: builder,ruby,ruby22 io.s2i.scripts-url: image:///usr/libexec/s2i license: GPLv2 name: CentOS Base Image vendor: CentOS User: \"1001\" WorkingDir: /opt/app-root/src Container: 86e9a4a3c760271671ab913616c51c9f3cea846ca524bf07c04a6f6c9e103a76 ContainerConfig: AttachStdout: true Cmd: - /bin/sh - -c - tar -C /tmp -xf - && /usr/libexec/s2i/assemble Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Hostname: ruby-sample-build-1-build Image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e OpenStdin: true StdinOnce: true User: \"1001\" WorkingDir: /opt/app-root/src Created: 2016-01-29T13:40:00Z DockerVersion: 1.8.2.fc21 Id: 9d7fd5e2d15495802028c569d544329f4286dcd1c9c085ff5699218dbaa69b43 Parent: 57b08d979c86f4500dc8cad639c9518744c8dd39447c055a3517dc9c18d6fccd Size: 441976279 apiVersion: \"1.0\" kind: DockerImage dockerImageMetadataVersion: \"1.0\" dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d", "oc describe is/<image-name>", "oc describe is/python", "Name: python Namespace: default Created: About a minute ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 1 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago", "oc describe istag/<image-stream>:<tag-name>", "oc describe istag/python:latest", "Image Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Docker Image: centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Created: 2 minutes ago Image Size: 251.2 MB (first layer 2.898 MB, last binary layer 72.26 MB) Image Created: 2 weeks ago Author: <none> Arch: amd64 Entrypoint: container-entrypoint Command: /bin/sh -c USDSTI_SCRIPTS_PATH/usage Working Dir: /opt/app-root/src User: 1001 Exposes Ports: 8080/tcp Docker Labels: build-date=20170801", "oc tag <image-name:tag1> <image-name:tag2>", "oc tag python:3.5 python:latest", "Tag python:latest set to python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25.", "oc describe is/python", "Name: python Namespace: default Created: 5 minutes ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 2 latest tagged from python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 5 minutes ago", "oc tag <repository/image> <image-name:tag>", "oc tag docker.io/python:3.6.0 python:3.6", "Tag python:3.6 set to docker.io/python:3.6.0.", "oc tag <image-name:tag> <image-name:latest>", "oc tag python:3.6 python:latest", "Tag python:latest set to python@sha256:438208801c4806548460b27bd1fbcb7bb188273d13871ab43f.", "oc tag -d <image-name:tag>", "oc tag -d python:3.6", "Deleted tag default/python:3.6", "oc tag <repository/image> <image-name:tag> --scheduled", "oc tag docker.io/python:3.6.0 python:3.6 --scheduled", "Tag python:3.6 set to import docker.io/python:3.6.0 periodically.", "oc tag <repositiory/image> <image-name:tag>", "oc create secret generic <secret_name> --from-file=.dockerconfigjson=<file_absolute_path> --type=kubernetes.io/dockerconfigjson", "oc import-image <imagestreamtag> --from=<image> --confirm", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email 
protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io/repository-main\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: \"2021-09-09T19:10:11Z\" name: pull-secret namespace: default resourceVersion: \"37676\" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque", "oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "oc create secret generic <pull_secret_name> --from-file=<path/to/.config/containers/auth.json> --type=kubernetes.io/podmanconfigjson", "oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>", "oc secrets link default <pull_secret_name> --for=pull", "apiVersion: image.openshift.io/v1 kind: ImageStreamImport metadata: name: app namespace: myapp spec: import: true images: - from: kind: DockerImage name: <registry>/<user_name>/<image_name> to: name: latest referencePolicy: type: Source importPolicy: importMode: \"PreserveOriginal\"", "oc create -f <your_imagestreamimport.yaml>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/images/managing-image-streams
Chapter 48. Storage
Chapter 48. Storage Multi-queue I/O scheduling for SCSI Red Hat Enterprise Linux 7 includes a new multiple-queue I/O scheduling mechanism for block devices known as blk-mq. The scsi-mq package allows the Small Computer System Interface (SCSI) subsystem to make use of this new queuing mechanism. This functionality is provided as a Technology Preview and is not enabled by default. To enable it, add scsi_mod.use_blk_mq=Y to the kernel command line. Although blk-mq is intended to offer improved performance, particularly for low-latency devices, it is not guaranteed to always provide better performance. In particular, in some cases, enabling scsi-mq can result in significantly worse performance, especially on systems with many CPUs. (BZ#1109348) Targetd plug-in from the libStorageMgmt API Since Red Hat Enterprise Linux 7.1, storage array management with libStorageMgmt, a storage-array-independent API, has been fully supported. The provided API is stable, consistent, and allows developers to programmatically manage different storage arrays and utilize the hardware-accelerated features provided. System administrators can also use libStorageMgmt to manually configure storage and to automate storage management tasks with the included command-line interface. The Targetd plug-in is not fully supported and remains a Technology Preview. (BZ#1119909) SCSI-MQ as a Technology Preview in the qla2xxx and lpfc drivers The qla2xxx driver updated in Red Hat Enterprise Linux 7.4 can enable the use of SCSI-MQ (multiqueue) with the ql2xmqsupport=1 module parameter. The default value is 0 (disabled). The SCSI-MQ functionality is provided as a Technology Preview when used with the qla2xxx or the lpfc drivers. Note that recent performance testing at Red Hat with async IO over Fibre Channel adapters using SCSI-MQ has shown significant performance degradation under certain conditions. (BZ#1414957) NVMe/FC available as a Technology Preview in Qlogic adapters using the qla2xxx driver The NVMe over Fibre Channel (NVMe/FC) transport type is available as a Technology Preview in Qlogic adapters using the qla2xxx driver. NVMe/FC is an additional fabric transport type for the Nonvolatile Memory Express (NVMe) protocol, in addition to the Remote Direct Memory Access (RDMA) protocol that was previously introduced in Red Hat Enterprise Linux. NVMe/FC provides a higher-performance, lower-latency I/O protocol over existing Fibre Channel infrastructure. This is especially important with solid-state storage arrays, because it allows the performance benefits of NVMe storage to be passed through the fabric transport rather than being encapsulated in a different protocol, SCSI. Note that since Red Hat Enterprise Linux 7.6, NVMe/FC is fully supported with Broadcom Emulex Fibre Channel 32Gbit adapters using the lpfc driver. See the restrictions listed in the New Features part. (BZ#1387768, BZ#1454386)
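As a rough illustration of how the scsi-mq and qla2xxx Technology Preview options described above could be enabled, the following shell sketch uses grubby to add the kernel parameter and a modprobe.d drop-in for the module option. The grubby invocation and the drop-in file name are assumptions rather than documented requirements, so verify them against your boot loader and driver documentation; a reboot is required for either change to take effect.

# Add the blk-mq kernel parameter to all installed kernels (assumes a grubby-managed boot loader)
grubby --update-kernel=ALL --args="scsi_mod.use_blk_mq=Y"

# Enable SCSI-MQ in the qla2xxx driver through a module option (illustrative file name)
echo "options qla2xxx ql2xmqsupport=1" > /etc/modprobe.d/qla2xxx.conf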
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/technology_previews_storage
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_advisor_service_reports_with_fedramp/proc-providing-feedback-on-redhat-documentation
B.17.2.4. Typographical errors in tags
B.17.2.4. Typographical errors in tags Symptom The following error message appears: Investigation XML errors are easily caused by a simple typographical error. This error message highlights the XML error - in this case, an extra white space within the word type - with a pointer. These XML examples will not parse correctly because of typographical errors such as a missing special character, or an additional character: Solution To identify the problematic tag, read the error message for the context of the file, and locate the error with the pointer. Correct the XML and save the changes.
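Before restarting the guest, you can check the corrected file for well-formedness. This is a minimal sketch that assumes the guest definition is saved as name_of_guest.xml; xmllint only checks XML syntax, while virt-xml-validate, if it is available from your libvirt client packages, also validates the file against the libvirt domain schema.

# Check that the XML is well formed; no output means no syntax errors were found
xmllint --noout name_of_guest.xml

# Optionally validate against the libvirt domain schema
virt-xml-validate name_of_guest.xml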
[ "error: (name_of_guest.xml):1: Specification mandate value for attribute ty <domain ty pe='kvm'> -----------^", "<domain ty pe='kvm'>", "<domain type 'kvm'>", "<dom#ain type='kvm'>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/sec-app_xml_errors-typos_in_tags
6.3. Resizing a btrfs File System
6.3. Resizing a btrfs File System It is not possible to resize a btrfs file system but it is possible to resize each of the devices it uses. If there is only one device in use then this works the same as resizing the file system. If there are multiple devices in use then they must be manually resized to achieve the desired result. Note The unit size is not case specific; it accepts both G or g for GiB. The command does not accept t for terabytes or p for petabytes. It only accepts k , m , and g . Enlarging a btrfs File System To enlarge the file system on a single device, use the command: For example: To enlarge a multi-device file system, the device to be enlarged must be specified. First, show all devices that have a btrfs file system at a specified mount point: For example: Then, after identifying the devid of the device to be enlarged, use the following command: For example: Note The amount can also be max instead of a specified amount. This will use all remaining free space on the device. Shrinking a btrfs File System To shrink the file system on a single device, use the command: For example: To shrink a multi-device file system, the device to be shrunk must be specified. First, show all devices that have a btrfs file system at a specified mount point: For example: Then, after identifying the devid of the device to be shrunk, use the following command: For example: Set the File System Size To set the file system to a specific size on a single device, use the command: For example: To set the file system size of a multi-device file system, the device to be changed must be specified. First, show all devices that have a btrfs file system at the specified mount point: For example: Then, after identifying the devid of the device to be changed, use the following command: For example:
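Before shrinking a device, it can help to confirm how much space is actually allocated so that the new size is not smaller than the data already stored. The following is a minimal pre-check sketch using the same example mount point as above:

# Show space usage per allocation type for the file system
btrfs filesystem df /btrfstest

# Show per-device allocation so you can pick the devid to resize
btrfs filesystem show /btrfstest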
[ "btrfs filesystem resize amount / mount-point", "btrfs filesystem resize +200M /btrfssingle Resize '/btrfssingle' of '+200M'", "btrfs filesystem show /mount-point", "btrfs filesystem show /btrfstest Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39 Total devices 4 FS bytes used 192.00KiB devid 1 size 1.00GiB used 224.75MiB path /dev/vdc devid 2 size 524.00MiB used 204.75MiB path /dev/vdd devid 3 size 1.00GiB used 8.00MiB path /dev/vde devid 4 size 1.00GiB used 8.00MiB path /dev/vdf Btrfs v3.16.2", "btrfs filesystem resize devid : amount /mount-point", "btrfs filesystem resize 2:+200M /btrfstest Resize '/btrfstest/' of '2:+200M'", "btrfs filesystem resize amount / mount-point", "btrfs filesystem resize -200M /btrfssingle Resize '/btrfssingle' of '-200M'", "btrfs filesystem show /mount-point", "btrfs filesystem show /btrfstest Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39 Total devices 4 FS bytes used 192.00KiB devid 1 size 1.00GiB used 224.75MiB path /dev/vdc devid 2 size 524.00MiB used 204.75MiB path /dev/vdd devid 3 size 1.00GiB used 8.00MiB path /dev/vde devid 4 size 1.00GiB used 8.00MiB path /dev/vdf Btrfs v3.16.2", "btrfs filesystem resize devid : amount /mount-point", "btrfs filesystem resize 2:-200M /btrfstest Resize '/btrfstest' of '2:-200M'", "btrfs filesystem resize amount / mount-point", "btrfs filesystem resize 700M /btrfssingle Resize '/btrfssingle' of '700M'", "btrfs filesystem show / mount-point", "btrfs filesystem show /btrfstest Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39 Total devices 4 FS bytes used 192.00KiB devid 1 size 1.00GiB used 224.75MiB path /dev/vdc devid 2 size 724.00MiB used 204.75MiB path /dev/vdd devid 3 size 1.00GiB used 8.00MiB path /dev/vde devid 4 size 1.00GiB used 8.00MiB path /dev/vdf Btrfs v3.16.2", "btrfs filesystem resize devid : amount /mount-point", "btrfs filesystem resize 2:300M /btrfstest Resize '/btrfstest' of '2:300M'" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/resizing-btrfs
20.4. Configuration Examples
20.4. Configuration Examples 20.4.1. MariaDB Changing Database Location When using Red Hat Enterprise Linux, the default location for MariaDB to store its database is /var/lib/mysql/ . This is where SELinux expects it to be by default, and hence this area is already labeled appropriately for you, using the mysqld_db_t type. The location where the database is stored can be changed depending on individual environment requirements or preferences, however it is important that SELinux is aware of this new location; that it is labeled accordingly. This example explains how to change the location of a MariaDB database and then how to label the new location so that SELinux can still provide its protection mechanisms to the new area based on its contents. Note that this is an example only and demonstrates how SELinux can affect MariaDB. Comprehensive documentation of MariaDB is beyond the scope of this document. See the official MariaDB documentation for further details. This example assumes that the mariadb-server and setroubleshoot-server packages are installed, that the auditd service is running, and that there is a valid database in the default location of /var/lib/mysql/ . View the SELinux context of the default database location for mysql : This shows mysqld_db_t which is the default context element for the location of database files. This context will have to be manually applied to the new database location that will be used in this example in order for it to function properly. Enter the following command and enter the mysqld root password to show the available databases: Stop the mariadb.service service: Create a new directory for the new location of the database(s). In this example, /mysql/ is used: Copy the database files from the old location to the new location: Change the ownership of this location to allow access by the mysql user and group. This sets the traditional Unix permissions which SELinux will still observe: Enter the following command to see the initial context of the new directory: The context usr_t of this newly created directory is not currently suitable to SELinux as a location for MariaDB database files. Once the context has been changed, MariaDB will be able to function properly in this area. Open the main MariaDB configuration file /etc/my.cnf with a text editor and modify the datadir option so that it refers to the new location. In this example, the value that should be entered is /mysql : Save this file and exit. Start mariadb.service . The service should fail to start, and a denial message will be logged to the /var/log/messages file: However, if the audit daemon is running alongside the setroubleshoot service, the denial will be logged to the /var/log/audit/audit.log file instead: The reason for this denial is that /mysql/ is not labeled correctly for MariaDB data files. SELinux is stopping MariaDB from having access to the content labeled as usr_t . Perform the following steps to resolve this problem: Enter the following command to add a context mapping for /mysql/ . Note that the semanage utility is not installed by default. If it is missing on your system, install the policycoreutils-python package. 
This mapping is written to the /etc/selinux/targeted/contexts/files/file_contexts.local file: Now use the restorecon utility to apply this context mapping to the running system: Now that the /mysql/ location has been labeled with the correct context for MariaDB, mysqld starts: Confirm the context has changed for /mysql/ : The location has been changed and labeled, and mysqld has started successfully. At this point all running services should be tested to confirm normal operation.
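As a final check, you can confirm that no new SELinux denials are being generated for mysqld and that the databases are reachable from the new location. This is a minimal verification sketch; it assumes the auditd service and the packages listed in the prerequisites are still in place.

~]# ausearch -m AVC -ts recent -c mysqld
~]# mysqlshow -u root -p

If ausearch returns no matches and mysqlshow lists the expected databases, the relocation and relabeling were successful.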
[ "~]# ls -lZ /var/lib/mysql drwx------. mysql mysql system_u:object_r: mysqld_db_t :s0 mysql", "~]# mysqlshow -u root -p Enter password: ******* +--------------------+ | Databases | +--------------------+ | information_schema | | mysql | | test | | wikidb | +--------------------+", "~]# systemctl stop mariadb.service", "~]# mkdir -p /mysql", "~]# cp -R /var/lib/mysql/* /mysql/", "~]# chown -R mysql:mysql /mysql", "~]# ls -lZ /mysql drwxr-xr-x. mysql mysql unconfined_u:object_r: usr_t :s0 mysql", "[mysqld] datadir=/mysql", "~]# systemctl start mariadb.service Job for mariadb.service failed. See 'systemctl status mariadb.service' and 'journalctl -xn' for details.", "SELinux is preventing /usr/libexec/mysqld \"write\" access on /mysql. For complete SELinux messages. run sealert -l b3f01aff-7fa6-4ebe-ad46-abaef6f8ad71", "~]# semanage fcontext -a -t mysqld_db_t \"/mysql(/.*)?\"", "~]# grep -i mysql /etc/selinux/targeted/contexts/files/file_contexts.local /mysql(/.*)? system_u:object_r:mysqld_db_t:s0", "~]# restorecon -R -v /mysql", "~]# systemctl start mariadb.service", "~]USD ls -lZ /mysql drwxr-xr-x. mysql mysql system_u:object_r: mysqld_db_t :s0 mysql" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-Managing_Confined_Services-MariaDB-Configuration_Examples
Migrating from version 3 to 4
Migrating from version 3 to 4 OpenShift Container Platform 4.9 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/migrating_from_version_3_to_4/index
7.11. Creating a Cloned Virtual Machine Based on a Template
7.11. Creating a Cloned Virtual Machine Based on a Template Cloned virtual machines are based on templates and inherit the settings of the template. A cloned virtual machine does not depend on the template on which it was based after it has been created. This means the template can be deleted if no other dependencies exist. Note If you clone a virtual machine from a template, the name of the template on which that virtual machine was based is displayed in the General tab of the Edit Virtual Machine window for that virtual machine. If you change the name of that template, the name of the template in the General tab will also be updated. However, if you delete the template from the Manager, the original name of that template will be displayed instead. Cloning a Virtual Machine Based on a Template Click Compute Virtual Machines . Click New . Select the Cluster on which the virtual machine will run. Select a template from the Based on Template drop-down menu. Enter a Name , Description and any Comments . You can accept the default values inherited from the template in the rest of the fields, or change them if required. Click the Resource Allocation tab. Select the Clone radio button in the Storage Allocation area. Select the disk format from the Format drop-down list. This affects the speed of the clone operation and the amount of disk space the new virtual machine initially requires. QCOW2 (Default) Faster clone operation Optimized use of storage capacity Disk space allocated only as required Raw Slower clone operation Optimized virtual machine read and write operations All disk space requested in the template is allocated at the time of the clone operation Use the Target drop-down menu to select the storage domain on which the virtual machine's virtual disk will be stored. Click OK . Note Cloning a virtual machine may take some time. A new copy of the template's disk must be created. During this time, the virtual machine's status is first Image Locked , then Down . The virtual machine is created and displayed in the Virtual Machines tab. You can now assign users to it, and can begin using it when the clone operation is complete.
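If you prefer to automate this instead of using the Administration Portal, the clone operation can typically also be driven through the oVirt/RHV Ansible modules. The following task is only a rough sketch: the module name and the clone, template, cluster, and storage_domain parameters are assumptions based on common ovirt_vm usage, so check the module reference shipped with your RHV version before relying on it.

# Hypothetical playbook task: clone a virtual machine from a template (parameter names are assumptions)
- name: Clone a virtual machine from a template
  ovirt_vm:
    auth: "{{ ovirt_auth }}"       # credentials gathered earlier, for example with the ovirt_auth module
    name: my-cloned-vm             # hypothetical name for the new virtual machine
    cluster: Default               # cluster the virtual machine will run on
    template: my-template          # template to clone from
    clone: yes                     # copy the template disks so the clone is independent of the template
    storage_domain: data_domain    # hypothetical target storage domain for the cloned disks
    state: present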
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/creating_a_cloned_virtual_machine_based_on_a_template
Chapter 91. Netty HTTP
Chapter 91. Netty HTTP Since Camel 2.14 Both producer and consumer are supported The Netty HTTP component is an extension to the Netty component that facilitates HTTP transport with Netty. NOTE Stream: Netty is stream based, which means the input it receives is submitted to Camel as a stream. That means you will only be able to read the content of the stream once. If you find a situation where the message body appears to be empty or you need to access the data multiple times (for example, when doing multicasting or redelivery error handling), you should use Stream caching or convert the message body to a String, which is safe to re-read multiple times. Notice that Netty HTTP reads the entire stream into memory using io.netty.handler.codec.http.HttpObjectAggregator to build the full HTTP message, but the resulting message is still a stream-based message that is readable only once. 91.1. Dependencies When using netty-http with the Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-netty-http-starter</artifactId> </dependency> 91.2. URI format The URI scheme for a netty component is as follows NOTE Query parameters vs endpoint options You may be wondering how Camel recognizes URI query parameters and endpoint options. For example, you might create an endpoint URI as follows: netty-http:http://example.com?myParam=myValue&compression=true . In this example, myParam is the HTTP parameter, while compression is the Camel endpoint option. The strategy used by Camel in such situations is to resolve the available endpoint options and remove them from the URI. This means that, for the discussed example, the HTTP request sent by the Netty HTTP producer to the endpoint will look as follows: http://example.com?myParam=myValue , because the compression endpoint option is resolved and removed from the target URL. Also keep in mind that you cannot specify endpoint options using dynamic headers (like CamelHttpQuery ). Endpoint options can be specified only at the endpoint URI definition level (such as in to or from DSL elements). Important This component inherits all the options from Netty . Notice that some options from Netty are not applicable when using this Netty HTTP component, such as options related to UDP transport. 91.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 91.3.1. Configuring component options The component level is the highest level, which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you often need to configure only a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 91.3.2. Configuring endpoint options Endpoints are where you do most of your configuration, because endpoints often have many options that let you configure exactly what you need the endpoint to do, as the sketch below illustrates.
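For illustration, here is a minimal Java DSL sketch of a netty-http consumer endpoint configured through URI options. The /orders path, the port, and the constant reply body are hypothetical, while muteException is one of the endpoint options listed in the tables below.

import org.apache.camel.builder.RouteBuilder;

public class NettyHttpOrdersRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Expose a simple HTTP endpoint; muteException=true hides stack traces in error responses
        from("netty-http:http://0.0.0.0:8080/orders?muteException=true")
            .log("Received ${header.CamelHttpMethod} request for ${header.CamelHttpPath}")
            .transform().constant("OK");
    }
}

Configuring the endpoint directly in the URI like this keeps the route self-describing, which is why the rest of this chapter focuses on the available component and endpoint options.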
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 91.4. Component Options The Netty HTTP component supports 80 options, which are listed below. Name Description Default Type configuration (common) To use the NettyConfiguration as configuration when creating endpoints. NettyConfiguration disconnect (common) Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false boolean keepAlive (common) Setting to ensure socket is not closed due to inactivity. true boolean reuseAddress (common) Setting to facilitate socket multiplexing. true boolean reuseChannel (common) This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false boolean sync (common) Setting to set endpoint as one-way or request-response. true boolean tcpNoDelay (common) Setting to improve TCP protocol performance. true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean broadcast (consumer) Setting to choose Multicast over UDP. false boolean clientMode (consumer) If the clientMode is true, netty consumer will connect the address as a TCP client. false boolean muteException (consumer) If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. false boolean reconnect (consumer) Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled. true boolean reconnectInterval (consumer) Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection. 10000 int backlog (consumer (advanced)) Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. int bossCount (consumer (advanced)) When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. 
User can use this option to override the default bossCount from Netty. 1 int bossGroup (consumer (advanced)) Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. EventLoopGroup disconnectOnNoReply (consumer (advanced)) If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true boolean executorService (consumer (advanced)) To use the given EventExecutorGroup. EventExecutorGroup maximumPoolSize (consumer (advanced)) Sets a maximum thread pool size for the netty consumer ordered thread pool. The default size is 2 x cpu_core plus 1. Setting this value to eg 10 will then use 10 threads unless 2 x cpu_core plus 1 is a higher value, which then will override and be used. For example if there are 8 cores, then the consumer thread pool will be 17. This thread pool is used to route messages received from Netty by Camel. We use a separate thread pool to ensure ordering of messages and also in case some messages will block, then nettys worker threads (event loop) wont be affected. int nettyServerBootstrapFactory (consumer (advanced)) To use a custom NettyServerBootstrapFactory. NettyServerBootstrapFactory networkInterface (consumer (advanced)) When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. String noReplyLogLevel (consumer (advanced)) If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverClosedChannelExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. Enum values: TRACE DEBUG INFO WARN ERROR OFF DEBUG LoggingLevel serverExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an exception then its logged using this logging level. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverInitializerFactory (consumer (advanced)) To use a custom ServerInitializerFactory. ServerInitializerFactory usingExecutorService (consumer (advanced)) Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true boolean connectTimeout (producer) Time to wait for a socket connection to be available. Value is in milliseconds. 10000 int lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean requestTimeout (producer) Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. 
long clientInitializerFactory (producer (advanced)) To use a custom ClientInitializerFactory. ClientInitializerFactory correlationManager (producer (advanced)) To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. NettyCamelStateCorrelationManager lazyChannelCreation (producer (advanced)) Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true boolean producerPoolBlockWhenExhausted (producer (advanced)) Sets the value for the blockWhenExhausted configuration attribute. It determines whether to block when the borrowObject() method is invoked when the pool is exhausted (the maximum number of active objects has been reached). true boolean producerPoolEnabled (producer (advanced)) Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true boolean producerPoolMaxIdle (producer (advanced)) Sets the cap on the number of idle instances in the pool. 100 int producerPoolMaxTotal (producer (advanced)) Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 int producerPoolMaxWait (producer (advanced)) Sets the maximum duration (value in millis) the borrowObject() method should block before throwing an exception when the pool is exhausted and producerPoolBlockWhenExhausted is true. When less than 0, the borrowObject() method may block indefinitely. -1 long producerPoolMinEvictableIdle (producer (advanced)) Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 long producerPoolMinIdle (producer (advanced)) Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. int udpConnectionlessSending (producer (advanced)) This option supports connection less udp sending which is a real fire and forget. A connected udp send receive the PortUnreachableException if no one is listen on the receiving port. 
false boolean useByteBuf (producer (advanced)) If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. false boolean hostnameVerification ( security) To enable/disable hostname verification on SSLEngine. false boolean allowSerializedHeaders (advanced) Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean channelGroup (advanced) To use a explicit ChannelGroup. ChannelGroup headerFilterStrategy (advanced) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers. HeaderFilterStrategy nativeTransport (advanced) Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false boolean nettyHttpBinding (advanced) To use a custom org.apache.camel.component.netty.http.NettyHttpBinding for binding to/from Netty and Camel Message API. NettyHttpBinding options (advanced) Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map receiveBufferSize (advanced) The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 int receiveBufferSizePredictor (advanced) Configures the buffer size predictor. See details at Jetty documentation and this mail thread. int sendBufferSize (advanced) The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 int transferExchange (advanced) Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean udpByteArrayCodec (advanced) For UDP only. If enabled the using byte array codec instead of Java serialization protocol. false boolean unixDomainSocketPath (advanced) Path to unix domain socket to use instead of inet socket. Host and port parameters will not be used, however required. It is ok to set dummy values for them. Must be used with nativeTransport=true and clientMode=false. String workerCount (advanced) When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. int workerGroup (advanced) To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. 
EventLoopGroup allowDefaultCodec (codec) The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. true boolean autoAppendDelimiter (codec) Whether or not to auto append missing end delimiter when sending using the textline codec. true boolean decoderMaxLineLength (codec) The max line length to use for the textline codec. 1024 int decoders (codec) A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String delimiter (codec) The delimiter to use for the textline codec. Possible values are LINE and NULL. Enum values: LINE NULL LINE TextLineDelimiter encoders (codec) A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String encoding (codec) The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. String textline (codec) Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default. false boolean enabledProtocols (security) Which protocols to enable when using SSL. TLSv1.2,TLSv1.3 String keyStoreFile (security) Client side certificate keystore to be used for encryption. File keyStoreFormat (security) Keystore format to be used for payload encryption. Defaults to JKS if not set. String keyStoreResource (security) Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String needClientAuth (security) Configures whether the server needs client authentication when using SSL. false boolean passphrase (security) Password setting to use in order to encrypt/decrypt payloads sent using SSH. String securityConfiguration (security) Refers to a org.apache.camel.component.netty.http.NettyHttpSecurityConfiguration for configuring secure web resources. NettyHttpSecurityConfiguration securityProvider (security) Security provider to be used for payload encryption. Defaults to SunX509 if not set. String ssl (security) Setting to specify whether SSL encryption is applied to this endpoint. false boolean sslClientCertHeaders (security) When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false boolean sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters sslHandler (security) Reference to a class that could be used to return an SSL Handler. SslHandler trustStoreFile (security) Server side certificate keystore to be used for encryption. File trustStoreResource (security) Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. 
String useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean 91.5. Endpoint Options The Netty HTTP endpoint is configured using URI syntax: with the following path and query parameters: 91.5.1. Path Parameters (4 parameters) Name Description Default Type protocol (common) Required The protocol to use which is either http, https or proxy - a consumer only option. Enum values: http https String host (common) Required The local hostname such as localhost, or 0.0.0.0 when being a consumer. The remote HTTP server hostname when using producer. String port (common) The host port number. int path (common) Resource path. String 91.5.2. Query Parameters (85 parameters) Name Description Default Type bridgeEndpoint (common) If the option is true, the producer will ignore the NettyHttpConstants.HTTP_URI header, and use the endpoint's URI for request. You may also set the throwExceptionOnFailure to be false to let the producer send all the fault response back. The consumer working in the bridge mode will skip the gzip compression and WWW URL form encoding (by adding the Exchange.SKIP_GZIP_ENCODING and Exchange.SKIP_WWW_FORM_URLENCODED headers to the consumed exchange). false boolean disconnect (common) Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false boolean keepAlive (common) Setting to ensure socket is not closed due to inactivity. true boolean reuseAddress (common) Setting to facilitate socket multiplexing. true boolean reuseChannel (common) This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false boolean sync (common) Setting to set endpoint as one-way or request-response. true boolean tcpNoDelay (common) Setting to improve TCP protocol performance. true boolean matchOnUriPrefix (consumer) Whether or not Camel should try to find a target consumer by matching the URI prefix if no exact match is found. false boolean muteException (consumer) If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. false boolean send503whenSuspended (consumer) Whether to send back HTTP status code 503 when the consumer has been suspended. If the option is false then the Netty Acceptor is unbound when the consumer is suspended, so clients cannot connect anymore. true boolean backlog (consumer (advanced)) Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. int bossCount (consumer (advanced)) When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. 
1 int bossGroup (consumer (advanced)) Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. EventLoopGroup bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean chunkedMaxContentLength (consumer (advanced)) Value in bytes the max content length per chunked frame received on the Netty HTTP server. 1048576 int compression (consumer (advanced)) Allow using gzip/deflate for compression on the Netty HTTP server if the client supports it from the HTTP headers. false boolean disconnectOnNoReply (consumer (advanced)) If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern httpMethodRestrict (consumer (advanced)) To disable HTTP methods on the Netty HTTP consumer. You can specify multiple separated by comma. String logWarnOnBadRequest (consumer (advanced)) Whether Netty HTTP server should log a WARN if decoding the HTTP request failed and a HTTP Status 400 (bad request) is returned. true boolean mapHeaders (consumer (advanced)) If this option is enabled, then during binding from Netty to Camel Message then the headers will be mapped as well (eg added as header to the Camel Message as well). You can turn off this option to disable this. The headers can still be accessed from the org.apache.camel.component.netty.http.NettyHttpMessage message with the method getHttpRequest() that returns the Netty HTTP request io.netty.handler.codec.http.HttpRequest instance. true boolean maxChunkSize (consumer (advanced)) The maximum length of the content or each chunk. If the content length (or the length of each chunk) exceeds this value, the content or chunk will be split into multiple io.netty.handler.codec.http.HttpContents whose length is maxChunkSize at maximum. See io.netty.handler.codec.http.HttpObjectDecoder. 8192 int maxHeaderSize (consumer (advanced)) The maximum length of all headers. If the sum of the length of each header exceeds this value, a io.netty.handler.codec.TooLongFrameException will be raised. 8192 int maxInitialLineLength (consumer (advanced)) The maximum length of the initial line (e.g. \\{code GET / HTTP/1.0} or \\{code HTTP/1.0 200 OK}) If the length of the initial line exceeds this value, a TooLongFrameException will be raised. See io.netty.handler.codec.http.HttpObjectDecoder. 4096 int nettyServerBootstrapFactory (consumer (advanced)) To use a custom NettyServerBootstrapFactory. NettyServerBootstrapFactory nettySharedHttpServer (consumer (advanced)) To use a shared Netty HTTP server. See Netty HTTP Server Example for more details. 
NettySharedHttpServer noReplyLogLevel (consumer (advanced)) If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverClosedChannelExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. Enum values: TRACE DEBUG INFO WARN ERROR OFF DEBUG LoggingLevel serverExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an exception then its logged using this logging level. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverInitializerFactory (consumer (advanced)) To use a custom ServerInitializerFactory. ServerInitializerFactory traceEnabled (consumer (advanced)) Specifies whether to enable HTTP TRACE for this Netty HTTP consumer. By default TRACE is turned off. false boolean urlDecodeHeaders (consumer (advanced)) If this option is enabled, then during binding from Netty to Camel Message then the header values will be URL decoded (eg %20 will be a space character. Notice this option is used by the default org.apache.camel.component.netty.http.NettyHttpBinding and therefore if you implement a custom org.apache.camel.component.netty.http.NettyHttpBinding then you would need to decode the headers accordingly to this option. false boolean usingExecutorService (consumer (advanced)) Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true boolean connectTimeout (producer) Time to wait for a socket connection to be available. Value is in milliseconds. 10000 int cookieHandler (producer) Configure a cookie handler to maintain a HTTP session. CookieHandler requestTimeout (producer) Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. long throwExceptionOnFailure (producer) Option to disable throwing the HttpOperationFailedException in case of failed responses from the remote server. This allows you to get all responses regardless of the HTTP status code. true boolean clientInitializerFactory (producer (advanced)) To use a custom ClientInitializerFactory. ClientInitializerFactory lazyChannelCreation (producer (advanced)) Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean okStatusCodeRange (producer (advanced)) The status codes which are considered a success response. The values are inclusive. 
Multiple ranges can be defined, separated by comma, e.g. 200-204,209,301-304. Each range must be a single number or from-to with the dash included. The default range is 200-299. 200-299 String producerPoolBlockWhenExhausted (producer (advanced)) Sets the value for the blockWhenExhausted configuration attribute. It determines whether to block when the borrowObject() method is invoked when the pool is exhausted (the maximum number of active objects has been reached). true boolean producerPoolEnabled (producer (advanced)) Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true boolean producerPoolMaxIdle (producer (advanced)) Sets the cap on the number of idle instances in the pool. 100 int producerPoolMaxTotal (producer (advanced)) Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 int producerPoolMaxWait (producer (advanced)) Sets the maximum duration (value in millis) the borrowObject() method should block before throwing an exception when the pool is exhausted and producerPoolBlockWhenExhausted is true. When less than 0, the borrowObject() method may block indefinitely. -1 long producerPoolMinEvictableIdle (producer (advanced)) Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 long producerPoolMinIdle (producer (advanced)) Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. int useRelativePath (producer (advanced)) Sets whether to use a relative path in HTTP requests. true boolean hostnameVerification ( security) To enable/disable hostname verification on SSLEngine. false boolean allowSerializedHeaders (advanced) Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false boolean channelGroup (advanced) To use a explicit ChannelGroup. ChannelGroup configuration (advanced) To use a custom configured NettyHttpConfiguration for configuring this endpoint. NettyHttpConfiguration disableStreamCache (advanced) Determines whether or not the raw input stream from Netty HttpRequest#getContent() or HttpResponset#getContent() is cached or not (Camel will read the stream into a in light-weight memory based Stream caching) cache. By default Camel will cache the Netty input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. 
Mind that if you enable this option, then you cannot read the Netty stream multiple times out of the box, and you would need manually to reset the reader index on the Netty raw stream. Also Netty will auto-close the Netty stream when the Netty HTTP server/HTTP client is done processing, which means that if the asynchronous routing engine is in use then any asynchronous thread that may continue routing the org.apache.camel.Exchange may not be able to read the Netty stream, because Netty has closed it. false boolean headerFilterStrategy (advanced) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers. HeaderFilterStrategy nativeTransport (advanced) Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false boolean nettyHttpBinding (advanced) To use a custom org.apache.camel.component.netty.http.NettyHttpBinding for binding to/from Netty and Camel Message API. NettyHttpBinding options (advanced) Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map receiveBufferSize (advanced) The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 int receiveBufferSizePredictor (advanced) Configures the buffer size predictor. See details at Jetty documentation and this mail thread. int sendBufferSize (advanced) The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 int synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferException (advanced) If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was send back serialized in the response as a application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean transferExchange (advanced) Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean unixDomainSocketPath (advanced) Path to unix domain socket to use instead of inet socket. Host and port parameters will not be used, however required. It is ok to set dummy values for them. Must be used with nativeTransport=true and clientMode=false. String workerCount (advanced) When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. int workerGroup (advanced) To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. 
EventLoopGroup decoders (codec) A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String encoders (codec) A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String enabledProtocols (security) Which protocols to enable when using SSL. TLSv1.2,TLSv1.3 String keyStoreFile (security) Client side certificate keystore to be used for encryption. File keyStoreFormat (security) Keystore format to be used for payload encryption. Defaults to JKS if not set. String keyStoreResource (security) Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String needClientAuth (security) Configures whether the server needs client authentication when using SSL. false boolean passphrase (security) Password setting to use in order to encrypt/decrypt payloads sent using SSH. String securityConfiguration (security) Refers to a org.apache.camel.component.netty.http.NettyHttpSecurityConfiguration for configuring secure web resources. NettyHttpSecurityConfiguration securityOptions (security) To configure NettyHttpSecurityConfiguration using key/value pairs from the map. Map securityProvider (security) Security provider to be used for payload encryption. Defaults to SunX509 if not set. String ssl (security) Setting to specify whether SSL encryption is applied to this endpoint. false boolean sslClientCertHeaders (security) When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false boolean sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters sslHandler (security) Reference to a class that could be used to return an SSL Handler. SslHandler trustStoreFile (security) Server side certificate keystore to be used for encryption. File trustStoreResource (security) Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String 91.6. Message Headers The Netty HTTP component supports 23 message header(s), which is/are listed below: Name Description Default Type CamelHttpAuthentication (common) Constant: HTTP_AUTHENTICATION If the user was authenticated using HTTP Basic then this header is added with the value Basic. String Content-Type (common) Constant: CONTENT_TYPE To set the content-type of the HTTP body. For example: text/plain; charset=UTF-8. String connection (common) Constant: CONNECTION The value of the HTTP header connection to use. String CamelNettyCloseChannelWhenComplete (common) Constant: NETTY_CLOSE_CHANNEL_WHEN_COMPLETE Indicates whether the channel should be closed after complete. Boolean CamelHttpResponseCode (common) Constant: HTTP_RESPONSE_CODE Allows to set the HTTP Status code to use. By default 200 is used for success, and 500 for failure. Integer CamelHttpProtocolVersion (common) Constant: HTTP_PROTOCOL_VERSION The version of the HTTP protocol. 
HTTP/1.1 String CamelHttpMethod (common) Constant: HTTP_METHOD The HTTP method used, such as GET, POST, TRACE etc. GET String CamelHttpQuery (common) Constant: HTTP_QUERY Any query parameters, such as foo=bar&beer=yes. String CamelHttpPath (common) Constant: HTTP_PATH Allows to provide URI context-path and query parameters as a String value that overrides the endpoint configuration. This allows to reuse the same producer for calling same remote http server, but using a dynamic context-path and query parameters. String CamelHttpRawQuery (common) Constant: HTTP_RAW_QUERY Any query parameters, such as foo=bar&beer=yes. Stored in the raw form, as they arrived to the consumer (i.e. before URL decoding). String CamelHttpUrl (common) Constant: HTTP_URL The URL including protocol, host and port, etc:http://0.0.0.0:8080/myapp . String CamelHttpCharacterEncoding (common) Constant: HTTP_CHARACTER_ENCODING The charset from the content-type header. String CamelHttpUri (common) Constant: HTTP_URI The URI without protocol, host and port, etc: /myapp. String CamelNettyChannelHandlerContext (common) Constant: NETTY_CHANNEL_HANDLER_CONTEXT The channel handler context. ChannelHandlerContext CamelNettyRemoteAddress (common) Constant: NETTY_REMOTE_ADDRESS The remote address. SocketAddress CamelNettyLocalAddress (common) Constant: NETTY_LOCAL_ADDRESS The local address. SocketAddress CamelNettySSLSession (common) Constant: NETTY_SSL_SESSION The SSL session. SSLSession CamelNettySSLClientCertSubjectName (common) Constant: NETTY_SSL_CLIENT_CERT_SUBJECT_NAME The SSL client certificate subject name. String CamelNettySSLClientCertIssuerName (common) Constant: NETTY_SSL_CLIENT_CERT_ISSUER_NAME The SSL client certificate issuer name. String CamelNettySSLClientCertSerialNumber (common) Constant: NETTY_SSL_CLIENT_CERT_SERIAL_NO The SSL client certificate serial number. String CamelNettySSLClientCertNotBefore (common) Constant: NETTY_SSL_CLIENT_CERT_NOT_BEFORE The SSL client certificate not before. Date CamelNettySSLClientCertNotAfter (common) Constant: NETTY_SSL_CLIENT_CERT_NOT_AFTER The SSL client certificate not after. Date CamelNettyRequestTimeout (common) Constant: NETTY_REQUEST_TIMEOUT The read timeout. Long 91.7. Access to Netty types This component uses the org.apache.camel.component.netty.http.NettyHttpMessage as the message implementation on the Exchange. This allows end users to get access to the original Netty request/response instances if needed, as shown below. Mind that the original response may not be accessible at all times. io.netty.handler.codec.http.HttpRequest request = exchange.getIn(NettyHttpMessage.class).getHttpRequest(); 91.8. Examples In the route below we use Netty HTTP as a HTTP server, which returns back a hardcoded "Bye World" message. from("netty-http:http://0.0.0.0:8080/foo") .transform().constant("Bye World"); And we can call this HTTP server using Camel also, with the ProducerTemplate as shown below: String out = template.requestBody("netty-http:http://0.0.0.0:8080/foo", "Hello World", String.class); System.out.println(out); And we get back "Bye World" as the output. 91.8.1. How do I let Netty match wildcards By default Netty HTTP will only match on exact uri's. But you can instruct Netty to match prefixes. For example from("netty-http:http://0.0.0.0:8123/foo").to("mock:foo"); In the route above Netty HTTP will only match if the uri is an exact match, so it will match if you enter http://0.0.0.0:8123/foo but not match if you do http://0.0.0.0:8123/foo/bar . 
So if you want to enable wildcard matching you do as follows: from("netty-http:http://0.0.0.0:8123/foo?matchOnUriPrefix=true").to("mock:foo"); Netty now matches any endpoint that starts with foo . To match any endpoint you can do: from("netty-http:http://0.0.0.0:8123?matchOnUriPrefix=true").to("mock:foo"); 91.8.2. Using multiple routes with same port In the same CamelContext you can have multiple routes from Netty HTTP that share the same port (eg a io.netty.bootstrap.ServerBootstrap instance). Doing this requires a number of bootstrap options to be identical in the routes, as the routes will share the same io.netty.bootstrap.ServerBootstrap instance. The instance will be configured with the options from the first route created. The options that must be configured identically across the routes are all the options defined in the org.apache.camel.component.netty.NettyServerBootstrapConfiguration configuration class. If you have configured another route with different options, Camel will throw an exception on startup, indicating that the options are not identical. To mitigate this, ensure all options are identical. Here is an example with two routes that share the same port. Two routes sharing the same port from("netty-http:http://0.0.0.0:{{port}}/foo") .to("mock:foo") .transform().constant("Bye World"); from("netty-http:http://0.0.0.0:{{port}}/bar") .to("mock:bar") .transform().constant("Bye Camel"); And here is an example of a misconfigured 2nd route that does not have the same org.apache.camel.component.netty.NettyServerBootstrapConfiguration options as the 1st route. This will cause Camel to fail on startup. Two routes sharing the same port, but the 2nd route is misconfigured and will fail on startup from("netty-http:http://0.0.0.0:{{port}}/foo") .to("mock:foo") .transform().constant("Bye World"); // we cannot have a 2nd route on same port with SSL enabled, when the 1st route is NOT from("netty-http:http://0.0.0.0:{{port}}/bar?ssl=true") .to("mock:bar") .transform().constant("Bye Camel"); 91.8.3. Reusing same server bootstrap configuration with multiple routes By configuring the common server bootstrap options in a single instance of an org.apache.camel.component.netty.NettyServerBootstrapConfiguration type, we can use the bootstrapConfiguration option on the Netty HTTP consumers to refer to and reuse the same options across all consumers. <bean id="nettyHttpBootstrapOptions" class="org.apache.camel.component.netty.NettyServerBootstrapConfiguration"> <property name="backlog" value="200"/> <property name="connectionTimeout" value="20000"/> <property name="workerCount" value="16"/> </bean> In the routes you then refer to this option as shown below <route> <from uri="netty-http:http://0.0.0.0:{{port}}/foo?bootstrapConfiguration=#nettyHttpBootstrapOptions"/> ... </route> <route> <from uri="netty-http:http://0.0.0.0:{{port}}/bar?bootstrapConfiguration=#nettyHttpBootstrapOptions"/> ... </route> <route> <from uri="netty-http:http://0.0.0.0:{{port}}/beer?bootstrapConfiguration=#nettyHttpBootstrapOptions"/> ... </route> 91.8.4. Reusing same server bootstrap configuration with multiple routes across multiple bundles in OSGi container See the above Netty HTTP Server Example for more details and an example of how to do that. 91.8.5. Implementing a reverse proxy The Netty HTTP component can act as a reverse proxy; in that case the Exchange.HTTP_SCHEME , Exchange.HTTP_HOST and Exchange.HTTP_PORT headers are populated from the absolute URL received on the request line of the HTTP request. 
Here's an example of an HTTP proxy that simply transforms the response from the origin server to uppercase. from("netty-http:proxy://0.0.0.0:8080") .toD("netty-http:" + "USD{headers." + Exchange.HTTP_SCHEME + "}://" + "USD{headers." + Exchange.HTTP_HOST + "}:" + "USD{headers." + Exchange.HTTP_PORT + "}") .process(this::processResponse); void processResponse(final Exchange exchange) { final NettyHttpMessage message = exchange.getIn(NettyHttpMessage.class); final FullHttpResponse response = message.getHttpResponse(); final ByteBuf buf = response.content(); final String string = buf.toString(StandardCharsets.UTF_8); buf.resetWriterIndex(); ByteBufUtil.writeUtf8(buf, string.toUpperCase(Locale.US)); } 91.9. Using HTTP Basic Authentication The Netty HTTP consumer supports HTTP basic authentication by specifying the security realm name to use, as shown below <route> <from uri="netty-http:http://0.0.0.0:{{port}}/foo?securityConfiguration.realm=karaf"/> ... </route> The realm name is mandatory to enable basic authentication. By default the JAAS based authenticator is used, which will use the realm name specified (karaf in the example above) and use the JAAS realm and the JAAS LoginModules of this realm for authentication. End users of Apache Karaf / ServiceMix have a karaf realm out of the box, which is why the example above works out of the box in these containers. 91.9.1. Specifying ACL on web resources The org.apache.camel.component.netty.http.SecurityConstraint allows you to define constraints on web resources. The org.apache.camel.component.netty.http.SecurityConstraintMapping is provided out of the box, allowing you to easily define inclusions and exclusions with roles. For example, as shown below in the XML DSL, we define the constraint bean: <bean id="constraint" class="org.apache.camel.component.netty.http.SecurityConstraintMapping"> <!-- inclusions defines url -> roles restrictions --> <!-- a * should be used for any role accepted (or even no roles) --> <property name="inclusions"> <map> <entry key="/*" value="*"/> <entry key="/admin/*" value="admin"/> <entry key="/guest/*" value="admin,guest"/> </map> </property> <!-- exclusions is used to define public urls, which requires no authentication --> <property name="exclusions"> <set> <value>/public/*</value> </set> </property> </bean> The constraint above is defined so that: access to /* is restricted and any role is accepted (even if the user has no roles); access to /admin/* requires the admin role; access to /guest/* requires the admin or guest role; access to /public/* is an exclusion, which means no authentication is needed and it is therefore public for everyone without logging in. To use this constraint we just need to refer to the bean id as shown below: <route> <from uri="netty-http:http://0.0.0.0:{{port}}/foo?matchOnUriPrefix=true&amp;securityConfiguration.realm=karaf&amp;securityConfiguration.securityConstraint=#constraint"/> ... </route> 91.10. Spring Boot Auto-Configuration The component supports 67 options, which are listed below. Name Description Default Type camel.component.netty-http.allow-serialized-headers Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.netty-http.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.netty-http.backlog Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. Integer camel.component.netty-http.boss-count When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. 1 Integer camel.component.netty-http.boss-group Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. The option is a io.netty.channel.EventLoopGroup type. EventLoopGroup camel.component.netty-http.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.netty-http.channel-group To use a explicit ChannelGroup. The option is a io.netty.channel.group.ChannelGroup type. ChannelGroup camel.component.netty-http.client-initializer-factory To use a custom ClientInitializerFactory. The option is a org.apache.camel.component.netty.ClientInitializerFactory type. ClientInitializerFactory camel.component.netty-http.configuration To use the NettyConfiguration as configuration when creating endpoints. The option is a org.apache.camel.component.netty.NettyConfiguration type. NettyConfiguration camel.component.netty-http.connect-timeout Time to wait for a socket connection to be available. Value is in milliseconds. 10000 Integer camel.component.netty-http.correlation-manager To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. The option is a org.apache.camel.component.netty.NettyCamelStateCorrelationManager type. NettyCamelStateCorrelationManager camel.component.netty-http.decoders A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. 
String camel.component.netty-http.disconnect Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false Boolean camel.component.netty-http.disconnect-on-no-reply If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true Boolean camel.component.netty-http.enabled Whether to enable auto configuration of the netty-http component. This is enabled by default. Boolean camel.component.netty-http.enabled-protocols Which protocols to enable when using SSL. TLSv1.2,TLSv1.3 String camel.component.netty-http.encoders A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String camel.component.netty-http.executor-service To use the given EventExecutorGroup. The option is a io.netty.util.concurrent.EventExecutorGroup type. EventExecutorGroup camel.component.netty-http.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.netty-http.hostname-verification To enable/disable hostname verification on SSLEngine. false Boolean camel.component.netty-http.keep-alive Setting to ensure socket is not closed due to inactivity. true Boolean camel.component.netty-http.key-store-file Client side certificate keystore to be used for encryption. File camel.component.netty-http.key-store-format Keystore format to be used for payload encryption. Defaults to JKS if not set. String camel.component.netty-http.key-store-resource Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.netty-http.lazy-channel-creation Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true Boolean camel.component.netty-http.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.netty-http.maximum-pool-size Sets a maximum thread pool size for the netty consumer ordered thread pool. The default size is 2 x cpu_core plus 1. Setting this value to eg 10 will then use 10 threads unless 2 x cpu_core plus 1 is a higher value, which then will override and be used. For example if there are 8 cores, then the consumer thread pool will be 17. This thread pool is used to route messages received from Netty by Camel. We use a separate thread pool to ensure ordering of messages and also in case some messages will block, then nettys worker threads (event loop) wont be affected. Integer camel.component.netty-http.mute-exception If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. 
false Boolean camel.component.netty-http.native-transport Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false Boolean camel.component.netty-http.need-client-auth Configures whether the server needs client authentication when using SSL. false Boolean camel.component.netty-http.netty-http-binding To use a custom org.apache.camel.component.netty.http.NettyHttpBinding for binding to/from Netty and Camel Message API. The option is a org.apache.camel.component.netty.http.NettyHttpBinding type. NettyHttpBinding camel.component.netty-http.netty-server-bootstrap-factory To use a custom NettyServerBootstrapFactory. The option is a org.apache.camel.component.netty.NettyServerBootstrapFactory type. NettyServerBootstrapFactory camel.component.netty-http.no-reply-log-level If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. LoggingLevel camel.component.netty-http.options Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map camel.component.netty-http.passphrase Password setting to use in order to encrypt/decrypt payloads sent using SSH. String camel.component.netty-http.producer-pool-block-when-exhausted Sets the value for the blockWhenExhausted configuration attribute. It determines whether to block when the borrowObject() method is invoked when the pool is exhausted (the maximum number of active objects has been reached). true Boolean camel.component.netty-http.producer-pool-enabled Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true Boolean camel.component.netty-http.producer-pool-max-idle Sets the cap on the number of idle instances in the pool. 100 Integer camel.component.netty-http.producer-pool-max-total Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 Integer camel.component.netty-http.producer-pool-max-wait Sets the maximum duration (value in millis) the borrowObject() method should block before throwing an exception when the pool is exhausted and producerPoolBlockWhenExhausted is true. When less than 0, the borrowObject() method may block indefinitely. -1 Long camel.component.netty-http.producer-pool-min-evictable-idle Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 
300000 Long camel.component.netty-http.producer-pool-min-idle Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. Integer camel.component.netty-http.receive-buffer-size The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 Integer camel.component.netty-http.receive-buffer-size-predictor Configures the buffer size predictor. See details at Jetty documentation and this mail thread. Integer camel.component.netty-http.request-timeout Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. Long camel.component.netty-http.reuse-address Setting to facilitate socket multiplexing. true Boolean camel.component.netty-http.reuse-channel This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false Boolean camel.component.netty-http.security-configuration Refers to a org.apache.camel.component.netty.http.NettyHttpSecurityConfiguration for configuring secure web resources. The option is a org.apache.camel.component.netty.http.NettyHttpSecurityConfiguration type. NettyHttpSecurityConfiguration camel.component.netty-http.security-provider Security provider to be used for payload encryption. Defaults to SunX509 if not set. String camel.component.netty-http.send-buffer-size The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 Integer camel.component.netty-http.server-closed-channel-exception-caught-log-level If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. LoggingLevel camel.component.netty-http.server-exception-caught-log-level If the server (NettyConsumer) catches an exception then its logged using this logging level. LoggingLevel camel.component.netty-http.server-initializer-factory To use a custom ServerInitializerFactory. The option is a org.apache.camel.component.netty.ServerInitializerFactory type. ServerInitializerFactory camel.component.netty-http.ssl Setting to specify whether SSL encryption is applied to this endpoint. false Boolean camel.component.netty-http.ssl-client-cert-headers When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false Boolean camel.component.netty-http.ssl-context-parameters To configure security using SSLContextParameters. The option is a org.apache.camel.support.jsse.SSLContextParameters type. SSLContextParameters camel.component.netty-http.ssl-handler Reference to a class that could be used to return an SSL Handler. 
The option is a io.netty.handler.ssl.SslHandler type. SslHandler camel.component.netty-http.sync Setting to set endpoint as one-way or request-response. true Boolean camel.component.netty-http.tcp-no-delay Setting to improve TCP protocol performance. true Boolean camel.component.netty-http.transfer-exchange Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.netty-http.trust-store-file Server side certificate keystore to be used for encryption. File camel.component.netty-http.trust-store-resource Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.netty-http.unix-domain-socket-path Path to unix domain socket to use instead of inet socket. Host and port parameters will not be used, however required. It is ok to set dummy values for them. Must be used with nativeTransport=true and clientMode=false. String camel.component.netty-http.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.netty-http.using-executor-service Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true Boolean camel.component.netty-http.worker-count When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. Integer camel.component.netty-http.worker-group To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. The option is a io.netty.channel.EventLoopGroup type. EventLoopGroup
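As a quick illustration of how the Spring Boot options above are typically set, here is a minimal sketch of an application.yml fragment. The property names come from the table above; the specific values are illustrative assumptions only, not recommended settings.
camel:
  component:
    netty-http:
      # maps to camel.component.netty-http.request-timeout (value in milliseconds)
      request-timeout: 30000
      # maps to camel.component.netty-http.mute-exception (hide stack traces in HTTP responses)
      mute-exception: true
      # maps to camel.component.netty-http.worker-count (overrides Netty's default of cpu_core_threads x 2)
      worker-count: 8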
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-netty-http-starter</artifactId> </dependency>", "netty-http:http://0.0.0.0:8080[?options]", "netty-http:protocol://host:port/path", "io.netty.handler.codec.http.HttpRequest request = exchange.getIn(NettyHttpMessage.class).getHttpRequest();", "from(\"netty-http:http://0.0.0.0:8080/foo\") .transform().constant(\"Bye World\");", "String out = template.requestBody(\"netty-http:http://0.0.0.0:8080/foo\", \"Hello World\", String.class); System.out.println(out);", "from(\"netty-http:http://0.0.0.0:8123/foo\").to(\"mock:foo\");", "from(\"netty-http:http://0.0.0.0:8123/foo?matchOnUriPrefix=true\").to(\"mock:foo\");", "from(\"netty-http:http://0.0.0.0:8123?matchOnUriPrefix=true\").to(\"mock:foo\");", "from(\"netty-http:http://0.0.0.0:{{port}}/foo\") .to(\"mock:foo\") .transform().constant(\"Bye World\"); from(\"netty-http:http://0.0.0.0:{{port}}/bar\") .to(\"mock:bar\") .transform().constant(\"Bye Camel\");", "from(\"netty-http:http://0.0.0.0:{{port}}/foo\") .to(\"mock:foo\") .transform().constant(\"Bye World\"); // we cannot have a 2nd route on same port with SSL enabled, when the 1st route is NOT from(\"netty-http:http://0.0.0.0:{{port}}/bar?ssl=true\") .to(\"mock:bar\") .transform().constant(\"Bye Camel\");", "<bean id=\"nettyHttpBootstrapOptions\" class=\"org.apache.camel.component.netty.NettyServerBootstrapConfiguration\"> <property name=\"backlog\" value=\"200\"/> <property name=\"connectionTimeout\" value=\"20000\"/> <property name=\"workerCount\" value=\"16\"/> </bean>", "<route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/foo?bootstrapConfiguration=#nettyHttpBootstrapOptions\"/> </route> <route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/bar?bootstrapConfiguration=#nettyHttpBootstrapOptions\"/> </route> <route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/beer?bootstrapConfiguration=#nettyHttpBootstrapOptions\"/> </route>", "from(\"netty-http:proxy://0.0.0.0:8080\") .toD(\"netty-http:\" + \"USD{headers.\" + Exchange.HTTP_SCHEME + \"}://\" + \"USD{headers.\" + Exchange.HTTP_HOST + \"}:\" + \"USD{headers.\" + Exchange.HTTP_PORT + \"}\") .process(this::processResponse); void processResponse(final Exchange exchange) { final NettyHttpMessage message = exchange.getIn(NettyHttpMessage.class); final FullHttpResponse response = message.getHttpResponse(); final ByteBuf buf = response.content(); final String string = buf.toString(StandardCharsets.UTF_8); buf.resetWriterIndex(); ByteBufUtil.writeUtf8(buf, string.toUpperCase(Locale.US)); }", "<route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/foo?securityConfiguration.realm=karaf\"/> </route>", "<bean id=\"constraint\" class=\"org.apache.camel.component.netty.http.SecurityConstraintMapping\"> <!-- inclusions defines url -> roles restrictions --> <!-- a * should be used for any role accepted (or even no roles) --> <property name=\"inclusions\"> <map> <entry key=\"/*\" value=\"*\"/> <entry key=\"/admin/*\" value=\"admin\"/> <entry key=\"/guest/*\" value=\"admin,guest\"/> </map> </property> <!-- exclusions is used to define public urls, which requires no authentication --> <property name=\"exclusions\"> <set> <value>/public/*</value> </set> </property> </bean>", "<route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/foo?matchOnUriPrefix=true&amp;securityConfiguration.realm=karaf&amp;securityConfiguration.securityConstraint=#constraint\"/> </route>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-netty-http-component-starter
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_the_compute_service_for_instance_creation/making-open-source-more-inclusive
Chapter 3. Authentication [config.openshift.io/v1]
Chapter 3. Authentication [config.openshift.io/v1] Description Authentication specifies cluster-wide settings for authentication (like OAuth and webhook token authenticators). The canonical name of an instance is cluster . Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 3.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description oauthMetadata object oauthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for an external OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 If oauthMetadata.name is non-empty, this value has precedence over any metadata reference stored in status. The key "oauthMetadata" is used to locate the data. If specified and the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config. serviceAccountIssuer string serviceAccountIssuer is the identifier of the bound service account token issuer. The default is https://kubernetes.default.svc WARNING: Updating this field will not result in immediate invalidation of all bound tokens with the issuer value. Instead, the tokens issued by service account issuer will continue to be trusted for a time period chosen by the platform (currently set to 24h). This time period is subject to change over time. This allows internal components to transition to use new service account issuer without service distruption. type string type identifies the cluster managed, user facing authentication mode in use. Specifically, it manages the component that responds to login attempts. The default is IntegratedOAuth. webhookTokenAuthenticator object webhookTokenAuthenticator configures a remote token reviewer. These remote authentication webhooks can be used to verify bearer tokens via the tokenreviews.authentication.k8s.io REST API. This is required to honor bearer tokens that are provisioned by an external authentication service. Can only be set if "Type" is set to "None". webhookTokenAuthenticators array webhookTokenAuthenticators is DEPRECATED, setting it has no effect. 
webhookTokenAuthenticators[] object deprecatedWebhookTokenAuthenticator holds the necessary configuration options for a remote token authenticator. It's the same as WebhookTokenAuthenticator but it's missing the 'required' validation on KubeConfig field. 3.1.2. .spec.oauthMetadata Description oauthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for an external OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 If oauthMetadata.name is non-empty, this value has precedence over any metadata reference stored in status. The key "oauthMetadata" is used to locate the data. If specified and the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 3.1.3. .spec.webhookTokenAuthenticator Description webhookTokenAuthenticator configures a remote token reviewer. These remote authentication webhooks can be used to verify bearer tokens via the tokenreviews.authentication.k8s.io REST API. This is required to honor bearer tokens that are provisioned by an external authentication service. Can only be set if "Type" is set to "None". Type object Required kubeConfig Property Type Description kubeConfig object kubeConfig references a secret that contains kube config file data which describes how to access the remote webhook service. The namespace for the referenced secret is openshift-config. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. 3.1.4. .spec.webhookTokenAuthenticator.kubeConfig Description kubeConfig references a secret that contains kube config file data which describes how to access the remote webhook service. The namespace for the referenced secret is openshift-config. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 3.1.5. .spec.webhookTokenAuthenticators Description webhookTokenAuthenticators is DEPRECATED, setting it has no effect. Type array 3.1.6. .spec.webhookTokenAuthenticators[] Description deprecatedWebhookTokenAuthenticator holds the necessary configuration options for a remote token authenticator. It's the same as WebhookTokenAuthenticator but it's missing the 'required' validation on KubeConfig field. Type object Property Type Description kubeConfig object kubeConfig contains kube config file data which describes how to access the remote webhook service. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. 
If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. The namespace for this secret is determined by the point of use. 3.1.7. .spec.webhookTokenAuthenticators[].kubeConfig Description kubeConfig contains kube config file data which describes how to access the remote webhook service. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. The namespace for this secret is determined by the point of use. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 3.1.8. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description integratedOAuthMetadata object integratedOAuthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for the in-cluster integrated OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 This contains the observed value based on cluster state. An explicitly set value in spec.oauthMetadata has precedence over this field. This field has no meaning if authentication spec.type is not set to IntegratedOAuth. The key "oauthMetadata" is used to locate the data. If the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config-managed. 3.1.9. .status.integratedOAuthMetadata Description integratedOAuthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for the in-cluster integrated OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 This contains the observed value based on cluster state. An explicitly set value in spec.oauthMetadata has precedence over this field. This field has no meaning if authentication spec.type is not set to IntegratedOAuth. The key "oauthMetadata" is used to locate the data. If the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config-managed. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 3.2. 
API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/authentications DELETE : delete collection of Authentication GET : list objects of kind Authentication POST : create an Authentication /apis/config.openshift.io/v1/authentications/{name} DELETE : delete an Authentication GET : read the specified Authentication PATCH : partially update the specified Authentication PUT : replace the specified Authentication /apis/config.openshift.io/v1/authentications/{name}/status GET : read status of the specified Authentication PATCH : partially update status of the specified Authentication PUT : replace status of the specified Authentication 3.2.1. /apis/config.openshift.io/v1/authentications HTTP method DELETE Description delete collection of Authentication Table 3.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Authentication Table 3.2. HTTP responses HTTP code Reponse body 200 - OK AuthenticationList schema 401 - Unauthorized Empty HTTP method POST Description create an Authentication Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.4. Body parameters Parameter Type Description body Authentication schema Table 3.5. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 202 - Accepted Authentication schema 401 - Unauthorized Empty 3.2.2. /apis/config.openshift.io/v1/authentications/{name} Table 3.6. Global path parameters Parameter Type Description name string name of the Authentication HTTP method DELETE Description delete an Authentication Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Authentication Table 3.9. 
HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Authentication Table 3.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Authentication Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body Authentication schema Table 3.14. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty 3.2.3. /apis/config.openshift.io/v1/authentications/{name}/status Table 3.15. Global path parameters Parameter Type Description name string name of the Authentication HTTP method GET Description read status of the specified Authentication Table 3.16. 
HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Authentication Table 3.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Authentication Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body Authentication schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty
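As a usage sketch for the endpoints listed above, the following oc commands read the cluster-scoped Authentication resource through the raw API path and then update its spec with a JSON merge patch. The secret name webhook-token-auth-kubeconfig is a hypothetical example; it must reference an existing secret, as described for the spec.webhookTokenAuthenticators[].kubeConfig field above.

oc get --raw /apis/config.openshift.io/v1/authentications/cluster

oc patch authentication.config.openshift.io cluster --type merge \
  -p '{"spec":{"webhookTokenAuthenticators":[{"kubeConfig":{"name":"webhook-token-auth-kubeconfig"}}]}}'

To validate a change against the server without persisting it, run oc patch with --dry-run=server, which corresponds to the dryRun query parameter described in the tables above.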
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/config_apis/authentication-config-openshift-io-v1
Chapter 2. New features
Chapter 2. New features This section describes new features introduced in Red Hat OpenShift Data Foundation 4.16. 2.1. Disaster recovery solution 2.1.1. User interface support for discovered applications in Disaster Recovery For applications that are not deployed using RHACM (discovered applications), the OpenShift Data Foundation Disaster Recovery solution extends protection with a new user experience for failover and failback operations that are managed using RHACM. For more information, see Metro-DR protection for discovered applications and Regional-DR protection for discovered applications . 2.1.2. Disaster recovery solution for Applications that require Kube resource protection with labels The OpenShift Data Foundation Disaster Recovery solution supports applications that are developed or deployed using an imperative model. The cluster resources for these discovered applications are protected and restored at the secondary cluster using OpenShift APIs for Data Protection (OADP). For instructions on how to enroll discovered applications, see Enrolling discovered applications for Metro-DR and Enrolling discovered applications for Regional-DR . 2.1.3. Expand discovered application DR support to multi-namespace Applications The OpenShift Data Foundation Disaster Recovery solution now extends protection to discovered applications that span across multiple namespaces. 2.1.4. OpenShift virtualization workloads for Regional-DR The Regional disaster recovery (Regional-DR) solution can be easily set up for OpenShift Virtualization workloads using OpenShift Data Foundation. For more information, see the knowledgebase article, Use OpenShift Data Foundation Disaster Recovery to Protect Virtual Machines . 2.1.5. OpenShift virtualization in a stretch cluster Disaster recovery with stretch clusters for workloads based on OpenShift Virtualization technology using OpenShift Data Foundation can now be easily set up. For more information, see the OpenShift Virtualization in OpenShift Container Platform guide. 2.1.6. Recovering to a replacement cluster for Regional-DR When a primary or a secondary cluster of Regional-DR fails, you can either repair the cluster, wait for it to recover, or replace it entirely if it is irredeemable. OpenShift Data Foundation provides the ability to replace a failed primary or a secondary cluster with a new cluster and enable failover (relocate) to the new cluster. For more information, see Recovering to a replacement cluster . 2.1.7. Enable monitoring support for ACM Subscription application type The disaster recovery dashboard on the Red Hat Advanced Cluster Management (RHACM) console is extended to display monitoring data for Subscription type applications in addition to ApplicationSet type applications. Data such as the following can be monitored: Volume replication delays Count of protected Subscription type applications with or without replication issues Number of persistent volumes with replication healthy and unhealthy Application-wise data like the following: Recovery Point Objective (RPO) Last sync time Current DR activity status (Relocating, Failing over, Deployed, Relocated, Failed Over) Application-wise persistent volume count with replication healthy and unhealthy 2.1.8. 
Hub recovery support for co-situated and neutral site Regional-DR deployments The Regional disaster recovery solutions of OpenShift Data Foundation now support neutral site deployments and hub recovery of co-situated managed clusters using Red Hat Advanced Cluster Management. For configuring hub recovery setup, a 4th cluster is required which acts as the passive hub. The passive hub cluster can be set up in either one of the following ways: The primary managed cluster (Site-1) can be co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). The active RHACM hub cluster can be placed in a neutral site (Site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. For more information, see Regional-DR chapter on Hub recovery using Red Hat Advanced Cluster Management . 2.2. Weekly cluster-wide encryption key rotation Security common practices require periodic encryption key rotation. OpenShift Data Foundation automatically rotates the encryption keys stored in kubernetes secret (non-KMS) on a weekly basis. For more information, see Cluster-wide encryption . 2.3. Support custom taints Custom taints can be configured using the storage cluster CR by directly adding tolerations under the placement section of the CR. This helps to simplify the process of adding custom taints. For more information, see the knowledgebase article, How to add toleration for the "non-ocs" taints to the OpenShift Data Foundation pods? 2.4. Support for SELinux mount feature with ReadWriteOncePod access mode OpenShift Data Foundation now supports SELinux mount feature with ReadWriteOncePod access mode. This feature helps to reduce the time taken to change the SELinux labels of the files and folders in a volume, especially when the volume has many files and is on a remote filesystem such as CephFS. 2.5. Support for ReadWriteOncePod access mode OpenShift Data Foundation provides ReadWriteOncePod (RWOP) access mode to ensure that only one pod across the whole cluster can read the persistent volume claim (PVC) or write to it. 2.6. Faster client IO or recovery IO during OSD backfill Client IO or recovery IO can be set to be favored during a maintenance window. Favoring recovery IO over client IO significantly reduces OSD recovery time. For more information in setting the recovery profile, see Enabling faster client IO or recovery IO during OSD backfill . 2.7. Support for generic ephemeral storage for pods OpenShift Data Foundation provides support for generic ephemeral volume. This support enables a user to specify generic ephemeral volumes in its pod specification and tie the lifecycle of the PVC with the pod. 2.8. Cross storage class clone OpenShift Data Foundation provides an ability to move from a storage class with replica 3 to replica 2 or replica 1 while cloning. This helps to reduce storage footprint. For more information, see Creating a clone . 2.9. Overprovision Level Policy Control Overprovision control mechanism enables defining a quota on the amount of persistent volume claims (PVCs) consumed from a storage cluster, based on the specific application namespace. When this overprovision control mechanism is enabled, overprovisioning the PVCs consumed from the storage cluster is prevented. For more information, see Overprovision level policy control . 2.10. 
Scaling up an OpenShift Data Foundation cluster by resizing existing OSDs Scaling up an OpenShift Data Foundation cluster can be done by resizing the existing OSDs instead of adding new capacity. This enables expanding the storage without allocating additional CPU/RAM thereby helping to save on resources. For more information, see Scaling up storage capacity on a cluster by resizing existing OSDs . Note Scaling up storage capacity by resizing existing OSDs is not supported with Local Storage Operator deployment mode. Resizing OSDs is supported only for dynamic Persistent Volume Claim (PVC) based OSDs.
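As a brief illustration of the ReadWriteOncePod access mode described in section 2.5, the following is a minimal sketch of a persistent volume claim that only one pod across the whole cluster can mount at a time. The namespace my-app and the storage class name ocs-storagecluster-ceph-rbd are assumptions for this example; substitute the block storage class that your OpenShift Data Foundation deployment provides.

cat << EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwop-claim
  namespace: my-app        # hypothetical namespace for this sketch
spec:
  accessModes:
    - ReadWriteOncePod     # only one pod in the cluster may use this claim at a time
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-ceph-rbd   # assumed ODF RBD storage class
EOF

If a second pod attempts to use rwop-claim while another pod is still running with it, the second pod stays in the Pending state until the claim is released.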
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/4.16_release_notes/new_features
Chapter 1. Security overview
Chapter 1. Security overview Manage the security of your Red Hat Advanced Cluster Management for Kubernetes components. Govern your cluster with defined policies and processes to identify and minimize risks. Use policies to define rules and set controls. Prerequisite : You must configure authentication service requirements for Red Hat Advanced Cluster Management for Kubernetes. See Access control for more information. Read through the following topics to learn more about securing your cluster: Certificates introduction Governance 1.1. Certificates introduction You can use various certificates to verify authenticity for your Red Hat Advanced Cluster Management for Kubernetes cluster. Continue reading to learn about certificate management. Certificates Managing certificates 1.2. Certificates All certificates required by services that run on Red Hat Advanced Cluster Management are created during the installation of Red Hat Advanced Cluster Management. View the following list of certificates, which are created and managed by the following components of Red Hat OpenShift Container Platform: OpenShift Service Serving Certificates Red Hat Advanced Cluster Management webhook controllers Kubernetes Certificates API OpenShift default ingress Required access : Cluster administrator Continue reading to learn more about certificate management: Red Hat Advanced Cluster Management hub cluster certificates Red Hat Advanced Cluster Management managed certificates Note: Users are responsible for certificate rotations and updates. 1.2.1. Red Hat Advanced Cluster Management hub cluster certificates The OpenShift default ingress certificate is technically a hub cluster certificate. After the Red Hat Advanced Cluster Management installation, observability certificates are created and used by the observability components to provide mutual TLS on the traffic between the hub cluster and managed cluster. The open-cluster-management-observability namespace contains the following certificates: observability-server-ca-certs : Has the CA certificate to sign server-side certificates observability-client-ca-certs : Has the CA certificate to sign client-side certificates observability-server-certs : Has the server certificate used by the observability-observatorium-api deployment observability-grafana-certs : Has the client certificate used by the observability-rbac-query-proxy deployment The open-cluster-management-addon-observability namespace contains the following certificates on managed clusters: observability-managed-cluster-certs : Has the same server CA certificate as observability-server-ca-certs in the hub server observability-controller-open-cluster-management.io-observability-signer-client-cert : Has the client certificate used by the metrics-collector-deployment The CA certificates are valid for five years and other certificates are valid for one year. All observability certificates are automatically refreshed upon expiration. View the following list to understand the effects when certificates are automatically renewed: Non-CA certificates are renewed automatically when the remaining valid time is no more than 73 days. After the certificate is renewed, the pods in the related deployments restart automatically to use the renewed certificates. CA certificates are renewed automatically when the remaining valid time is no more than one year. After the certificate is renewed, the old CA is not deleted but co-exists with the renewed ones. 
Both old and renewed certificates are used by related deployments, and continue to work. The old CA certificates are deleted when they expire. When a certificate is renewed, the traffic between the hub cluster and managed cluster is not interrupted. View the following Red Hat Advanced Cluster Management hub cluster certificates table: Table 1.1. Red Hat Advanced Cluster Management hub cluster certificates Namespace Secret name Pod label open-cluster-management channels-apps-open-cluster-management-webhook-svc-ca app=multicluster-operators-channel open-cluster-management channels-apps-open-cluster-management-webhook-svc-signed-ca app=multicluster-operators-channel open-cluster-management multicluster-operators-application-svc-ca app=multicluster-operators-application open-cluster-management multicluster-operators-application-svc-signed-ca app=multicluster-operators-application open-cluster-management-hub registration-webhook-serving-cert signer-secret Not required open-cluster-management-hub 1.2.2. Red Hat Advanced Cluster Management managed certificates View the following table for a summarized list of the component pods that contain Red Hat Advanced Cluster Management managed certificates and the related secrets: Table 1.2. Pods that contain Red Hat Advanced Cluster Management managed certificates Namespace Secret name (if applicable) open-cluster-management-agent-addon cluster-proxy-open-cluster-management.io-proxy-agent-signer-client-cert open-cluster-management-agent-addon cluster-proxy-service-proxy-server-certificates 1.2.2.1. Managed cluster certificates You can use certificates to authenticate managed clusters with the hub cluster. Therefore, it is important to be aware of troubleshooting scenarios associated with these certificates. The managed cluster certificates are refreshed automatically. 1.2.3. Additional resources Use the certificate policy controller to create and manage certificate policies on managed clusters. See Certificate policy controller for more details. See Using custom CA certificates for a secure HTTPS connection for more details about securely connecting to a privately-hosted Git server with SSL/TLS certificates. See OpenShift Service Serving Certificates for more details. The OpenShift Container Platform default ingress is a hub cluster certificate. See Replacing the default ingress certificate for more details. See Certificates introduction for topics. 1.2.4. Managing certificates Continue reading for information about how to refresh, replace, rotate, and list certificates. Refreshing a Red Hat Advanced Cluster Management webhook certificate Replacing certificates for alertmanager route Rotating the Gatekeeper webhook certificate Verifying certificate rotation Listing hub cluster managed certificates 1.2.4.1. Refreshing a Red Hat Advanced Cluster Management webhook certificate You can refresh Red Hat Advanced Cluster Management managed certificates, which are certificates that are created and managed by Red Hat Advanced Cluster Management services. Complete the following steps to refresh certificates managed by Red Hat Advanced Cluster Management: Delete the secret that is associated with the Red Hat Advanced Cluster Management managed certificate by running the following command: 1 Replace <namespace> and <secret> with the values that you want to use. 
Restart the services that are associated with the Red Hat Advanced Cluster Management managed certificate(s) by running the following command: 1 Replace <namespace> and <pod-label> with the values from the Red Hat Advanced Cluster Management managed cluster certificates table. Note: If a pod-label is not specified, there is no service that must be restarted. The secret is recreated and used automatically. 1.2.4.2. Replacing certificates for alertmanager route If you do not want to use the OpenShift default ingress certificate, replace observability alertmanager certificates by updating the alertmanager route. Complete the following steps: Examine the observability certificate with the following command: openssl x509 -noout -text -in ./observability.crt Change the common name ( CN ) on the certificate to alertmanager . Change the SAN in the csr.cnf configuration file with the hostname for your alertmanager route. Create the two following secrets in the open-cluster-management-observability namespace. Run the following commands: oc -n open-cluster-management-observability create secret tls alertmanager-byo-ca --cert ./ca.crt --key ./ca.key oc -n open-cluster-management-observability create secret tls alertmanager-byo-cert --cert ./ingress.crt --key ./ingress.key 1.2.4.3. Rotating the gatekeeper webhook certificate Complete the following steps to rotate the gatekeeper webhook certificate: Edit the secret that contains the certificate with the following command: oc edit secret -n openshift-gatekeeper-system gatekeeper-webhook-server-cert Delete the following content in the data section: ca.crt , ca.key , tls.crt , and tls.key . Restart the gatekeeper webhook service by deleting the gatekeeper-controller-manager pods with the following command: The gatekeeper webhook certificate is rotated. 1.2.4.4. Verifying certificate rotation Verify that your certificates are rotated using the following steps: Identify the secret that you want to check. Check the tls.crt key to verify that a certificate is available. Display the certificate information by using the following command: oc get secret <your-secret-name> -n open-cluster-management -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -text -noout Replace <your-secret-name> with the name of secret that you are verifying. If it is necessary, also update the namespace and JSON path. Check the Validity details in the output. View the following Validity example: Validity Not Before: Jul 13 15:17:50 2023 GMT 1 Not After : Jul 12 15:17:50 2024 GMT 2 1 The Not Before value is the date and time that you rotated your certificate. 2 The Not After value is the date and time for the certificate expiration. 1.2.4.5. Listing hub cluster managed certificates You can view a list of hub cluster managed certificates that use OpenShift Service Serving Certificates service internally. Run the following command to list the certificates: for ns in multicluster-engine open-cluster-management ; do echo "USDns:" ; oc get secret -n USDns -o custom-columns=Name:.metadata.name,Expiration:.metadata.annotations.service\\.beta\\.openshift\\.io/expiry | grep -v '<none>' ; echo ""; done For more information, see OpenShift Service Serving Certificates in the Additional resources section. Note: If observability is enabled, there are additional namespaces where certificates are created. 1.2.4.6. Additional resources OpenShift Service Serving Certificates Certificates introduction
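Building on the verification steps above, the following is a small sketch that prints the subject and expiry date of the certificate stored in a secret; the secret name observability-server-certs and its namespace are examples only, so replace them with the values for the certificate that you want to inspect.

SECRET=observability-server-certs                 # example secret to inspect
NAMESPACE=open-cluster-management-observability   # example namespace
oc get secret "$SECRET" -n "$NAMESPACE" -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -enddate

The notAfter value in the output corresponds to the Not After expiration date shown in the earlier Validity example.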
[ "delete secret -n <namespace> <secret> 1", "delete pod -n <namespace> -l <pod-label> 1", "openssl x509 -noout -text -in ./observability.crt", "-n open-cluster-management-observability create secret tls alertmanager-byo-ca --cert ./ca.crt --key ./ca.key -n open-cluster-management-observability create secret tls alertmanager-byo-cert --cert ./ingress.crt --key ./ingress.key", "edit secret -n openshift-gatekeeper-system gatekeeper-webhook-server-cert", "delete pod -n openshift-gatekeeper-system -l control-plane=controller-manager", "get secret <your-secret-name> -n open-cluster-management -o jsonpath='{.data.tls\\.crt}' | base64 -d | openssl x509 -text -noout", "Validity Not Before: Jul 13 15:17:50 2023 GMT 1 Not After : Jul 12 15:17:50 2024 GMT 2", "for ns in multicluster-engine open-cluster-management ; do echo \"USDns:\" ; oc get secret -n USDns -o custom-columns=Name:.metadata.name,Expiration:.metadata.annotations.service\\\\.beta\\\\.openshift\\\\.io/expiry | grep -v '<none>' ; echo \"\"; done" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/governance/security
function::user_mode
function::user_mode Name function::user_mode - Determines if probe point occurs in user-mode Synopsis Arguments None Description Return 1 if the probe point occurred in user-mode.
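As a usage sketch that is not part of the reference entry itself, the following stap one-liner uses user_mode() to count how many profiling timer ticks land in user mode versus kernel mode over a five-second window.

stap -e 'global user_hits, kernel_hits
probe timer.profile {
  # user_mode() returns 1 when the tick interrupted user-space code
  if (user_mode()) { user_hits++ } else { kernel_hits++ }
}
probe timer.s(5) {
  printf("user-mode ticks: %d, kernel-mode ticks: %d\n", user_hits, kernel_hits)
  exit()
}'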
[ "user_mode:long()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-mode
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_api_management/1/html/administering_red_hat_openshift_api_management/making-open-source-more-inclusive
Chapter 5. Kafka Bridge interface
Chapter 5. Kafka Bridge interface The Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. It offers the advantages of a web API connection to AMQ Streams, without the need for client applications to interpret the Kafka protocol. The API has two main resources - consumers and topics - that are exposed and made accessible through endpoints to interact with consumers and producers in your Kafka cluster. The resources relate only to the Kafka Bridge, not the consumers and producers connected directly to Kafka. 5.1. HTTP requests The Kafka Bridge supports HTTP requests to a Kafka cluster, with methods to: Send messages to a topic. Retrieve messages from topics. Retrieve a list of partitions for a topic. Create and delete consumers. Subscribe consumers to topics, so that they start receiving messages from those topics. Retrieve a list of topics that a consumer is subscribed to. Unsubscribe consumers from topics. Assign partitions to consumers. Commit a list of consumer offsets. Seek on a partition, so that a consumer starts receiving messages from the first or last offset position, or a given offset position. The methods provide JSON responses and HTTP response code error handling. Messages can be sent in JSON or binary formats. Clients can produce and consume messages without the requirement to use the native Kafka protocol. Additional resources To view the API documentation, including example requests and responses, see Using the AMQ Streams Kafka Bridge . 5.2. Supported clients for the Kafka Bridge You can use the Kafka Bridge to integrate both internal and external HTTP client applications with your Kafka cluster. Internal clients Internal clients are container-based HTTP clients running in the same OpenShift cluster as the Kafka Bridge itself. Internal clients can access the Kafka Bridge on the host and port defined in the KafkaBridge custom resource. External clients External clients are HTTP clients running outside the OpenShift cluster in which the Kafka Bridge is deployed and running. External clients can access the Kafka Bridge through an OpenShift Route, a load balancer service, or an Ingress. Figure: HTTP internal and external client integration
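As an illustration of the HTTP requests listed above, the following curl commands produce a JSON message to a topic and then create a consumer through the bridge REST API. The bridge address my-bridge.example.com:8080, the topic my-topic, and the consumer group my-group are placeholder values for this sketch; the exact paths and content types are documented in Using the AMQ Streams Kafka Bridge.

# Send a JSON-formatted record to the topic my-topic
curl -X POST http://my-bridge.example.com:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{"records":[{"key":"order-1","value":{"status":"created"}}]}'

# Create a consumer named consumer-1 in the consumer group my-group
curl -X POST http://my-bridge.example.com:8080/consumers/my-group \
  -H 'Content-Type: application/vnd.kafka.v2+json' \
  -d '{"name":"consumer-1","format":"json","auto.offset.reset":"earliest"}'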
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_on_openshift_overview/overview-components-kafka-bridge_str
Chapter 3. Developer CLI (odo)
Chapter 3. Developer CLI (odo) 3.1. odo release notes 3.1.1. Notable changes and improvements in odo version 2.5.0 Creates unique routes for each component, using adler32 hashing Supports additional fields in the devfile for assigning resources: cpuRequest cpuLimit memoryRequest memoryLimit Adds the --deploy flag to the odo delete command, to remove components deployed using the odo deploy command: USD odo delete --deploy Adds mapping support to the odo link command Supports ephemeral volumes using the ephemeral field in volume components Sets the default answer to yes when asking for telemetry opt-in Improves metrics by sending additional telemetry data to the devfile registry Updates the bootstrap image to registry.access.redhat.com/ocp-tools-4/odo-init-container-rhel8:1.1.11 The upstream repository is available at https://github.com/redhat-developer/odo 3.1.2. Bug fixes Previously, odo deploy would fail if the .odo/env file did not exist. The command now creates the .odo/env file if required. Previously, interactive component creation using the odo create command would fail if disconnected from the cluster. This issue is fixed in the latest release. 3.1.3. Getting support For Product If you find an error, encounter a bug, or have suggestions for improving the functionality of odo , file an issue in Bugzilla . Choose OpenShift Developer Tools and Services as a product type and odo as a component. Provide as many details in the issue description as possible. For Documentation If you find an error or have suggestions for improving the documentation, file a Jira issue for the most relevant documentation component. 3.2. Understanding odo Red Hat OpenShift Developer CLI ( odo ) is a tool for creating applications on OpenShift Container Platform and Kubernetes. With odo , you can develop, test, debug, and deploy microservices-based applications on a Kubernetes cluster without having a deep understanding of the platform. odo follows a create and push workflow. As a user, when you create , the information (or manifest) is stored in a configuration file. When you push , the corresponding resources are created on the Kubernetes cluster. All of this configuration is stored in the Kubernetes API for seamless accessibility and functionality. odo uses service and link commands to link components and services together. odo achieves this by creating and deploying services based on Kubernetes Operators in the cluster. Services can be created using any of the Operators available on the Operator Hub. After linking a service, odo injects the service configuration into the component. Your application can then use this configuration to communicate with the Operator-backed service. 3.2.1. odo key features odo is designed to be a developer-friendly interface to Kubernetes, with the ability to: Quickly deploy applications on a Kubernetes cluster by creating a new manifest or using an existing one Use commands to easily create and update the manifest, without the need to understand and maintain Kubernetes configuration files Provide secure access to applications running on a Kubernetes cluster Add and remove additional storage for applications on a Kubernetes cluster Create Operator-backed services and link your application to them Create a link between multiple microservices that are deployed as odo components Remotely debug applications you deployed using odo in your IDE Easily test applications deployed on Kubernetes using odo 3.2.2. 
odo core concepts odo abstracts Kubernetes concepts into terminology that is familiar to developers: Application A typical application, developed with a cloud-native approach , that is used to perform a particular task. Examples of applications include online video streaming, online shopping, and hotel reservation systems. Component A set of Kubernetes resources that can run and be deployed separately. A cloud-native application is a collection of small, independent, loosely coupled components . Examples of components include an API back-end, a web interface, and a payment back-end. Project A single unit containing your source code, tests, and libraries. Context A directory that contains the source code, tests, libraries, and odo config files for a single component. URL A mechanism to expose a component for access from outside the cluster. Storage Persistent storage in the cluster. It persists the data across restarts and component rebuilds. Service An external application that provides additional functionality to a component. Examples of services include PostgreSQL, MySQL, Redis, and RabbitMQ. In odo , services are provisioned from the OpenShift Service Catalog and must be enabled within your cluster. devfile An open standard for defining containerized development environments that enables developer tools to simplify and accelerate workflows. For more information, see the documentation at https://devfile.io . You can connect to publicly available devfile registries, or you can install a Secure Registry. 3.2.3. Listing components in odo odo uses the portable devfile format to describe components and their related URLs, storage, and services. odo can connect to various devfile registries to download devfiles for different languages and frameworks. See the documentation for the odo registry command for more information on how to manage the registries used by odo to retrieve devfile information. You can list all the devfiles available of the different registries with the odo catalog list components command. Procedure Log in to the cluster with odo : USD odo login -u developer -p developer List the available odo components: USD odo catalog list components Example output Odo Devfile Components: NAME DESCRIPTION REGISTRY dotnet50 Stack with .NET 5.0 DefaultDevfileRegistry dotnet60 Stack with .NET 6.0 DefaultDevfileRegistry dotnetcore31 Stack with .NET Core 3.1 DefaultDevfileRegistry go Stack with the latest Go version DefaultDevfileRegistry java-maven Upstream Maven and OpenJDK 11 DefaultDevfileRegistry java-openliberty Java application Maven-built stack using the Open Liberty ru... DefaultDevfileRegistry java-openliberty-gradle Java application Gradle-built stack using the Open Liberty r... DefaultDevfileRegistry java-quarkus Quarkus with Java DefaultDevfileRegistry java-springboot Spring Boot(R) using Java DefaultDevfileRegistry java-vertx Upstream Vert.x using Java DefaultDevfileRegistry java-websphereliberty Java application Maven-built stack using the WebSphere Liber... DefaultDevfileRegistry java-websphereliberty-gradle Java application Gradle-built stack using the WebSphere Libe... DefaultDevfileRegistry java-wildfly Upstream WildFly DefaultDevfileRegistry java-wildfly-bootable-jar Java stack with WildFly in bootable Jar mode, OpenJDK 11 and... 
DefaultDevfileRegistry nodejs Stack with Node.js 14 DefaultDevfileRegistry nodejs-angular Stack with Angular 12 DefaultDevfileRegistry nodejs-nextjs Stack with .js 11 DefaultDevfileRegistry nodejs-nuxtjs Stack with Nuxt.js 2 DefaultDevfileRegistry nodejs-react Stack with React 17 DefaultDevfileRegistry nodejs-svelte Stack with Svelte 3 DefaultDevfileRegistry nodejs-vue Stack with Vue 3 DefaultDevfileRegistry php-laravel Stack with Laravel 8 DefaultDevfileRegistry python Python Stack with Python 3.7 DefaultDevfileRegistry python-django Python3.7 with Django DefaultDevfileRegistry 3.2.4. Telemetry in odo odo collects information about how it is being used, including metrics on the operating system, RAM, CPU, number of cores, odo version, errors, success/failures, and how long odo commands take to complete. You can modify your telemetry consent by using the odo preference command: odo preference set ConsentTelemetry true consents to telemetry. odo preference unset ConsentTelemetry disables telemetry. odo preference view shows the current preferences. 3.3. Installing odo You can install the odo CLI on Linux, Windows, or macOS by downloading a binary. You can also install the OpenShift VS Code extension, which uses both the odo and the oc binaries to interact with your OpenShift Container Platform cluster. For Red Hat Enterprise Linux (RHEL), you can install the odo CLI as an RPM. Note Currently, odo does not support installation in a restricted network environment. 3.3.1. Installing odo on Linux The odo CLI is available to download as a binary and as a tarball for multiple operating systems and architectures including: Operating System Binary Tarball Linux odo-linux-amd64 odo-linux-amd64.tar.gz Linux on IBM Power odo-linux-ppc64le odo-linux-ppc64le.tar.gz Linux on IBM Z and LinuxONE odo-linux-s390x odo-linux-s390x.tar.gz Procedure Navigate to the content gateway and download the appropriate file for your operating system and architecture. If you download the binary, rename it to odo : USD curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64 -o odo If you download the tarball, extract the binary: USD curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64.tar.gz -o odo.tar.gz USD tar xvzf odo.tar.gz Change the permissions on the binary: USD chmod +x <filename> Place the odo binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verify that odo is now available on your system: USD odo version 3.3.2. Installing odo on Windows The odo CLI for Windows is available to download as a binary and as an archive. Operating System Binary Tarball Windows odo-windows-amd64.exe odo-windows-amd64.exe.zip Procedure Navigate to the content gateway and download the appropriate file: If you download the binary, rename it to odo.exe . If you download the archive, unzip the binary with a ZIP program and then rename it to odo.exe . Move the odo.exe binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verify that odo is now available on your system: C:\> odo version 3.3.3. Installing odo on macOS The odo CLI for macOS is available to download as a binary and as a tarball. 
Operating System Binary Tarball macOS odo-darwin-amd64 odo-darwin-amd64.tar.gz Procedure Navigate to the content gateway and download the appropriate file: If you download the binary, rename it to odo : USD curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64 -o odo If you download the tarball, extract the binary: USD curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64.tar.gz -o odo.tar.gz USD tar xvzf odo.tar.gz Change the permissions on the binary: # chmod +x odo Place the odo binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verify that odo is now available on your system: USD odo version 3.3.4. Installing odo on VS Code The OpenShift VS Code extension uses both odo and the oc binary to interact with your OpenShift Container Platform cluster. To work with these features, install the OpenShift VS Code extension on VS Code. Prerequisites You have installed VS Code. Procedure Open VS Code. Launch VS Code Quick Open with Ctrl + P . Enter the following command: 3.3.5. Installing odo on Red Hat Enterprise Linux (RHEL) using an RPM For Red Hat Enterprise Linux (RHEL), you can install the odo CLI as an RPM. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift Developer Tools and Services*' In the output of the command, find the Pool ID field for your OpenShift Container Platform subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by odo : # subscription-manager repos --enable="ocp-tools-4.9-for-rhel-8-x86_64-rpms" Install the odo package: # yum install odo Verify that odo is now available on your system: USD odo version 3.4. Configuring the odo CLI You can find the global settings for odo in the preference.yaml file which is located by default in your USDHOME/.odo directory. You can set a different location for the preference.yaml file by exporting the GLOBALODOCONFIG variable. 3.4.1. Viewing the current configuration You can view the current odo CLI configuration by using the following command: USD odo preference view Example output PARAMETER CURRENT_VALUE UpdateNotification NamePrefix Timeout BuildTimeout PushTimeout Ephemeral ConsentTelemetry true 3.4.2. Setting a value You can set a value for a preference key by using the following command: USD odo preference set <key> <value> Note Preference keys are case-insensitive. Example command USD odo preference set updatenotification false Example output Global preference was successfully updated 3.4.3. Unsetting a value You can unset a value for a preference key by using the following command: USD odo preference unset <key> Note You can use the -f flag to skip the confirmation. Example command USD odo preference unset updatenotification ? Do you want to unset updatenotification in the preference (y/N) y Example output Global preference was successfully updated 3.4.4. Preference key table The following table shows the available options for setting preference keys for the odo CLI: Preference key Description Default value UpdateNotification Control whether a notification to update odo is shown. True NamePrefix Set a default name prefix for an odo resource. 
For example, component or storage . Current directory name Timeout Timeout for the Kubernetes server connection check. 1 second BuildTimeout Timeout for waiting for a build of the git component to complete. 300 seconds PushTimeout Timeout for waiting for a component to start. 240 seconds Ephemeral Controls whether odo should create an emptyDir volume to store source code. True ConsentTelemetry Controls whether odo can collect telemetry for the user's odo usage. False 3.4.5. Ignoring files or patterns You can configure a list of files or patterns to ignore by modifying the .odoignore file in the root directory of your application. This applies to both odo push and odo watch . If the .odoignore file does not exist, the .gitignore file is used instead for ignoring specific files and folders. To ignore .git files, any files with the .js extension, and the folder tests , add the following to either the .odoignore or the .gitignore file: The .odoignore file allows any glob expressions. 3.5. odo CLI reference 3.5.1. odo build-images odo can build container images based on Dockerfiles, and push these images to their registries. When running the odo build-images command, odo searches for all components in the devfile.yaml with the image type, for example: components: - image: imageName: quay.io/myusername/myimage dockerfile: uri: ./Dockerfile <.> buildContext: USD{PROJECTS_ROOT} <.> name: component-built-from-dockerfile <.> The uri field indicates the relative path of the Dockerfile to use, relative to the directory containing the devfile.yaml . The devfile specification indicates that uri could also be an HTTP URL, but this case is not supported by odo yet. <.> The buildContext indicates the directory used as build context. The default value is USD{PROJECTS_ROOT} . For each image component, odo executes either podman or docker (the first one found, in this order), to build the image with the specified Dockerfile, build context, and arguments. If the --push flag is passed to the command, the images are pushed to their registries after they are built. 3.5.2. odo catalog odo uses different catalogs to deploy components and services . 3.5.2.1. Components odo uses the portable devfile format to describe the components. It can connect to various devfile registries to download devfiles for different languages and frameworks. See odo registry for more information. 3.5.2.1.1. Listing components To list all the devfiles available on the different registries, run the command: USD odo catalog list components Example output NAME DESCRIPTION REGISTRY go Stack with the latest Go version DefaultDevfileRegistry java-maven Upstream Maven and OpenJDK 11 DefaultDevfileRegistry nodejs Stack with Node.js 14 DefaultDevfileRegistry php-laravel Stack with Laravel 8 DefaultDevfileRegistry python Python Stack with Python 3.7 DefaultDevfileRegistry [...] 3.5.2.1.2. Getting information about a component To get more information about a specific component, run the command: USD odo catalog describe component For example, run the command: USD odo catalog describe component nodejs Example output * Registry: DefaultDevfileRegistry <.> Starter Projects: <.> --- name: nodejs-starter attributes: {} description: "" subdir: "" projectsource: sourcetype: "" git: gitlikeprojectsource: commonprojectsource: {} checkoutfrom: null remotes: origin: https://github.com/odo-devfiles/nodejs-ex.git zip: null custom: null <.> Registry is the registry from which the devfile is retrieved. 
<.> Starter projects are sample projects in the same language and framework of the devfile, that can help you start a new project. See odo create for more information on creating a project from a starter project. 3.5.2.2. Services odo can deploy services with the help of Operators . Only Operators deployed with the help of the Operator Lifecycle Manager are supported by odo. 3.5.2.2.1. Listing services To list the available Operators and their associated services, run the command: USD odo catalog list services Example output Services available through Operators NAME CRDs postgresql-operator.v0.1.1 Backup, Database redis-operator.v0.8.0 RedisCluster, Redis In this example, two Operators are installed in the cluster. The postgresql-operator.v0.1.1 Operator deploys services related to PostgreSQL: Backup and Database . The redis-operator.v0.8.0 Operator deploys services related to Redis: RedisCluster and Redis . Note To get a list of all the available Operators, odo fetches the ClusterServiceVersion (CSV) resources of the current namespace that are in a Succeeded phase. For Operators that support cluster-wide access, when a new namespace is created, these resources are automatically added to it. However, it may take some time before they are in the Succeeded phase, and odo may return an empty list until the resources are ready. 3.5.2.2.2. Searching services To search for a specific service by a keyword, run the command: USD odo catalog search service For example, to retrieve the PostgreSQL services, run the command: USD odo catalog search service postgres Example output Services available through Operators NAME CRDs postgresql-operator.v0.1.1 Backup, Database You will see a list of Operators that contain the searched keyword in their name. 3.5.2.2.3. Getting information about a service To get more information about a specific service, run the command: USD odo catalog describe service For example: USD odo catalog describe service postgresql-operator.v0.1.1/Database Example output KIND: Database VERSION: v1alpha1 DESCRIPTION: Database is the Schema for the the Database Database API FIELDS: awsAccessKeyId (string) AWS S3 accessKey/token ID Key ID of AWS S3 storage. Default Value: nil Required to create the Secret with the data to allow send the backup files to AWS S3 storage. [...] A service is represented in the cluster by a CustomResourceDefinition (CRD) resource. The command displays the details about the CRD such as kind , version , and the list of fields available to define an instance of this custom resource. The list of fields is extracted from the OpenAPI schema included in the CRD. This information is optional in a CRD, and if it is not present, it is extracted from the ClusterServiceVersion (CSV) resource representing the service instead. It is also possible to request the description of an Operator-backed service, without providing CRD type information. To describe the Redis Operator on a cluster, without CRD, run the following command: USD odo catalog describe service redis-operator.v0.8.0 Example output NAME: redis-operator.v0.8.0 DESCRIPTION: A Golang based redis operator that will make/oversee Redis standalone/cluster mode setup on top of the Kubernetes. It can create a redis cluster setup with best practices on Cloud as well as the Bare metal environment. Also, it provides an in-built monitoring capability using ... 
(cut short for beverity) Logging Operator is licensed under [Apache License, Version 2.0](https://github.com/OT-CONTAINER-KIT/redis-operator/blob/master/LICENSE) CRDs: NAME DESCRIPTION RedisCluster Redis Cluster Redis Redis 3.5.3. odo create odo uses a devfile to store the configuration of a component and to describe the component's resources such as storage and services. The odo create command generates this file. 3.5.3.1. Creating a component To create a devfile for an existing project, run the odo create command with the name and type of your component (for example, nodejs or go ): odo create nodejs mynodejs In the example, nodejs is the type of the component and mynodejs is the name of the component that odo creates for you. Note For a list of all the supported component types, run the command odo catalog list components . If your source code exists outside the current directory, the --context flag can be used to specify the path. For example, if the source for the nodejs component is in a folder called node-backend relative to the current working directory, run the command: odo create nodejs mynodejs --context ./node-backend The --context flag supports relative and absolute paths. To specify the project or app where your component will be deployed, use the --project and --app flags. For example, to create a component that is part of the myapp app inside the backend project, run the command: odo create nodejs --app myapp --project backend Note If these flags are not specified, they will default to the active app and project. 3.5.3.2. Starter projects Use the starter projects if you do not have existing source code but want to get up and running quickly to experiment with devfiles and components. To use a starter project, add the --starter flag to the odo create command. To get a list of available starter projects for a component type, run the odo catalog describe component command. For example, to get all available starter projects for the nodejs component type, run the command: odo catalog describe component nodejs Then specify the desired project using the --starter flag on the odo create command: odo create nodejs --starter nodejs-starter This will download the example template corresponding to the chosen component type, in this instance, nodejs . The template is downloaded to your current directory, or to the location specified by the --context flag. If a starter project has its own devfile, then this devfile will be preserved. 3.5.3.3. Using an existing devfile If you want to create a new component from an existing devfile, you can do so by specifying the path to the devfile using the --devfile flag. For example, to create a component called mynodejs , based on a devfile from GitHub, use the following command: odo create mynodejs --devfile https://raw.githubusercontent.com/odo-devfiles/registry/master/devfiles/nodejs/devfile.yaml 3.5.3.4. Interactive creation You can also run the odo create command interactively, to guide you through the steps needed to create a component: USD odo create ? Which devfile component type do you wish to create go ? What do you wish to name the new devfile component go-api ? What project do you want the devfile component to be created in default Devfile Object Validation [✓] Checking devfile existence [164258ns] [✓] Creating a devfile component from registry: DefaultDevfileRegistry [246051ns] Validation [✓] Validating if devfile name is correct [92255ns] ? 
Do you want to download a starter project Yes Starter Project [✓] Downloading starter project go-starter from https://github.com/devfile-samples/devfile-stack-go.git [429ms] Please use odo push command to create the component with source deployed You are prompted to choose the component type, name, and the project for the component. You can also choose whether or not to download a starter project. Once finished, a new devfile.yaml file is created in the working directory. To deploy these resources to your cluster, run the command odo push . 3.5.4. odo delete The odo delete command is useful for deleting resources that are managed by odo . 3.5.4.1. Deleting a component To delete a devfile component, run the odo delete command: USD odo delete If the component has been pushed to the cluster, the component is deleted from the cluster, along with its dependent storage, URL, secrets, and other resources. If the component has not been pushed, the command exits with an error stating that it could not find the resources on the cluster. Use the -f or --force flag to avoid the confirmation questions. 3.5.4.2. Undeploying devfile Kubernetes components To undeploy the devfile Kubernetes components, that have been deployed with odo deploy , execute the odo delete command with the --deploy flag: USD odo delete --deploy Use the -f or --force flag to avoid the confirmation questions. 3.5.4.3. Delete all To delete all artifacts including the following items, run the odo delete command with the --all flag : devfile component Devfile Kubernetes component that was deployed using the odo deploy command Devfile Local configuration USD odo delete --all 3.5.4.4. Available flags -f , --force Use this flag to avoid the confirmation questions. -w , --wait Use this flag to wait for component deletion and any dependencies. This flag does not work when undeploying. The documentation on Common Flags provides more information on the flags available for commands. 3.5.5. odo deploy odo can be used to deploy components in a manner similar to how they would be deployed using a CI/CD system. First, odo builds the container images, and then it deploys the Kubernetes resources required to deploy the components. When running the command odo deploy , odo searches for the default command of kind deploy in the devfile, and executes this command. The kind deploy is supported by the devfile format starting from version 2.2.0. The deploy command is typically a composite command, composed of several apply commands: A command referencing an image component that, when applied, will build the image of the container to deploy, and then push it to its registry. A command referencing a Kubernetes component that, when applied, will create a Kubernetes resource in the cluster. With the following example devfile.yaml file, a container image is built using the Dockerfile present in the directory. The image is pushed to its registry and then a Kubernetes Deployment resource is created in the cluster, using this freshly built image. schemaVersion: 2.2.0 [...] 
variables: CONTAINER_IMAGE: quay.io/phmartin/myimage commands: - id: build-image apply: component: outerloop-build - id: deployk8s apply: component: outerloop-deploy - id: deploy composite: commands: - build-image - deployk8s group: kind: deploy isDefault: true components: - name: outerloop-build image: imageName: "{{CONTAINER_IMAGE}}" dockerfile: uri: ./Dockerfile buildContext: USD{PROJECTS_ROOT} - name: outerloop-deploy kubernetes: inlined: | kind: Deployment apiVersion: apps/v1 metadata: name: my-component spec: replicas: 1 selector: matchLabels: app: node-app template: metadata: labels: app: node-app spec: containers: - name: main image: {{CONTAINER_IMAGE}} 3.5.6. odo link The odo link command helps link an odo component to an Operator-backed service or another odo component. It does this by using the Service Binding Operator . Currently, odo makes use of the Service Binding library and not the Operator itself to achieve the desired functionality. 3.5.6.1. Various linking options odo provides various options for linking a component with an Operator-backed service or another odo component. All these options (or flags) can be used whether you are linking a component to a service or to another component. 3.5.6.1.1. Default behavior By default, the odo link command creates a directory named kubernetes/ in your component directory and stores the information (YAML manifests) about services and links there. When you use odo push , odo compares these manifests with the state of the resources on the Kubernetes cluster and decides whether it needs to create, modify or destroy resources to match what is specified by the user. 3.5.6.1.2. The --inlined flag If you specify the --inlined flag to the odo link command, odo stores the link information inline in the devfile.yaml in the component directory, instead of creating a file under the kubernetes/ directory. The behavior of the --inlined flag is similar in both the odo link and odo service create commands. This flag is helpful if you want everything stored in a single devfile.yaml . You have to remember to use --inlined flag with each odo link and odo service create command that you execute for the component. 3.5.6.1.3. The --map flag Sometimes, you might want to add more binding information to the component, in addition to what is available by default. For example, if you are linking the component with a service and would like to bind some information from the service's spec (short for specification), you could use the --map flag. Note that odo does not do any validation against the spec of the service or component being linked. Using this flag is only recommended if you are comfortable using the Kubernetes YAML manifests. 3.5.6.1.4. The --bind-as-files flag For all the linking options discussed so far, odo injects the binding information into the component as environment variables. If you would like to mount this information as files instead, you can use the --bind-as-files flag. This will make odo inject the binding information as files into the /bindings location within your component's Pod. Compared to the environment variables scenario, when you use --bind-as-files , the files are named after the keys and the value of these keys is stored as the contents of these files. 3.5.6.2. Examples 3.5.6.2.1. Default odo link In the following example, the backend component is linked with the PostgreSQL service using the default odo link command. 
For the backend component, make sure that your component and service are pushed to the cluster: USD odo list Sample output APP NAME PROJECT TYPE STATE MANAGED BY ODO app backend myproject spring Pushed Yes USD odo service list Sample output NAME MANAGED BY ODO STATE AGE PostgresCluster/hippo Yes (backend) Pushed 59m41s Now, run odo link to link the backend component with the PostgreSQL service: USD odo link PostgresCluster/hippo Example output [✓] Successfully created link between component "backend" and service "PostgresCluster/hippo" To apply the link, please use `odo push` And then run odo push to actually create the link on the Kubernetes cluster. After a successful odo push , you will see a few outcomes: When you open the URL for the application deployed by backend component, it shows a list of todo items in the database. For example, in the output for the odo url list command, the path where todos are listed is included: USD odo url list Sample output Found the following URLs for component backend NAME STATE URL PORT SECURE KIND 8080-tcp Pushed http://8080-tcp.192.168.39.112.nip.io 8080 false ingress The correct path for the URL would be http://8080-tcp.192.168.39.112.nip.io/api/v1/todos. The exact URL depends on your setup. Also note that there are no todos in the database unless you add some, so the URL might just show an empty JSON object. You can see binding information related to the Postgres service injected into the backend component. This binding information is injected, by default, as environment variables. You can check it using the odo describe command from the backend component's directory: USD odo describe Example output: Component Name: backend Type: spring Environment Variables: · PROJECTS_ROOT=/projects · PROJECT_SOURCE=/projects · DEBUG_PORT=5858 Storage: · m2 of size 3Gi mounted to /home/user/.m2 URLs: · http://8080-tcp.192.168.39.112.nip.io exposed via 8080 Linked Services: · PostgresCluster/hippo Environment Variables: · POSTGRESCLUSTER_PGBOUNCER-EMPTY · POSTGRESCLUSTER_PGBOUNCER.INI · POSTGRESCLUSTER_ROOT.CRT · POSTGRESCLUSTER_VERIFIER · POSTGRESCLUSTER_ID_ECDSA · POSTGRESCLUSTER_PGBOUNCER-VERIFIER · POSTGRESCLUSTER_TLS.CRT · POSTGRESCLUSTER_PGBOUNCER-URI · POSTGRESCLUSTER_PATRONI.CRT-COMBINED · POSTGRESCLUSTER_USER · pgImage · pgVersion · POSTGRESCLUSTER_CLUSTERIP · POSTGRESCLUSTER_HOST · POSTGRESCLUSTER_PGBACKREST_REPO.CONF · POSTGRESCLUSTER_PGBOUNCER-USERS.TXT · POSTGRESCLUSTER_SSH_CONFIG · POSTGRESCLUSTER_TLS.KEY · POSTGRESCLUSTER_CONFIG-HASH · POSTGRESCLUSTER_PASSWORD · POSTGRESCLUSTER_PATRONI.CA-ROOTS · POSTGRESCLUSTER_DBNAME · POSTGRESCLUSTER_PGBOUNCER-PASSWORD · POSTGRESCLUSTER_SSHD_CONFIG · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.KEY · POSTGRESCLUSTER_PGBACKREST_INSTANCE.CONF · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.CA-ROOTS · POSTGRESCLUSTER_PGBOUNCER-HOST · POSTGRESCLUSTER_PORT · POSTGRESCLUSTER_ROOT.KEY · POSTGRESCLUSTER_SSH_KNOWN_HOSTS · POSTGRESCLUSTER_URI · POSTGRESCLUSTER_PATRONI.YAML · POSTGRESCLUSTER_DNS.CRT · POSTGRESCLUSTER_DNS.KEY · POSTGRESCLUSTER_ID_ECDSA.PUB · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.CRT · POSTGRESCLUSTER_PGBOUNCER-PORT · POSTGRESCLUSTER_CA.CRT Some of these variables are used in the backend component's src/main/resources/application.properties file so that the Java Spring Boot application can connect to the PostgreSQL database service. 
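For example, to confirm from the backend component's directory that a specific connection variable is present inside the running container, you could run a command like the following (the variable name is taken from the odo describe output above and depends on the linked service):
odo exec -- env | grep POSTGRESCLUSTER_HOST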
Lastly, odo has created a directory called kubernetes/ in your backend component's directory that contains the following files: USD ls kubernetes odo-service-backend-postgrescluster-hippo.yaml odo-service-hippo.yaml These files contain the information (YAML manifests) for two resources: odo-service-hippo.yaml - the Postgres service created using the odo service create --from-file ../postgrescluster.yaml command. odo-service-backend-postgrescluster-hippo.yaml - the link created using the odo link command. 3.5.6.2.2. Using odo link with the --inlined flag Using the --inlined flag with the odo link command has the same effect as an odo link command without the flag, in that it injects binding information. However, the subtle difference is that in the above case, there are two manifest files under the kubernetes/ directory, one for the Postgres service and another for the link between the backend component and this service. By contrast, when you pass the --inlined flag, odo does not create a file under the kubernetes/ directory to store the YAML manifest, but rather stores it inline in the devfile.yaml file. To see this, unlink the component from the PostgreSQL service first: USD odo unlink PostgresCluster/hippo Example output: [✓] Successfully unlinked component "backend" from service "PostgresCluster/hippo" To apply the changes, please use `odo push` To unlink them on the cluster, run odo push . Now if you inspect the kubernetes/ directory, you see only one file: USD ls kubernetes odo-service-hippo.yaml Now, use the --inlined flag to create a link: USD odo link PostgresCluster/hippo --inlined Example output: [✓] Successfully created link between component "backend" and service "PostgresCluster/hippo" To apply the link, please use `odo push` You need to run odo push for the link to get created on the cluster, just as in the procedure that omits the --inlined flag. odo stores the configuration in devfile.yaml . In this file, you can see an entry like the following: kubernetes: inlined: | apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: creationTimestamp: null name: backend-postgrescluster-hippo spec: application: group: apps name: backend-app resource: deployments version: v1 bindAsFiles: false detectBindingResources: true services: - group: postgres-operator.crunchydata.com id: hippo kind: PostgresCluster name: hippo version: v1beta1 status: secret: "" name: backend-postgrescluster-hippo Now if you were to run odo unlink PostgresCluster/hippo , odo would first remove the link information from the devfile.yaml , and then a subsequent odo push would delete the link from the cluster. 3.5.6.2.3. Custom bindings odo link accepts the --map flag, which can inject custom binding information into the component. Such binding information is fetched from the manifest of the resource that you are linking to your component. For example, in the context of the backend component and PostgreSQL service, you can inject information from the PostgreSQL service's manifest postgrescluster.yaml file into the backend component.
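Before deciding which fields to map, it can help to inspect the service's manifest on the cluster to see what is available under its spec. One way to do this, assuming the oc client is logged in to the same cluster and project, is:
oc get postgrescluster hippo -o yaml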
If your PostgresCluster service is named hippo (check the output of odo service list if you are not sure of the name) and you want to inject the value of postgresVersion from that YAML definition into your backend component, run the command: USD odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}' Note that if the name of your Postgres service is different from hippo , you will have to specify that name in the above command in place of .hippo in the value for pgVersion . After a link operation, run odo push as usual. Upon successful completion of the push operation, you can run the following command from your backend component directory to validate that the custom mapping got injected properly: USD odo exec -- env | grep pgVersion Example output: pgVersion=13 Since you might want to inject more than just one piece of custom binding information, odo link accepts multiple key-value pairs of mappings. The only constraint is that these should be specified as --map <key>=<value> . For example, if you want to also inject PostgreSQL image information along with the version, you could run: USD odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}' --map pgImage='{{ .hippo.spec.image }}' and then run odo push . To validate that both mappings got injected correctly, run the following command: USD odo exec -- env | grep -e "pgVersion\|pgImage" Example output: pgVersion=13 pgImage=registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.4-0 3.5.6.2.3.1. To inline or not? You can accept the default behavior where odo link generates a manifest file for the link under the kubernetes/ directory. Alternatively, you can use the --inlined flag if you prefer to store everything in a single devfile.yaml file. 3.5.6.3. Binding as files Another helpful flag that odo link provides is --bind-as-files . When this flag is passed, the binding information is not injected into the component's Pod as environment variables but is mounted as a filesystem. Ensure that there are no existing links between the backend component and the PostgreSQL service. You could do this by running odo describe in the backend component's directory and checking if you see output similar to the following: Linked Services: · PostgresCluster/hippo Unlink the service from the component using: USD odo unlink PostgresCluster/hippo USD odo push 3.5.6.4. --bind-as-files examples 3.5.6.4.1. Using the default odo link By default, odo creates the manifest file under the kubernetes/ directory for storing the link information.
Link the backend component and PostgreSQL service using: USD odo link PostgresCluster/hippo --bind-as-files USD odo push Example odo describe output: USD odo describe Component Name: backend Type: spring Environment Variables: · PROJECTS_ROOT=/projects · PROJECT_SOURCE=/projects · DEBUG_PORT=5858 · SERVICE_BINDING_ROOT=/bindings · SERVICE_BINDING_ROOT=/bindings Storage: · m2 of size 3Gi mounted to /home/user/.m2 URLs: · http://8080-tcp.192.168.39.112.nip.io exposed via 8080 Linked Services: · PostgresCluster/hippo Files: · /bindings/backend-postgrescluster-hippo/pgbackrest_instance.conf · /bindings/backend-postgrescluster-hippo/user · /bindings/backend-postgrescluster-hippo/ssh_known_hosts · /bindings/backend-postgrescluster-hippo/clusterIP · /bindings/backend-postgrescluster-hippo/password · /bindings/backend-postgrescluster-hippo/patroni.yaml · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.crt · /bindings/backend-postgrescluster-hippo/pgbouncer-host · /bindings/backend-postgrescluster-hippo/root.key · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.key · /bindings/backend-postgrescluster-hippo/pgbouncer.ini · /bindings/backend-postgrescluster-hippo/uri · /bindings/backend-postgrescluster-hippo/config-hash · /bindings/backend-postgrescluster-hippo/pgbouncer-empty · /bindings/backend-postgrescluster-hippo/port · /bindings/backend-postgrescluster-hippo/dns.crt · /bindings/backend-postgrescluster-hippo/pgbouncer-uri · /bindings/backend-postgrescluster-hippo/root.crt · /bindings/backend-postgrescluster-hippo/ssh_config · /bindings/backend-postgrescluster-hippo/dns.key · /bindings/backend-postgrescluster-hippo/host · /bindings/backend-postgrescluster-hippo/patroni.crt-combined · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.ca-roots · /bindings/backend-postgrescluster-hippo/tls.key · /bindings/backend-postgrescluster-hippo/verifier · /bindings/backend-postgrescluster-hippo/ca.crt · /bindings/backend-postgrescluster-hippo/dbname · /bindings/backend-postgrescluster-hippo/patroni.ca-roots · /bindings/backend-postgrescluster-hippo/pgbackrest_repo.conf · /bindings/backend-postgrescluster-hippo/pgbouncer-port · /bindings/backend-postgrescluster-hippo/pgbouncer-verifier · /bindings/backend-postgrescluster-hippo/id_ecdsa · /bindings/backend-postgrescluster-hippo/id_ecdsa.pub · /bindings/backend-postgrescluster-hippo/pgbouncer-password · /bindings/backend-postgrescluster-hippo/pgbouncer-users.txt · /bindings/backend-postgrescluster-hippo/sshd_config · /bindings/backend-postgrescluster-hippo/tls.crt Everything that was an environment variable in the key=value format in the earlier odo describe output is now mounted as a file. Use the cat command to view the contents of some of these files: Example command: USD odo exec -- cat /bindings/backend-postgrescluster-hippo/password Example output: q({JC:jn^mm/Bw}eu+j.GX{k Example command: USD odo exec -- cat /bindings/backend-postgrescluster-hippo/user Example output: hippo Example command: USD odo exec -- cat /bindings/backend-postgrescluster-hippo/clusterIP Example output: 10.101.78.56 3.5.6.4.2. Using --inlined The result of using --bind-as-files and --inlined together is similar to using odo link --inlined . The manifest of the link gets stored in the devfile.yaml , instead of being stored in a separate file under kubernetes/ directory. Other than that, the odo describe output would be the same as earlier. 3.5.6.4.3. 
Custom bindings When you pass custom bindings while linking the backend component with the PostgreSQL service, these custom bindings are injected not as environment variables but are mounted as files. For example: USD odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}' --map pgImage='{{ .hippo.spec.image }}' --bind-as-files USD odo push These custom bindings get mounted as files instead of being injected as environment variables. To validate that this worked, run the following command: Example command: USD odo exec -- cat /bindings/backend-postgrescluster-hippo/pgVersion Example output: 13 Example command: USD odo exec -- cat /bindings/backend-postgrescluster-hippo/pgImage Example output: registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.4-0 3.5.7. odo registry odo uses the portable devfile format to describe the components. odo can connect to various devfile registries, to download devfiles for different languages and frameworks. You can connect to publicly available devfile registries, or you can install your own Secure Registry . You can use the odo registry command to manage the registries that are used by odo to retrieve devfile information. 3.5.7.1. Listing the registries To list the registries currently contacted by odo , run the command: USD odo registry list Example output: NAME URL SECURE DefaultDevfileRegistry https://registry.devfile.io No DefaultDevfileRegistry is the default registry used by odo; it is provided by the devfile.io project. 3.5.7.2. Adding a registry To add a registry, run the command: USD odo registry add Example output: USD odo registry add StageRegistry https://registry.stage.devfile.io New registry successfully added If you are deploying your own Secure Registry, you can specify the personal access token to authenticate to the secure registry with the --token flag: USD odo registry add MyRegistry https://myregistry.example.com --token <access_token> New registry successfully added 3.5.7.3. Deleting a registry To delete a registry, run the command: USD odo registry delete Example output: USD odo registry delete StageRegistry ? Are you sure you want to delete registry "StageRegistry" Yes Successfully deleted registry Use the --force (or -f ) flag to force the deletion of the registry without confirmation. 3.5.7.4. Updating a registry To update the URL or the personal access token of a registry already registered, run the command: USD odo registry update Example output: USD odo registry update MyRegistry https://otherregistry.example.com --token <other_access_token> ? Are you sure you want to update registry "MyRegistry" Yes Successfully updated registry Use the --force (or -f ) flag to force the update of the registry without confirmation. 3.5.8. odo service odo can deploy services with the help of Operators . The list of available Operators and services available for installation can be found using the odo catalog command. Services are created in the context of a component , so run the odo create command before you deploy services. A service is deployed using two steps: Define the service and store its definition in the devfile. Deploy the defined service to the cluster, using the odo push command. 3.5.8.1. 
Creating a new service To create a new service, run the command: USD odo service create For example, to create an instance of a Redis service named my-redis-service , you can run the following command: Example output USD odo catalog list services Services available through Operators NAME CRDs redis-operator.v0.8.0 RedisCluster, Redis USD odo service create redis-operator.v0.8.0/Redis my-redis-service Successfully added service to the configuration; do 'odo push' to create service on the cluster This command creates a Kubernetes manifest in the kubernetes/ directory, containing the definition of the service, and this file is referenced from the devfile.yaml file. USD cat kubernetes/odo-service-my-redis-service.yaml Example output apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 imagePullPolicy: IfNotPresent resources: limits: cpu: 101m memory: 128Mi requests: cpu: 101m memory: 128Mi serviceType: ClusterIP redisExporter: enabled: false image: quay.io/opstree/redis-exporter:1.0 storage: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi Example command USD cat devfile.yaml Example output [...] components: - kubernetes: uri: kubernetes/odo-service-my-redis-service.yaml name: my-redis-service [...] Note that the name of the created instance is optional. If you do not provide a name, it will be the lowercase name of the service. For example, the following command creates an instance of a Redis service named redis : USD odo service create redis-operator.v0.8.0/Redis 3.5.8.1.1. Inlining the manifest By default, a new manifest is created in the kubernetes/ directory, referenced from the devfile.yaml file. It is possible to inline the manifest inside the devfile.yaml file using the --inlined flag: USD odo service create redis-operator.v0.8.0/Redis my-redis-service --inlined Successfully added service to the configuration; do 'odo push' to create service on the cluster Example command USD cat devfile.yaml Example output [...] components: - kubernetes: inlined: | apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 imagePullPolicy: IfNotPresent resources: limits: cpu: 101m memory: 128Mi requests: cpu: 101m memory: 128Mi serviceType: ClusterIP redisExporter: enabled: false image: quay.io/opstree/redis-exporter:1.0 storage: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi name: my-redis-service [...] 3.5.8.1.2. Configuring the service Without specific customization, the service will be created with a default configuration. You can use either command-line arguments or a file to specify your own configuration. 3.5.8.1.2.1. Using command-line arguments Use the --parameters (or -p ) flag to specify your own configuration. 
The following example configures the Redis service with three parameters: USD odo service create redis-operator.v0.8.0/Redis my-redis-service \ -p kubernetesConfig.image=quay.io/opstree/redis:v6.2.5 \ -p kubernetesConfig.serviceType=ClusterIP \ -p redisExporter.image=quay.io/opstree/redis-exporter:1.0 Successfully added service to the configuration; do 'odo push' to create service on the cluster Example command USD cat kubernetes/odo-service-my-redis-service.yaml Example output apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 serviceType: ClusterIP redisExporter: image: quay.io/opstree/redis-exporter:1.0 You can obtain the possible parameters for a specific service using the odo catalog describe service command. 3.5.8.1.2.2. Using a file Use a YAML manifest to configure your own specification. In the following example, the Redis service is configured with three parameters. Create a manifest: USD cat > my-redis.yaml <<EOF apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 serviceType: ClusterIP redisExporter: image: quay.io/opstree/redis-exporter:1.0 EOF Create the service from the manifest: USD odo service create --from-file my-redis.yaml Successfully added service to the configuration; do 'odo push' to create service on the cluster 3.5.8.2. Deleting a service To delete a service, run the command: USD odo service delete Example output USD odo service list NAME MANAGED BY ODO STATE AGE Redis/my-redis-service Yes (api) Deleted locally 5m39s USD odo service delete Redis/my-redis-service ? Are you sure you want to delete Redis/my-redis-service Yes Service "Redis/my-redis-service" has been successfully deleted; do 'odo push' to delete service from the cluster Use the --force (or -f ) flag to force the deletion of the service without confirmation. 3.5.8.3. Listing services To list the services created for your component, run the command: USD odo service list Example output USD odo service list NAME MANAGED BY ODO STATE AGE Redis/my-redis-service-1 Yes (api) Not pushed Redis/my-redis-service-2 Yes (api) Pushed 52s Redis/my-redis-service-3 Yes (api) Deleted locally 1m22s For each service, STATE indicates if the service has been pushed to the cluster using the odo push command, or if the service is still running on the cluster but removed from the devfile locally using the odo service delete command. 3.5.8.4. Getting information about a service To get details of a service such as its kind, version, name, and list of configured parameters, run the command: USD odo service describe Example output USD odo service describe Redis/my-redis-service Version: redis.redis.opstreelabs.in/v1beta1 Kind: Redis Name: my-redis-service Parameters: NAME VALUE kubernetesConfig.image quay.io/opstree/redis:v6.2.5 kubernetesConfig.serviceType ClusterIP redisExporter.image quay.io/opstree/redis-exporter:1.0 3.5.9. odo storage odo lets users manage storage volumes that are attached to the components. A storage volume can be either an ephemeral volume using an emptyDir Kubernetes volume, or a Persistent Volume Claim (PVC). A PVC allows users to claim a persistent volume (such as a GCE PersistentDisk or an iSCSI volume) without understanding the details of the particular cloud environment. The persistent storage volume can be used to persist data across restarts and rebuilds of the component. 3.5.9.1. 
Adding a storage volume To add a storage volume to the cluster, run the command: USD odo storage create Example output: USD odo storage create store --path /data --size 1Gi [✓] Added storage store to nodejs-project-ufyy USD odo storage create tempdir --path /tmp --size 2Gi --ephemeral [✓] Added storage tempdir to nodejs-project-ufyy Please use `odo push` command to make the storage accessible to the component In the above example, the first storage volume has been mounted to the /data path and has a size of 1Gi , and the second volume has been mounted to /tmp and is ephemeral. 3.5.9.2. Listing the storage volumes To check the storage volumes currently used by the component, run the command: USD odo storage list Example output: USD odo storage list The component 'nodejs-project-ufyy' has the following storage attached: NAME SIZE PATH STATE store 1Gi /data Not Pushed tempdir 2Gi /tmp Not Pushed 3.5.9.3. Deleting a storage volume To delete a storage volume, run the command: USD odo storage delete Example output: USD odo storage delete store -f Deleted storage store from nodejs-project-ufyy Please use `odo push` command to delete the storage from the cluster In the above example, using the -f flag force deletes the storage without asking user permission. 3.5.9.4. Adding storage to specific container If your devfile has multiple containers, you can specify which container you want the storage to attach to, using the --container flag in the odo storage create command. The following example is an excerpt from a devfile with multiple containers : components: - name: nodejs1 container: image: registry.access.redhat.com/ubi8/nodejs-12:1-36 memoryLimit: 1024Mi endpoints: - name: "3000-tcp" targetPort: 3000 mountSources: true - name: nodejs2 container: image: registry.access.redhat.com/ubi8/nodejs-12:1-36 memoryLimit: 1024Mi In the example, there are two containers, nodejs1 and nodejs2 . To attach storage to the nodejs2 container, use the following command: USD odo storage create --container Example output: USD odo storage create store --path /data --size 1Gi --container nodejs2 [✓] Added storage store to nodejs-testing-xnfg Please use `odo push` command to make the storage accessible to the component You can list the storage resources, using the odo storage list command: USD odo storage list Example output: The component 'nodejs-testing-xnfg' has the following storage attached: NAME SIZE PATH CONTAINER STATE store 1Gi /data nodejs2 Not Pushed 3.5.10. Common flags The following flags are available with most odo commands: Table 3.1. odo flags Command Description --context Set the context directory where the component is defined. --project Set the project for the component. Defaults to the project defined in the local configuration. If none is available, then current project on the cluster. --app Set the application of the component. Defaults to the application defined in the local configuration. If none is available, then app . --kubeconfig Set the path to the kubeconfig value if not using the default configuration. --show-log Use this flag to see the logs. -f , --force Use this flag to tell the command not to prompt the user for confirmation. -v , --v Set the verbosity level. See Logging in odo for more information. -h , --help Output the help for a command. Note Some flags might not be available for some commands. Run the command with the --help flag to get a list of all the available flags. 3.5.11. 
JSON output The odo commands that output content generally accept a -o json flag to output this content in JSON format, suitable for other programs to parse this output more easily. The output structure is similar to Kubernetes resources, with the kind , apiVersion , metadata , spec , and status fields. List commands return a List resource, containing an items (or similar) field listing the items of the list, with each item also being similar to Kubernetes resources. Delete commands return a Status resource; see the Status Kubernetes resource . Other commands return a resource associated with the command, for example, Application , Storage , URL , and so on. The full list of commands currently accepting the -o json flag is: Commands Kind (version) Kind (version) of list items Complete content? odo application describe Application (odo.dev/v1alpha1) n/a no odo application list List (odo.dev/v1alpha1) Application (odo.dev/v1alpha1) ? odo catalog list components List (odo.dev/v1alpha1) missing yes odo catalog list services List (odo.dev/v1alpha1) ClusterServiceVersion (operators.coreos.com/v1alpha1) ? odo catalog describe component missing n/a yes odo catalog describe service CRDDescription (odo.dev/v1alpha1) n/a yes odo component create Component (odo.dev/v1alpha1) n/a yes odo component describe Component (odo.dev/v1alpha1) n/a yes odo component list List (odo.dev/v1alpha1) Component (odo.dev/v1alpha1) yes odo config view DevfileConfiguration (odo.dev/v1alpha1) n/a yes odo debug info OdoDebugInfo (odo.dev/v1alpha1) n/a yes odo env view EnvInfo (odo.dev/v1alpha1) n/a yes odo preference view PreferenceList (odo.dev/v1alpha1) n/a yes odo project create Project (odo.dev/v1alpha1) n/a yes odo project delete Status (v1) n/a yes odo project get Project (odo.dev/v1alpha1) n/a yes odo project list List (odo.dev/v1alpha1) Project (odo.dev/v1alpha1) yes odo registry list List (odo.dev/v1alpha1) missing yes odo service create Service n/a yes odo service describe Service n/a yes odo service list List (odo.dev/v1alpha1) Service yes odo storage create Storage (odo.dev/v1alpha1) n/a yes odo storage delete Status (v1) n/a yes odo storage list List (odo.dev/v1alpha1) Storage (odo.dev/v1alpha1) yes odo url list List (odo.dev/v1alpha1) URL (odo.dev/v1alpha1) yes
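As an illustration only, the JSON output of a list command can be piped to a tool such as jq for further processing. The exact field names depend on the command; the following sketch assumes that the items are returned under an items field with Kubernetes-style metadata:
odo storage list -o json | jq -r '.items[].metadata.name'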
[ "odo delete --deploy", "odo login -u developer -p developer", "odo catalog list components", "Odo Devfile Components: NAME DESCRIPTION REGISTRY dotnet50 Stack with .NET 5.0 DefaultDevfileRegistry dotnet60 Stack with .NET 6.0 DefaultDevfileRegistry dotnetcore31 Stack with .NET Core 3.1 DefaultDevfileRegistry go Stack with the latest Go version DefaultDevfileRegistry java-maven Upstream Maven and OpenJDK 11 DefaultDevfileRegistry java-openliberty Java application Maven-built stack using the Open Liberty ru... DefaultDevfileRegistry java-openliberty-gradle Java application Gradle-built stack using the Open Liberty r... DefaultDevfileRegistry java-quarkus Quarkus with Java DefaultDevfileRegistry java-springboot Spring Boot(R) using Java DefaultDevfileRegistry java-vertx Upstream Vert.x using Java DefaultDevfileRegistry java-websphereliberty Java application Maven-built stack using the WebSphere Liber... DefaultDevfileRegistry java-websphereliberty-gradle Java application Gradle-built stack using the WebSphere Libe... DefaultDevfileRegistry java-wildfly Upstream WildFly DefaultDevfileRegistry java-wildfly-bootable-jar Java stack with WildFly in bootable Jar mode, OpenJDK 11 and... DefaultDevfileRegistry nodejs Stack with Node.js 14 DefaultDevfileRegistry nodejs-angular Stack with Angular 12 DefaultDevfileRegistry nodejs-nextjs Stack with Next.js 11 DefaultDevfileRegistry nodejs-nuxtjs Stack with Nuxt.js 2 DefaultDevfileRegistry nodejs-react Stack with React 17 DefaultDevfileRegistry nodejs-svelte Stack with Svelte 3 DefaultDevfileRegistry nodejs-vue Stack with Vue 3 DefaultDevfileRegistry php-laravel Stack with Laravel 8 DefaultDevfileRegistry python Python Stack with Python 3.7 DefaultDevfileRegistry python-django Python3.7 with Django DefaultDevfileRegistry", "curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64 -o odo", "curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-linux-amd64.tar.gz -o odo.tar.gz tar xvzf odo.tar.gz", "chmod +x <filename>", "echo USDPATH", "odo version", "C:\\> path", "C:\\> odo version", "curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64 -o odo", "curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-darwin-amd64.tar.gz -o odo.tar.gz tar xvzf odo.tar.gz", "chmod +x odo", "echo USDPATH", "odo version", "ext install redhat.vscode-openshift-connector", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift Developer Tools and Services*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"ocp-tools-4.9-for-rhel-8-x86_64-rpms\"", "yum install odo", "odo version", "odo preference view", "PARAMETER CURRENT_VALUE UpdateNotification NamePrefix Timeout BuildTimeout PushTimeout Ephemeral ConsentTelemetry true", "odo preference set <key> <value>", "odo preference set updatenotification false", "Global preference was successfully updated", "odo preference unset <key>", "odo preference unset updatenotification ? 
Do you want to unset updatenotification in the preference (y/N) y", "Global preference was successfully updated", ".git *.js tests/", "components: - image: imageName: quay.io/myusername/myimage dockerfile: uri: ./Dockerfile <.> buildContext: USD{PROJECTS_ROOT} <.> name: component-built-from-dockerfile", "odo catalog list components", "NAME DESCRIPTION REGISTRY go Stack with the latest Go version DefaultDevfileRegistry java-maven Upstream Maven and OpenJDK 11 DefaultDevfileRegistry nodejs Stack with Node.js 14 DefaultDevfileRegistry php-laravel Stack with Laravel 8 DefaultDevfileRegistry python Python Stack with Python 3.7 DefaultDevfileRegistry [...]", "odo catalog describe component", "odo catalog describe component nodejs", "* Registry: DefaultDevfileRegistry <.> Starter Projects: <.> --- name: nodejs-starter attributes: {} description: \"\" subdir: \"\" projectsource: sourcetype: \"\" git: gitlikeprojectsource: commonprojectsource: {} checkoutfrom: null remotes: origin: https://github.com/odo-devfiles/nodejs-ex.git zip: null custom: null", "odo catalog list services", "Services available through Operators NAME CRDs postgresql-operator.v0.1.1 Backup, Database redis-operator.v0.8.0 RedisCluster, Redis", "odo catalog search service", "odo catalog search service postgres", "Services available through Operators NAME CRDs postgresql-operator.v0.1.1 Backup, Database", "odo catalog describe service", "odo catalog describe service postgresql-operator.v0.1.1/Database", "KIND: Database VERSION: v1alpha1 DESCRIPTION: Database is the Schema for the the Database Database API FIELDS: awsAccessKeyId (string) AWS S3 accessKey/token ID Key ID of AWS S3 storage. Default Value: nil Required to create the Secret with the data to allow send the backup files to AWS S3 storage. [...]", "odo catalog describe service redis-operator.v0.8.0", "NAME: redis-operator.v0.8.0 DESCRIPTION: A Golang based redis operator that will make/oversee Redis standalone/cluster mode setup on top of the Kubernetes. It can create a redis cluster setup with best practices on Cloud as well as the Bare metal environment. Also, it provides an in-built monitoring capability using ... (cut short for beverity) Logging Operator is licensed under [Apache License, Version 2.0](https://github.com/OT-CONTAINER-KIT/redis-operator/blob/master/LICENSE) CRDs: NAME DESCRIPTION RedisCluster Redis Cluster Redis Redis", "odo create nodejs mynodejs", "odo create nodejs mynodejs --context ./node-backend", "odo create nodejs --app myapp --project backend", "odo catalog describe component nodejs", "odo create nodejs --starter nodejs-starter", "odo create mynodejs --devfile https://raw.githubusercontent.com/odo-devfiles/registry/master/devfiles/nodejs/devfile.yaml", "odo create ? Which devfile component type do you wish to create go ? What do you wish to name the new devfile component go-api ? What project do you want the devfile component to be created in default Devfile Object Validation [✓] Checking devfile existence [164258ns] [✓] Creating a devfile component from registry: DefaultDevfileRegistry [246051ns] Validation [✓] Validating if devfile name is correct [92255ns] ? Do you want to download a starter project Yes Starter Project [✓] Downloading starter project go-starter from https://github.com/devfile-samples/devfile-stack-go.git [429ms] Please use odo push command to create the component with source deployed", "odo delete", "odo delete --deploy", "odo delete --all", "schemaVersion: 2.2.0 [...] 
variables: CONTAINER_IMAGE: quay.io/phmartin/myimage commands: - id: build-image apply: component: outerloop-build - id: deployk8s apply: component: outerloop-deploy - id: deploy composite: commands: - build-image - deployk8s group: kind: deploy isDefault: true components: - name: outerloop-build image: imageName: \"{{CONTAINER_IMAGE}}\" dockerfile: uri: ./Dockerfile buildContext: USD{PROJECTS_ROOT} - name: outerloop-deploy kubernetes: inlined: | kind: Deployment apiVersion: apps/v1 metadata: name: my-component spec: replicas: 1 selector: matchLabels: app: node-app template: metadata: labels: app: node-app spec: containers: - name: main image: {{CONTAINER_IMAGE}}", "odo list", "APP NAME PROJECT TYPE STATE MANAGED BY ODO app backend myproject spring Pushed Yes", "odo service list", "NAME MANAGED BY ODO STATE AGE PostgresCluster/hippo Yes (backend) Pushed 59m41s", "odo link PostgresCluster/hippo", "[✓] Successfully created link between component \"backend\" and service \"PostgresCluster/hippo\" To apply the link, please use `odo push`", "odo url list", "Found the following URLs for component backend NAME STATE URL PORT SECURE KIND 8080-tcp Pushed http://8080-tcp.192.168.39.112.nip.io 8080 false ingress", "odo describe", "Component Name: backend Type: spring Environment Variables: · PROJECTS_ROOT=/projects · PROJECT_SOURCE=/projects · DEBUG_PORT=5858 Storage: · m2 of size 3Gi mounted to /home/user/.m2 URLs: · http://8080-tcp.192.168.39.112.nip.io exposed via 8080 Linked Services: · PostgresCluster/hippo Environment Variables: · POSTGRESCLUSTER_PGBOUNCER-EMPTY · POSTGRESCLUSTER_PGBOUNCER.INI · POSTGRESCLUSTER_ROOT.CRT · POSTGRESCLUSTER_VERIFIER · POSTGRESCLUSTER_ID_ECDSA · POSTGRESCLUSTER_PGBOUNCER-VERIFIER · POSTGRESCLUSTER_TLS.CRT · POSTGRESCLUSTER_PGBOUNCER-URI · POSTGRESCLUSTER_PATRONI.CRT-COMBINED · POSTGRESCLUSTER_USER · pgImage · pgVersion · POSTGRESCLUSTER_CLUSTERIP · POSTGRESCLUSTER_HOST · POSTGRESCLUSTER_PGBACKREST_REPO.CONF · POSTGRESCLUSTER_PGBOUNCER-USERS.TXT · POSTGRESCLUSTER_SSH_CONFIG · POSTGRESCLUSTER_TLS.KEY · POSTGRESCLUSTER_CONFIG-HASH · POSTGRESCLUSTER_PASSWORD · POSTGRESCLUSTER_PATRONI.CA-ROOTS · POSTGRESCLUSTER_DBNAME · POSTGRESCLUSTER_PGBOUNCER-PASSWORD · POSTGRESCLUSTER_SSHD_CONFIG · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.KEY · POSTGRESCLUSTER_PGBACKREST_INSTANCE.CONF · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.CA-ROOTS · POSTGRESCLUSTER_PGBOUNCER-HOST · POSTGRESCLUSTER_PORT · POSTGRESCLUSTER_ROOT.KEY · POSTGRESCLUSTER_SSH_KNOWN_HOSTS · POSTGRESCLUSTER_URI · POSTGRESCLUSTER_PATRONI.YAML · POSTGRESCLUSTER_DNS.CRT · POSTGRESCLUSTER_DNS.KEY · POSTGRESCLUSTER_ID_ECDSA.PUB · POSTGRESCLUSTER_PGBOUNCER-FRONTEND.CRT · POSTGRESCLUSTER_PGBOUNCER-PORT · POSTGRESCLUSTER_CA.CRT", "ls kubernetes odo-service-backend-postgrescluster-hippo.yaml odo-service-hippo.yaml", "odo unlink PostgresCluster/hippo", "[✓] Successfully unlinked component \"backend\" from service \"PostgresCluster/hippo\" To apply the changes, please use `odo push`", "ls kubernetes odo-service-hippo.yaml", "odo link PostgresCluster/hippo --inlined", "[✓] Successfully created link between component \"backend\" and service \"PostgresCluster/hippo\" To apply the link, please use `odo push`", "kubernetes: inlined: | apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: creationTimestamp: null name: backend-postgrescluster-hippo spec: application: group: apps name: backend-app resource: deployments version: v1 bindAsFiles: false detectBindingResources: true services: - group: 
postgres-operator.crunchydata.com id: hippo kind: PostgresCluster name: hippo version: v1beta1 status: secret: \"\" name: backend-postgrescluster-hippo", "odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}'", "odo exec -- env | grep pgVersion", "pgVersion=13", "odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}' --map pgImage='{{ .hippo.spec.image }}'", "odo exec -- env | grep -e \"pgVersion\\|pgImage\"", "pgVersion=13 pgImage=registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.4-0", "Linked Services: · PostgresCluster/hippo", "odo unlink PostgresCluster/hippo odo push", "odo link PostgresCluster/hippo --bind-as-files odo push", "odo describe Component Name: backend Type: spring Environment Variables: · PROJECTS_ROOT=/projects · PROJECT_SOURCE=/projects · DEBUG_PORT=5858 · SERVICE_BINDING_ROOT=/bindings · SERVICE_BINDING_ROOT=/bindings Storage: · m2 of size 3Gi mounted to /home/user/.m2 URLs: · http://8080-tcp.192.168.39.112.nip.io exposed via 8080 Linked Services: · PostgresCluster/hippo Files: · /bindings/backend-postgrescluster-hippo/pgbackrest_instance.conf · /bindings/backend-postgrescluster-hippo/user · /bindings/backend-postgrescluster-hippo/ssh_known_hosts · /bindings/backend-postgrescluster-hippo/clusterIP · /bindings/backend-postgrescluster-hippo/password · /bindings/backend-postgrescluster-hippo/patroni.yaml · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.crt · /bindings/backend-postgrescluster-hippo/pgbouncer-host · /bindings/backend-postgrescluster-hippo/root.key · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.key · /bindings/backend-postgrescluster-hippo/pgbouncer.ini · /bindings/backend-postgrescluster-hippo/uri · /bindings/backend-postgrescluster-hippo/config-hash · /bindings/backend-postgrescluster-hippo/pgbouncer-empty · /bindings/backend-postgrescluster-hippo/port · /bindings/backend-postgrescluster-hippo/dns.crt · /bindings/backend-postgrescluster-hippo/pgbouncer-uri · /bindings/backend-postgrescluster-hippo/root.crt · /bindings/backend-postgrescluster-hippo/ssh_config · /bindings/backend-postgrescluster-hippo/dns.key · /bindings/backend-postgrescluster-hippo/host · /bindings/backend-postgrescluster-hippo/patroni.crt-combined · /bindings/backend-postgrescluster-hippo/pgbouncer-frontend.ca-roots · /bindings/backend-postgrescluster-hippo/tls.key · /bindings/backend-postgrescluster-hippo/verifier · /bindings/backend-postgrescluster-hippo/ca.crt · /bindings/backend-postgrescluster-hippo/dbname · /bindings/backend-postgrescluster-hippo/patroni.ca-roots · /bindings/backend-postgrescluster-hippo/pgbackrest_repo.conf · /bindings/backend-postgrescluster-hippo/pgbouncer-port · /bindings/backend-postgrescluster-hippo/pgbouncer-verifier · /bindings/backend-postgrescluster-hippo/id_ecdsa · /bindings/backend-postgrescluster-hippo/id_ecdsa.pub · /bindings/backend-postgrescluster-hippo/pgbouncer-password · /bindings/backend-postgrescluster-hippo/pgbouncer-users.txt · /bindings/backend-postgrescluster-hippo/sshd_config · /bindings/backend-postgrescluster-hippo/tls.crt", "odo exec -- cat /bindings/backend-postgrescluster-hippo/password", "q({JC:jn^mm/Bw}eu+j.GX{k", "odo exec -- cat /bindings/backend-postgrescluster-hippo/user", "hippo", "odo exec -- cat /bindings/backend-postgrescluster-hippo/clusterIP", "10.101.78.56", "odo link PostgresCluster/hippo --map pgVersion='{{ .hippo.spec.postgresVersion }}' --map pgImage='{{ .hippo.spec.image }}' --bind-as-files odo push", "odo 
exec -- cat /bindings/backend-postgrescluster-hippo/pgVersion", "13", "odo exec -- cat /bindings/backend-postgrescluster-hippo/pgImage", "registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.4-0", "odo registry list", "NAME URL SECURE DefaultDevfileRegistry https://registry.devfile.io No", "odo registry add", "odo registry add StageRegistry https://registry.stage.devfile.io New registry successfully added", "odo registry add MyRegistry https://myregistry.example.com --token <access_token> New registry successfully added", "odo registry delete", "odo registry delete StageRegistry ? Are you sure you want to delete registry \"StageRegistry\" Yes Successfully deleted registry", "odo registry update", "odo registry update MyRegistry https://otherregistry.example.com --token <other_access_token> ? Are you sure you want to update registry \"MyRegistry\" Yes Successfully updated registry", "odo service create", "odo catalog list services Services available through Operators NAME CRDs redis-operator.v0.8.0 RedisCluster, Redis odo service create redis-operator.v0.8.0/Redis my-redis-service Successfully added service to the configuration; do 'odo push' to create service on the cluster", "cat kubernetes/odo-service-my-redis-service.yaml", "apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 imagePullPolicy: IfNotPresent resources: limits: cpu: 101m memory: 128Mi requests: cpu: 101m memory: 128Mi serviceType: ClusterIP redisExporter: enabled: false image: quay.io/opstree/redis-exporter:1.0 storage: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi", "cat devfile.yaml", "[...] components: - kubernetes: uri: kubernetes/odo-service-my-redis-service.yaml name: my-redis-service [...]", "odo service create redis-operator.v0.8.0/Redis", "odo service create redis-operator.v0.8.0/Redis my-redis-service --inlined Successfully added service to the configuration; do 'odo push' to create service on the cluster", "cat devfile.yaml", "[...] 
components: - kubernetes: inlined: | apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 imagePullPolicy: IfNotPresent resources: limits: cpu: 101m memory: 128Mi requests: cpu: 101m memory: 128Mi serviceType: ClusterIP redisExporter: enabled: false image: quay.io/opstree/redis-exporter:1.0 storage: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi name: my-redis-service [...]", "odo service create redis-operator.v0.8.0/Redis my-redis-service -p kubernetesConfig.image=quay.io/opstree/redis:v6.2.5 -p kubernetesConfig.serviceType=ClusterIP -p redisExporter.image=quay.io/opstree/redis-exporter:1.0 Successfully added service to the configuration; do 'odo push' to create service on the cluster", "cat kubernetes/odo-service-my-redis-service.yaml", "apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 serviceType: ClusterIP redisExporter: image: quay.io/opstree/redis-exporter:1.0", "cat > my-redis.yaml <<EOF apiVersion: redis.redis.opstreelabs.in/v1beta1 kind: Redis metadata: name: my-redis-service spec: kubernetesConfig: image: quay.io/opstree/redis:v6.2.5 serviceType: ClusterIP redisExporter: image: quay.io/opstree/redis-exporter:1.0 EOF", "odo service create --from-file my-redis.yaml Successfully added service to the configuration; do 'odo push' to create service on the cluster", "odo service delete", "odo service list NAME MANAGED BY ODO STATE AGE Redis/my-redis-service Yes (api) Deleted locally 5m39s", "odo service delete Redis/my-redis-service ? Are you sure you want to delete Redis/my-redis-service Yes Service \"Redis/my-redis-service\" has been successfully deleted; do 'odo push' to delete service from the cluster", "odo service list", "odo service list NAME MANAGED BY ODO STATE AGE Redis/my-redis-service-1 Yes (api) Not pushed Redis/my-redis-service-2 Yes (api) Pushed 52s Redis/my-redis-service-3 Yes (api) Deleted locally 1m22s", "odo service describe", "odo service describe Redis/my-redis-service Version: redis.redis.opstreelabs.in/v1beta1 Kind: Redis Name: my-redis-service Parameters: NAME VALUE kubernetesConfig.image quay.io/opstree/redis:v6.2.5 kubernetesConfig.serviceType ClusterIP redisExporter.image quay.io/opstree/redis-exporter:1.0", "odo storage create", "odo storage create store --path /data --size 1Gi [✓] Added storage store to nodejs-project-ufyy odo storage create tempdir --path /tmp --size 2Gi --ephemeral [✓] Added storage tempdir to nodejs-project-ufyy Please use `odo push` command to make the storage accessible to the component", "odo storage list", "odo storage list The component 'nodejs-project-ufyy' has the following storage attached: NAME SIZE PATH STATE store 1Gi /data Not Pushed tempdir 2Gi /tmp Not Pushed", "odo storage delete", "odo storage delete store -f Deleted storage store from nodejs-project-ufyy Please use `odo push` command to delete the storage from the cluster", "components: - name: nodejs1 container: image: registry.access.redhat.com/ubi8/nodejs-12:1-36 memoryLimit: 1024Mi endpoints: - name: \"3000-tcp\" targetPort: 3000 mountSources: true - name: nodejs2 container: image: registry.access.redhat.com/ubi8/nodejs-12:1-36 memoryLimit: 1024Mi", "odo storage create --container", "odo storage create store --path /data --size 1Gi --container nodejs2 [✓] Added storage store to nodejs-testing-xnfg Please use `odo push` command 
to make the storage accessible to the component", "odo storage list", "The component 'nodejs-testing-xnfg' has the following storage attached: NAME SIZE PATH CONTAINER STATE store 1Gi /data nodejs2 Not Pushed" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/cli_tools/developer-cli-odo
Chapter 15. Kernel
Chapter 15. Kernel Kernel Media support The following features are presented as Technology Previews: The latest upstream video4linux Digital video broadcasting Primarily infrared remote control device support Various webcam support fixes and improvements Package: kernel Linux (NameSpace) Container [LXC] Linux containers provide a flexible approach to application runtime containment on bare-metal systems without the need to fully virtualize the workload. Red Hat Enterprise Linux 6 provides application level containers to separate and control the application resource usage policies through cgroups and namespaces. This release includes basic management of container life-cycle by allowing creation, editing and deletion of containers using the libvirt API and the virt-manager GUI. Linux Containers are a Technology Preview. Packages: libvirt , virt-manager Diagnostic pulse for the fence_ipmilan agent, BZ# 655764 A diagnostic pulse can now be issued on the IPMI interface using the fence_ipmilan agent. This new Technology Preview is used to force a kernel dump of a host if the host is configured to do so. Note that this feature is not a substitute for the off operation in a production cluster. Package: fence-agents
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/chap-red_hat_enterprise_linux-6.10_technical_notes-technology_previews-kernel
Appendix A. Reference Material
Appendix A. Reference Material A.1. Elytron subsystem components reference Table A.1. add-prefix-role-mapper Attributes Attribute Description prefix The prefix to add to each role. Table A.2. add-suffix-role-mapper Attributes Attribute Description suffix The suffix to add to each role. Table A.3. aggregate-http-server-mechanism-factory Attributes Attribute Description http-server-mechanism-factories The list of HTTP server factories to aggregate. Table A.4. aggregate-principal-decoder Attributes Attribute Description principal-decoders The list of principal decoders to aggregate. Table A.5. aggregate-principal-transformer Attributes Attribute Description principal-transformers The list of principal transformers to aggregate. Table A.6. aggregate-providers Attributes Attribute Description providers The list of referenced Provider[] resources to aggregate. Table A.7. aggregate-realm Attributes Attribute Description authentication-realm Reference to the security realm to use for authentication steps. This is used for obtaining or validating credentials. authorization-realm Reference to the security realm to use for loading the identity for authorization steps. authorization-realms Reference to the security realms to aggregate for loading the identity for authorization steps. For information about using multiple authorization realms, see Configure Authentication and Authorization Using Multiple Identity Stores in the How to Configure Identity Management guide. Note The authorization-realm and authorization-realms attributes are mutually exclusive. Define only one of the two attributes in a realm. Table A.8. aggregate-role-mapper Attributes Attribute Description role-mappers The list of role mappers to aggregate. Table A.9. aggregate-sasl-server-factory Attributes Attribute Description sasl-server-factories The list of SASL server factories to aggregate. Table A.10. authentication-configuration Attributes Attribute Description anonymous If true anonymous authentication is allowed. The default is false . authentication-name The authentication name to use. authorization-name The authorization name to use. credential-reference The credential to use for authentication. This can be in clear text or as a reference to a credential stored in a credential-store . extends An existing authentication configuration to extend. host The host to use. kerberos-security-factory Reference to a kerberos security factory used to obtain a GSS kerberos credential. mechanism-properties Configuration properties for the SASL authentication mechanism. port The port to use. protocol The protocol to use. realm The realm to use. sasl-mechanism-selector The SASL mechanism selector string. For more information about the grammar required for the sasl-mechanism-selector , see sasl-mechanism-selector Grammar in How to Configure Server Security for JBoss EAP. security-domain Reference to a security domain to obtain a forwarded identity. Table A.11. authentication-context Attributes Attribute Description extends An existing authentication context to extend. match-rules The rules to match against for this authentication context. Table A.12. authentication-context match-rules Attributes Attribute Description authentication-configuration Reference to the authentication configuration to use for a successful match. match-abstract-type The abstract type to match against. match-abstract-type-authority The abstract type authority to match against. match-host The host to match against. 
match-local-security-domain The local security domain to match against. match-no-user If true , the rule will match against no user. match-path The path to match against. match-port The port to match against. match-protocol The protocol to match against. match-urn The URN to match against. match-user The user to match against. ssl-context Reference to the ssl-context to use for a successful match. Table A.13. caching-realm Attributes Attribute Description maximum-age The time in milliseconds that an item can stay in the cache. A value of -1 keeps items indefinitely. This defaults to -1 . maximum-entries The maximum number of entries to keep in the cache. This defaults to 16 . realm A reference to a cacheable security realm such as jdbc-realm , ldap-realm , filesystem-realm or a custom security realm. Table A.14. case-principal-transformer attributes Attribute Description upper-case An optional attribute that converts a principal transformer's name to uppercase characters when set as true , which is the default setting. Set the attribute to false to convert the principal transformer's name to lowercase characters. Table A.15. certificate-authority-account Attributes Attribute Description alias The alias of certificate authority account key in the keystore. If the alias does not already exist in the keystore, a certificate authority account key will be automatically generated and stored as a PrivateKeyEntry under the alias. certificate-authority The name of the certificate authority to use. The default, and only allowed value, is LetsEncrypt . contact-urls A list of URLs that the certificate authority can contact about any issues related to this account. credential-reference The credential to be used when accessing the certificate authority account key. key-store The keystore that contains the certificate authority account key. Table A.16. chained-principal-transformer Attributes Attribute Description principal-transformers List of principal transformers to chain. Table A.17. client-ssl-context Attributes Attribute Description cipher-suite-filter The filter to apply to specify the enabled cipher suites. This filter takes a list of items delimited by colons, commas, or spaces. Each item may be an OpenSSL-style cipher suite name, a standard SSL/TLS cipher suite name, or a keyword such as TLSv1.2 or DES . A full list of keywords as well as additional details on creating a filter can be found in the Javadoc for the CipherSuiteSelector class. The default value is DEFAULT , which corresponds to all known cipher suites that do not have NULL encryption and excludes any cipher suites that have no authentication. key-manager Reference to the key-manager to use within the SSLContext . protocols The enabled protocols. Allowed options: SSLv2 , SSLv3 , TLSv1 , TLSv1.1 , TLSv1.2 , TLSv1.3 . This defaults to enabling TLSv1 , TLSv1.1 , TLSv1.2 , and TLSv1.3 . Warning Red Hat recommends that SSLv2, SSLv3, and TLSv1.0 be explicitly disabled in favor of TLSv1.1 or TLSv1.2 in all affected packages. provider-name The name of the provider to use. If not specified, all providers from providers will be passed to the SSLContext. providers The name of the providers to obtain the Provider[] to use to load the SSLContext . session-timeout The timeout for SSL sessions. trust-manager Reference to the trust-manager to use within the SSLContext . Table A.18. concatenating-principal-decoder Attributes Attribute Description joiner The string that will be used to join the values in the principal-decoders attribute.
principal-decoders The list of principal decoders to concatenate. Table A.19. configurable-http-server-mechanism-factory Attributes Attribute Description filters The list of filters to be applied in order to enable or disable mechanisms based on the name. http-server-mechanism-factory Reference to the http server factory to be wrapped. properties Custom properties to be passed in to the HTTP server factory calls. Table A.20. configurable-http-server-mechanism-factory filters Attributes Attribute Description pattern-filter Filter based on a regular expression pattern. enabling If true the filter will be enabled if the mechanism matches. This defaults to true . Table A.21. configurable-sasl-server-factory Attributes Attribute Description filters List of filters to be evaluated sequentially and combined using or . properties Custom properties to be passed in to the SASL server factory calls. protocol The protocol passed into the factory when creating the mechanism. sasl-server-factory Reference to the SASL server factory to be wrapped. server-name The server name passed into the factory when creating the mechanism. Table A.22. configurable-sasl-server-factory filters Attributes Attribute Description enabling If true the filter will be enabled if the factory matches. This defaults to true . predefined-filter A predefined filter to use to filter the mechanism name. Allowed values are HASH_MD5 , HASH_SHA , HASH_SHA_256 , HASH_SHA_384 , HASH_SHA_512 , GS2 , SCRAM , DIGEST , IEC_ISO_9798 , EAP , MUTUAL , BINDING , and RECOMMENDED . pattern-filter A filter for the mechanism name based on a regular expression. Table A.23. constant-permission-mapper Attributes Attribute Description permission-sets The permission sets to assign in the event of a match. Permission sets can be used to assign permissions to an identity. permission-sets can take the following attribute: permission-set A reference to a permission set. Note The permissions attribute is deprecated, and is replaced by permission-sets . Table A.24. constant-principal-decoder Attributes Attribute Description constant The constant value the principal decoder will always return. Table A.25. constant-principal-transformer Attributes Attribute Description constant The constant value this principal transformer will always return. Table A.26. constant-realm-mapper Attributes Attribute Description realm-name Reference to the realm that will be returned. Table A.27. constant-role-mapper Attributes Attribute Description roles The list of roles that will be returned. Table A.28. credential-store Attributes Attribute Description create Specifies whether the credential store should create storage when it does not exist. The default value is false . credential-reference The reference to the credential used to create protection parameter. This can be in clear text or as a reference to a credential stored in a credential-store . implementation-properties Map of credentials store implementation-specific properties. modifiable Whether you can modify the credential store. The default value is true . other-providers The name of the providers to obtain the providers to search for the one that can create the required Jakarta Connectors objects within the credential store. This is valid only for keystore-based credential store. If this is not specified, then the global list of providers is used instead. path The file name of the credential store. provider-name The name of the provider to use to instantiate the CredentialStoreSpi .
If the provider is not specified, then the first provider found that can create an instance of the specified type will be used. providers The name of the providers to obtain the providers to search for the one that can create the required credential store type. If this is not specified, then the global list of providers is used instead. relative-to The base path this credential store path is relative to. type Type of the credential store, for example, KeyStoreCredentialStore . Table A.29. credential-store alias Attribute Description entry-type Type of credential entry stored in the credential store. secret-value Secret value such as password. Table A.30. credential-store KeyStoreCredentialStore implementation properties Attribute Description cryptoAlg Cryptographic algorithm name to be used to encrypt decrypt entries at external storage. This attribute is only valid if external is enabled. Defaults to AES . external Whether data is stored to external storage and encrypted by the keyAlias . Defaults to false . externalPath Specifies path to external storage. This attribute is only valid if external is enabled. keyAlias The secret key alias within the credential store that is used to encrypt or decrypt data to the external storage. keyStoreType The keystore type, such as PKCS11 . Defaults to KeyStore.getDefaultType() . Table A.31. custom-credential-security-factory Attributes Attribute Description class-name The class name of the implementation of the custom security factory. configuration The optional key and value configuration for the custom security factory. module The module to use to load the custom security factory. Table A.32. custom-modifiable-realm Attributes Attribute Description class-name The class name of the implementation of the custom realm. configuration The optional key and value configuration for the custom realm. module The module to use to load the custom realm. Table A.33. custom-permission-mapper Attributes Attribute Description class-name Fully qualified class name of the permission mapper. configuration The optional key and value configuration for the permission mapper. module Name of the module to use to load the permission mapper. Table A.34. custom-principal-decoder Attributes Attribute Description class-name Fully qualified class name of the principal decoder. configuration The optional key and value configuration for the principal decoder. module Name of the module to use to load the principal decoder. Table A.35. custom-principal-transformer Attributes Attribute Description class-name Fully qualified class name of the principal transformer. configuration The optional key and value configuration for the principal transformer. module Name of the module to use to load the principal transformer. Table A.36. custom-realm Attributes Attribute Description class-name Fully qualified class name of the custom realm. configuration The optional key and value configuration for the custom realm. module Name of the module to use to load the custom realm. Table A.37. custom-realm-mapper Attributes Attribute Description class-name Fully qualified class name of the realm mapper. configuration The optional key and value configuration for the realm mapper. module Name of the module to use to load the realm mapper. Table A.38. custom-role-decoder Attributes Attribute Description class-name Fully qualified class name of the role decoder. configuration The optional key and value configuration for the role decoder. module Name of the module to use to load the role decoder. 
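The custom component resources described in the preceding tables ( custom-realm , custom-permission-mapper , custom-role-decoder , and so on) all take the same class-name , configuration , and module attributes. As a hedged sketch only, a hypothetical custom realm could be registered from the management CLI as follows; the module name, class name, and configuration keys are placeholders rather than values taken from this document:

    # Register a hypothetical custom realm packaged as a JBoss EAP module (placeholder names)
    /subsystem=elytron/custom-realm=myCustomRealm:add( \
        module=com.example.custom-realm, \
        class-name=com.example.auth.MyCustomRealm, \
        configuration={exampleKey="exampleValue"})

    reload

The referenced class is expected to implement the matching Elytron SPI, for example org.wildfly.security.auth.server.SecurityRealm for a custom realm, and the module must be available on the server before the resource is added.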
Table A.39. custom-role-mapper Attributes Attribute Description class-name Fully qualified class name of the role mapper. configuration The optional key and value configuration for the role mapper. module Name of the module to use to load the role mapper. Table A.40. dir-context Attributes Attribute Description authentication-context The authentication context to obtain login credentials to connect to the LDAP server. Can be omitted if authentication-level is none , which is equivalent to anonymous authentication. authentication-level The authentication level, meaning security level or authentication mechanism, to use. Corresponds to SECURITY_AUTHENTICATION or java.naming.security.authentication environment property. Allowed values are none , simple and sasl_mech format. The sasl_mech format is a space-separated list of SASL mechanism names. connection-timeout The timeout for connecting to the LDAP server in milliseconds. credential-reference The credential reference to authenticate and connect to the LDAP server. This can be omitted if authentication-level is none , which is equivalent to anonymous authentication. enable-connection-pooling If true connection pooling is enabled. This defaults to false . module Name of module that will be used as the class loading base. principal The principal to authenticate and connect to the LDAP server. This can be omitted if authentication-level is none which is equivalent to anonymous authentication. properties The additional connection properties for the DirContext . read-timeout The read timeout for an LDAP operation in milliseconds. referral-mode The mode used to determine if referrals should be followed. Allowed values are FOLLOW , IGNORE , and THROW . This defaults to IGNORE . ssl-context The name of the SSL context used to secure connection to the LDAP server. url The connection URL. Table A.41. expression=encryption Attributes Attribute Description default-resolver Optional attribute. The resolver to use when an encrypted expression is defined without one. For example if you set "exampleResolver" as the default-resolver and you create an encrypted expression with the command /subsystem=elytron/expression=encryption:create-expression(clear-text=TestPassword) , Elytron uses "exampleResolver" as the resolver for this encrypted expression. prefix The prefix to use within an encrypted expression. Default is ENC . This attribute is provided for those cases where ENC might already be defined. You shouldn't change this value unless it conflicts with an already defined ENC prefix. resolvers A list of defined resolvers. A resolver has the following attributes: name - The name of the individual configuration used to reference it. credential-store - Reference to the credential store instance that contains the secret key this resolver uses. secret-key - The alias of the secret key Elytron should use from within a given credential store. Table A.42. filesystem-realm Attributes Attribute Description encoded Whether the identity names should be stored encoded (Base32) in file names. levels The number of levels of directory hashing to apply. The default value is 2 . path The path to the file containing the realm. relative-to The predefined relative path to use with path . For example jboss.server.config.dir . Table A.43. filtering-key-store Attributes Attribute Description alias-filter A filter to apply to the aliases returned from the key-store . 
It can either be a comma-separated list of aliases to return or one of the following formats: ALL:-alias1:-alias2 NONE:+alias1:+alias2 Note The alias-filter attribute is case sensitive. Because the use of mixed-case or uppercase aliases, such as elytronAppServer , might not be recognized by some keystore providers, it is recommended to use lowercase aliases, such as elytronappserver . key-store Reference to the key-store to filter. Table A.44. generate-key-pair attributes Attribute Description algorithm Specifies the encryption algorithm, such as RSA, DSA, or EC. The default value is RSA. size Specifies the size of the private key in bits. The default size values in bits for the key pair types are as follows: RSA is 2048 ; DSA is 2048 ; and EC is 256 . Table A.45. http-authentication-factory Attributes Attribute Description http-server-mechanism-factory The HttpServerAuthenticationMechanismFactory to associate with this resource. mechanism-configurations The list of mechanism-specific configurations. security-domain The security domain to associate with this resource. Table A.46. http-authentication-factory mechanism-configurations Attributes Attribute Description credential-security-factory The security factory to use to obtain a credential as required by the mechanism. final-principal-transformer A final principal transformer to apply for this mechanism realm. host-name The host name this configuration applies to. mechanism-name This configuration will only apply where a mechanism with the name specified is used. If this attribute is omitted then this will match any mechanism name. mechanism-realm-configurations The list of definitions of the realm names as understood by the mechanism. pre-realm-principal-transformer A principal transformer to apply before the realm is selected. post-realm-principal-transformer A principal transformer to apply after the realm is selected. protocol The protocol this configuration applies to. realm-mapper The realm mapper to be used by the mechanism. Table A.47. http-authentication-factory mechanism-configurations mechanism-realm-configurations Attributes Attribute Description final-principal-transformer A final principal transformer to apply for this mechanism realm. post-realm-principal-transformer A principal transformer to apply after the realm is selected. pre-realm-principal-transformer A principal transformer to apply before the realm is selected. realm-mapper The realm mapper to be used by the mechanism. realm-name The name of the realm to be presented by the mechanism. Table A.48. identity-realm Attributes Attribute Description attribute-name The name of the attribute associated with this identity. attribute-values The list of values associated with the identities attribute. identity The identity available from the security realm. Table A.49. import-key-pair attributes Attribute Description key-passphrase Optional attribute. Sets the passphrase to decrypt the private key. private-key-location The path to a file containing a private key. Only specify if you have not already specified the private-key-string attribute. private-key-string Sets the private key as a string. Only specify if you have not already specified the private-key-location attribute. public-key-location Required if private key is in any format other than OpenSSH. The path to a file containing a public key. Only specify if you have not already specified the public-key-string attribute. public-key-string Required if private key is in any format other than OpenSSH. 
Sets the public key as a string. Only specify if you have not already specified the public-key-location attribute. Table A.50. jaspi-configuration Attributes Attribute Description application-context Used when registering this configuration with the AuthConfigFactory . Can be omitted to allow wildcard matching. description Is used to provide a description to the AuthConfigFactory . layer Used when registering this configuration with the AuthConfigFactory . Can be omitted to allow wildcard matching. name A name that allows the resource to be referenced in the management model. Table A.51. jaspi-configuration server-auth-module Attributes Attribute Description class-name The fully qualified class name of the ServerAuthModule . flag The control flag to indicate how this module operates in relation to the other modules. module The module to load the ServerAuthModule from. options Configuration options to be passed into the ServerAuthModule on initialization. Table A.52. jdbc-realm Attributes Attribute Description principal-query The list of authentication queries used to authenticate users based on specific key types. Table A.53. jdbc-realm principal-query Attributes Attribute Description attribute-mapping The list of attribute mappings defined for this resource. bcrypt-mapper A key mapper that maps a column returned from a SQL query to a Bcrypt key type. clear-password-mapper A key mapper that maps a column returned from a SQL query to a clear password key type. This has a password-index child element that is the column index from an authentication query that represents the user password. data-source The name of the datasource used to connect to the database. salted-simple-digest-mapper A key mapper that maps a column returned from a SQL query to a Salted Simple Digest key type. scram-mapper A key mapper that maps a column returned from a SQL query to a SCRAM key type. simple-digest-mapper A key mapper that maps a column returned from a SQL query to a Simple Digest key type. sql The SQL statement used to obtain the keys as table columns for a specific user and map them accordingly with their type. Table A.54. jdbc-realm principal-query attribute-mapping Attributes Attribute Description index The column index from a query that representing the mapped attribute. to The name of the identity attribute mapped from a column returned from a SQL query. Table A.55. jdbc-realm principal-query bcrypt-mapper Attributes Attribute Description iteration-count-index The column index from an authentication query that represents the password's iteration count, if supported. password-index The column index from an authentication query that represents the user password. salt-index The column index from an authentication query that represents the password's salt, if supported. Table A.56. jdbc-realm principal-query salted-simple-digest-mapper Attributes Attribute Description algorithm The algorithm for a specific password key mapper. Allowed values are password-salt-digest-md5 , password-salt-digest-sha-1 , password-salt-digest-sha-256 , password-salt-digest-sha-384 , password-salt-digest-sha-512 , salt-password-digest-md5 , salt-password-digest-sha-1 , salt-password-digest-sha-256 , salt-password-digest-sha-384 , and salt-password-digest-sha-512 . The default is password-salt-digest-md5 . password-index The column index from an authentication query that represents the user password. salt-index The column index from an authentication query that represents the password's salt, if supported. Table A.57. 
jdbc-realm principal-query simple-digest-mapper Attributes Attribute Description algorithm The algorithm for a specific password key mapper. Allowed values are simple-digest-md2 , simple-digest-md5 , simple-digest-sha-1 , simple-digest-sha-256 , simple-digest-sha-384 , and simple-digest-sha-512 . The default is simple-digest-md5 . password-index The column index from an authentication query that represents the user password. Table A.58. jdbc-realm principal-query scram-mapper Attributes Attribute Description algorithm The algorithm for a specific password key mapper. The allowed values are scram-sha-1 and scram-sha-256 . The default value is scram-sha-256 . iteration-count-index The column index from an authentication query that represents the password's iteration count, if supported. password-index The column index from an authentication query that represents the user password. salt-index The column index from an authentication query that represents the password's salt, if supported. Table A.59. kerberos-security-factory Attributes Attribute Description debug If true the JAAS step of obtaining the credential will have debug logging enabled. Defaults to false . mechanism-names The mechanism names the credential should be usable with. Names will be converted to OIDs and used together with OIDs from mechanism-oids attribute. mechanism-oids The list of mechanism OIDs the credential should be usable with. minimum-remaining-lifetime The amount of time in seconds a cached credential can have before it is recreated. obtain-kerberos-ticket Should the KerberosTicket also be obtained and associated with the credential. This is required to be true where credentials are delegated to the server. options The Krb5LoginModule additional options. path The path of the keytab to load to obtain the credential. principal The principal represented by the keytab. relative-to The relative path to the keytab. request-lifetime How much lifetime should be requested for newly created credentials. required Whether the keytab file with an adequate principal is required to exist at the time the service starts. server If true this factory is used for the server-side portion of Kerberos authentication. If false it is used for the client-side. Defaults to true wrap-gss-credential Whether generated GSS credentials should be wrapped to prevent improper disposal. Table A.60. key-manager Attributes Attribute Description algorithm The name of the algorithm to use to create the underlying KeyManagerFactory . This is provided by the JDK. For example, a JDK that uses SunJSSE provides the PKIX and SunX509 algorithms. More details on SunJSSE can be found in the Java Secure Socket Extension (JSSE) Reference Guide . alias-filter A filter to apply to the aliases returned from the keystore. This can either be a comma-separated list of aliases to return or one of the following formats: ALL:-alias1:-alias2 NONE:+alias1:+alias2 credential-reference The credential reference to decrypt keystore item. This can be specified in clear text or as a reference to a credential stored in a credential-store . This is not a password of the keystore. key-store Reference to the key-store to use to initialize the underlying KeyManagerFactory . provider-name The name of the provider to use to create the underlying KeyManagerFactory . providers Reference to obtain the Provider[] to use when creating the underlying KeyManagerFactory . Table A.61. 
key-store Attributes Attribute Description alias-filter A filter to apply to the aliases returned from the keystore, can either be a comma separated list of aliases to return or one of the following formats: ALL:-alias1:-alias2 NONE:+alias1:+alias2 Note The alias-filter attribute is case sensitive. Because the use of mixed-case or uppercase aliases, such as elytronAppServer , might not be recognized by some keystore providers, it is recommended to use lowercase aliases, such as elytronappserver . credential-reference The password to use to access the keystore. This can be specified in clear text or as a reference to a credential stored in a credential-store . path The path to the keystore file. provider-name The name of the provider to use to load the keystore. Setting this attribute disables searching for the first provider that can create a keystore of the specified type. providers A reference to the providers that should be used to obtain the list of provider instances to search. If not specified, the global list of providers will be used instead. relative-to The base path this store is relative to. This can be a full path or predefined path such as jboss.server.config.dir . required If true the keystore file referenced is required to exist at the time the keystore service starts. The default value is false . type The type of the keystore, for example, JKS . Note The following keystore types are detected automatically: JKS JCEKS PKCS12 BKS BCFKS UBER You must manually specify the other keystore types. A full list of keystore types can be found in the Java Cryptography Architecture Standard Algorithm Name Documentation for JDK 8 . Table A.62. key-store-realm Attributes Attribute Description key-store Reference to the keystore used to back this security realm. Table A.63. ldap-key-store Attributes Attribute Description alias-attribute The name of LDAP attribute where the item alias will be stored. certificate-attribute The name of LDAP attribute where the certificate will be stored. certificate-chain-attribute The name of LDAP attribute where the certificate chain will be stored. certificate-chain-encoding The encoding of the certificate chain. certificate-type The type of the certificate. dir-context The name of the dir-context which will be used to communication with LDAP server. filter-alias The LDAP filter for obtaining an item in the keystore by alias. filter-certificate The LDAP filter for obtaining an item in the keystore by certificate. filter-iterate The LDAP filter for iterating over all items of the keystore. key-attribute The name of LDAP attribute where the key will be stored. key-type The type of keystore that is stored in a serialized manner in the LDAP attribute. For example, JKS . A full list of keystore types can be found in the Java Cryptography Architecture Standard Algorithm Name Documentation for JDK 8 . new-item-template Configuration for item creation. This defines how the LDAP entry of newly created keystore item will look. search-path The path in LDAP where the keystore items will be searched. search-recursive If the LDAP search should be recursive. search-time-limit The time limit in milliseconds for obtaining keystore items from LDAP. Defaults to 10000 . Table A.64. ldap-key-store new-item-template Attributes Attribute Description new-item-attributes The LDAP attributes which will be set for newly created items. This takes a list of items with name and value pairs. new-item-path The path in LDAP where the newly created keystore items will be stored. 
new-item-rdn The name of LDAP RDN for the newly created items. Table A.65. ldap-realm Attributes Attribute Description allow-blank-password Whether this realm supports blank password direct verification. A blank password attempt will be rejected otherwise. dir-context The name of the dir-context which will be used to connect to the LDAP server. direct-verification If true this realm supports verification of credentials by directly connecting to LDAP as the account being authenticated; otherwise, the password is retrieved from the LDAP server and verified in JBoss EAP. If enabled, the JBoss EAP server must be able to obtain the plain user password from the client, which requires either the PLAIN SASL or BASIC HTTP mechanism be used for authentication. Defaults to false . identity-mapping The configuration options that define how principals are mapped to their corresponding entries in the underlying LDAP server. Table A.66. ldap-realm identity-mapping Attributes Attribute Description attribute-mapping List of attribute mappings defined for this resource. filter-name The LDAP filter for getting identity by name. iterator-filter The LDAP filter for iterating over identities of the realm. new-identity-attributes The list of attributes of newly created identities and is required for modifiability of the realm. This is a list of name and value pair objects. otp-credential-mapper The credential mapping for OTP credential. new-identity-parent-dn The DN of parent of newly created identities. Required for modifiability of the realm. rdn-identifier The RDN part of the principal's DN to be used to obtain the principal's name from an LDAP entry. This is also used when creating new identities. search-base-dn The base DN to search for identities. use-recursive-search If true identity search queries are recursive. Defaults to false . user-password-mapper The credential mapping for a credential similar to userPassword. x509-credential-mapper The configuration allowing to use LDAP as storage of X509 credentials. If none of the -from child attributes are defined, then this configuration will be ignored. If more than one -from child attribute is defined, then the user certificate must match all the defined criteria. Table A.67. ldap-realm identity-mapping attribute-mapping Attributes Attribute Description extract-rdn The RDN key to use as the value for an attribute, in case the value in its raw form is in X.500 format. filter The filter to use to obtain the values for a specific attribute. filter-base-dn The name of the context where the filter should be performed. from The name of the LDAP attribute to map to an identity attribute. If not defined, DN of entry is used. reference The name of LDAP attribute containing DN of entry to obtain value from. role-recursion Maximum depth for recursive role assignment. Use 0 to specify no recursion. Defaults to 0 . role-recursion-name Determine the LDAP attribute of role entry which will be a substitute for "{0}" in filter-name when searching roles of role. search-recursive If true attribute LDAP search queries are recursive. Defaults to true . to The name of the identity attribute mapped from a specific LDAP attribute. If not provided, the name of the attribute is the same as define in from . If the from is not defined too, value dn is used. Table A.68. ldap-realm identity-mapping user-password-mapper Attributes Attribute Description from The name of the LDAP attribute to map to an identity attribute. If not defined, DN of entry is used. 
verifiable If true password can be used to verify the user. Defaults to true . writable If true password can be changed. Defaults to false . Table A.69. ldap-realm identity-mapping otp-credential-mapper Attributes Attribute Description algorithm-from The name of the LDAP attribute of OTP algorithm. hash-from The name of the LDAP attribute of OTP hash function. seed-from The name of the LDAP attribute of OTP seed. sequence-from The name of the LDAP attribute of OTP sequence number. Table A.70. ldap-realm identity-mapping x509-credential-mapper Attributes Attribute Description certificate-from The name of the LDAP attribute to map to an encoded user certificate. If not defined, encoded certificate will not be checked. digest-algorithm The digest algorithm, which is the hash function, used to compute digest of the user certificate. Will be used only if digest-from has been defined. digest-from The name of the LDAP attribute to map to a user certificate digest. If not defined, certificate digest will not be checked. serial-number-from The name of the LDAP attribute to map to a serial number of user certificate. If not defined, serial number will not be checked. subject-dn-from The name of the LDAP attribute to map to a subject DN of user certificate. If not defined, subject DN will not be checked. Table A.71. logical-permission-mapper Attributes Attribute Description left Reference to the permission mapper to use to the left of the operation. logical-operation The logical operation to use to combine the permission mappers. Allowed values are and , or , xor , and unless . right Reference to the permission mapper to use to the right of the operation. Table A.72. logical-role-mapper Attributes Attribute Description left Reference to a role mapper to be used on the left side of the operation. logical-operation The logical operation to be performed on the role mapper mappings. Allowed values are: and , minus , or , and xor . right Reference to a role mapper to be used on the right side of the operation. Table A.73. mapped-regex-realm-mapper Attributes Attribute Description delegate-realm-mapper The realm mapper to delegate to if there is no match using the pattern. pattern The regular expression which must contain at least one capture group to extract the realm from the name. realm-map Mapping of realm name extracted using the regular expression to a defined realm name. Table A.74. mechanism-provider-filtering-sasl-server-factory Attributes Attribute Description enabling If true no provider loaded mechanisms are enabled unless matched by one of the filters. This defaults to true . filters The list of filters to apply when comparing the mechanisms from the providers. A filter matches when all of the specified values match the mechanism and provider pair. sasl-server-factory Reference to a SASL server factory to be wrapped by this definition. Table A.75. mechanism-provider-filtering-sasl-server-factory filters Attributes Attribute Description mechanism-name The name of the SASL mechanism this filter matches with. provider-name The name of the provider this filter matches. provider-version The version to use when comparing the provider's version. version-comparison The equality to use when evaluating the Provider's version. The allowed values are less-than and greater-than . The default value is less-than . Table A.76. online-certificate-status-protocol Attributes Attribute Description responder Override the OCSP Responder URI resolved from the certificate. 
responder-certificate Alias for responder certificate located in responder-keystore or trust-manager keystore if responder-keystore is not defined. responder-keystore Alternative keystore for responder certificate. responder-certificate must be defined. prefer-crls When both OCSP and CRL mechanisms are configured, OCSP mechanism is called first. When prefer-crls is set to true , the CRL mechanism is called first. Table A.77. permission-set permission Attributes Attribute Description action The action to pass to the permission as it is constructed. class-name The fully qualified class name of the permission. module The module to use to load the permission. target-name The target name to pass to the permission as it is constructed. Table A.78. periodic-rotating-file-audit-log Attributes Attribute Description autoflush Specifies if the output stream requires flushing after every audit event. If you do not define the attribute, the value of the synchronized is the default value. format Use SIMPLE for human readable text format, or JSON for storing individual events in JSON . path Defines the location of the log files. relative-to Optional attribute. Defines the location of the log files. suffix Optional attribute. Adds a date suffix to a rotated log. You must use the java.time.format.DateTimeFormatter format. For example .yyyy-MM-dd . synchronized Default value is true . Specifies that the file descriptor gets synchronized after every audit event. Table A.79. properties-realm Attributes Attribute Description groups-attribute The name of the attribute in the returned AuthorizationIdentity that should contain the group membership information for the identity. groups-properties The properties file containing the users and their groups. users-properties The properties file containing the users and their passwords. Table A.80. properties-realm users-properties Attributes Attribute Description digest-realm-name The default realm name to use for digested passwords if one is not discovered in the properties file. path The path to the file containing the users and their passwords. The file should contain realm name declaration. plain-text If true the passwords in properties file stored in plain text. If false they are pre-hashed, taking the form of HEX( MD5( username \":\" realm \":\" password))) . Defaults to false . relative-to The predefined path the path is relative to. Table A.81. properties-realm groups-properties Attributes Attribute Description path The path to the file containing the users and their groups. relative-to The predefined path the path is relative to. Table A.82. provider-http-server-mechanism-factory Attributes providers The providers to use to locate the factories. If not specified, the globally registered list of providers will be used. Table A.83. provider-loader Attributes Attribute Description argument An argument to be passed into the constructor as the Provider is instantiated. class-names The list of the fully qualified class names of providers to load. These are loaded after the service-loader discovered providers, and any duplicates will be skipped. configuration The key and value configuration to be passed to the provider to initialize it. module The name of the module to load the provider from. path The path of the file to use to initialize the providers. relative-to The base path of the configuration file. Table A.84. provider-sasl-server-factory Attributes Attribute Description providers The providers to use to locate the factories. 
If not specified, the globally registered list of providers will be used. Table A.85. regex-principal-transformer Attributes Attribute Description pattern The regular expression to use to locate the portion of the name to be replaced. replace-all If true all occurrences of the pattern matched are replaced. If false , only the first occurrence is replaced. Defaults to false . replacement The value to be used as the replacement. Table A.86. regex-role-mapper Attributes Attribute Description pattern The regular expression to use to match roles. You can use group capturing if you want to use a portion of the original role in the replacement. For example, to capture a string after a hyphen in roles such as "app-admin", "batch-admin", use the pattern .*-([a-z]*)$ . replacement The string to replace the match. You can use a fixed string or refer to captured groups from the regular expression specified in the pattern attribute. For example, in the pattern above, you can use $1 to refer to the first captured group - which is the string "admin", for the roles "app-admin" and "batch-admin". keep-non-mapped Set the value to true to preserve the roles that do not match the regular expression specified in the pattern attribute. Table A.87. regex-validating-principal-transformer Attributes Attribute Description match If true the name must match the given pattern to make validation successful. If false the name must not match the given pattern to make validation successful. This defaults to true . pattern The regular expression to use for the principal transformer. Table A.88. sasl-authentication-factory Attributes Attribute Description mechanism-configurations The list of mechanism-specific configurations. sasl-server-factory The SASL server factory to associate with this resource. security-domain The security domain to associate with this resource. Table A.89. sasl-authentication-factory mechanism-configurations Attributes Attribute Description credential-security-factory The security factory to use to obtain a credential as required by the mechanism. final-principal-transformer A final principal transformer to apply for this mechanism realm. host-name The host name this configuration applies to. mechanism-name This configuration will only apply where a mechanism with the name specified is used. If this attribute is omitted then this will match any mechanism name. mechanism-realm-configurations The list of definitions of the realm names as understood by the mechanism. protocol The protocol this configuration applies to. post-realm-principal-transformer A principal transformer to apply after the realm is selected. pre-realm-principal-transformer A principal transformer to apply before the realm is selected. realm-mapper The realm mapper to be used by the mechanism. Table A.90. sasl-authentication-factory mechanism-configurations mechanism-realm-configurations Attributes Attribute Description final-principal-transformer A final principal transformer to apply for this mechanism realm. post-realm-principal-transformer A principal transformer to apply after the realm is selected. pre-realm-principal-transformer A principal transformer to apply before the realm is selected. realm-mapper The realm mapper to be used by the mechanism. realm-name The name of the realm to be presented by the mechanism. Table A.91. secret-key-credential-store Attributes Attribute Description create Set the value to false if you do not want Elytron to create the credential store when it does not already exist. Defaults to true . 
default-alias The alias name for a key generated by default. The default value is key . key-size The size of a generated key. The default size is 256 bits. You can set the value to one of the following: 128 192 256 path The path to the credential store. populate If a credential store does not contain a default-alias , this attribute indicates whether Elytron should create one. The default is true . relative-to A reference to a previously defined path that the attribute path is relative to. Table A.92. server-ssl-context Attributes Attribute Description authentication-optional If true rejecting of the client certificate by the security domain will not prevent the connection. This allows a fall through to use other authentication mechanisms, such as form login, when the client certificate is rejected by security domain. This has an effect only when the security domain is set. This defaults to false . cipher-suite-filter The filter to apply to specify the enabled cipher suites. This filter takes a list of items delimited by colons, commas, or spaces. Each item may be an OpenSSL-style cipher suite name, a standard SSL/TLS cipher suite name, or a keyword such as TLSv1.2 or DES . A full list of keywords as well as additional details on creating a filter can be found in the Javadoc for the CipherSuiteSelector class. The default value is DEFAULT , which corresponds to all known cipher suites that do not have NULL encryption and excludes any cipher suites that have no authentication. final-principal-transformer A final principal transformer to apply for this mechanism realm. key-manager Reference to the key managers to use within the SSLContext . maximum-session-cache-size The maximum number of SSL/TLS sessions to be cached. need-client-auth If true a client certificate is required on SSL handshake. Connection without trusted client certificate will be rejected. This defaults to false . post-realm-principal-transformer A principal transformer to apply after the realm is selected. pre-realm-principal-transformer A principal transformer to apply before the realm is selected. protocols The enabled protocols. Allowed options are SSLv2 , SSLv3 , TLSv1 , TLSv1.1 , TLSv1.2 , TLSv1.3 . This defaults to enabling TLSv1 , TLSv1.1 , TLSv1.2 , and TLSv1.3 . Warning Red Hat recommends that SSLv2, SSLv3, and TLSv1.0 be explicitly disabled in favor of TLSv1.1 or TLSv1.2 in all affected packages. provider-name The name of the provider to use. If not specified, all providers from providers will be passed to the SSLContext . providers The name of the providers to obtain the Provider[] to use to load the SSLContext . realm-mapper The realm mapper to be used for SSL authentication. security-domain The security domain to use for authentication during SSL/TLS session establishment. session-timeout The timeout for SSL/TLS sessions. trust-manager Reference to the trust-manager to use within the SSLContext. use-cipher-suites-order If true the cipher suites order defined on the server will be used. If false the cipher suites order presented by the client will be used. Defaults to true . want-client-auth If true a client certificate will be requested, but not required, on SSL handshake. If a security domain is referenced and supports X509 evidence, this will be set to true automatically. This is ignored when need-client-auth is set. This defaults to false . wrap If true , the returned SSLEngine , SSLSocket , and SSLServerSocket instances will be wrapped to protect against further modification. This defaults to false . 
Note The realm mapper and principal transformer attributes for a server-ssl-context apply only for the SASL EXTERNAL mechanism, where the certificate is verified by the trust manager. HTTP CLIENT-CERT authentication settings are configured in an http-authentication-factory . Table A.93. service-loader-http-server-mechanism-factory Attributes Attribute Description module The module to use to obtain the class loader to load the factories. If not specified, the class loader to load the resource will be used instead. Table A.94. service-loader-sasl-server-factory Attributes Attribute Description module The module to use to obtain the class loader to load the factories. If not specified, the class loader to load the resource will be used instead. Table A.95. simple-permission-mapper Attributes Attribute Description mapping-mode The mapping mode that should be used in the event of multiple matches. Allowed values are and , or , xor , unless , and first . The default is first . permission-mappings The list of defined permission mappings. Table A.96. simple-permission-mapper permission-mappings Attributes Attribute Description permission-sets The permission sets to assign in the event of a match. Permission sets can be used to assign permissions to an identity. permission-sets can take the following attribute: permission-set A reference to a permission set. Important The permissions attribute is deprecated, and is replaced by permission-sets . principals The list of principals to compare when mapping permissions; if the identity's principal matches any one in the list, it is a match. roles The list of roles to compare when mapping permissions; if the identity is a member of any one in the list, it is a match. Table A.97. simple-regex-realm-mapper Attributes Attribute Description delegate-realm-mapper The realm mapper to delegate to if there is no match using the pattern. pattern The regular expression which must contain at least one capture group to extract the realm from the name. Table A.98. simple-role-decoder Attributes Attribute Description attribute The name of the attribute from the identity to map directly to roles. Table A.99. source-address-role-decoder Attributes Attribute Description pattern A regular expression that specifies the IP address of a client or the IP addresses of clients to match. source-address Specifies the IP address of the client. roles Provides the list of roles to assign to a user if the IP address of the client matches the values specified in the pattern attribute or the source-address attribute. Note You must specify at least one IP address in either the source-address attribute or the pattern attribute. Otherwise, you cannot make authorization decisions based on the IP address of a client. Table A.100. syslog-audit-log Attributes Attribute Description format The format that audit events should be recorded in. Supported values: JSON SIMPLE Default value: SIMPLE host-name The host name to be embedded into all events sent to the syslog server. port The listening port on the syslog server. reconnect-attempts The maximum number of times that Elytron will attempt to send successive messages to a syslog server before closing the connection. The value of this attribute is only valid when the transmission protocol used is UDP. Supported values: Any positive integer value. -1 indicates infinite reconnect attempts. Default value: 0 server-address IP address of the syslog server, or a name that can be resolved by Java's InetAddress.getByName() method. 
ssl-context The SSL context to use when connecting to the syslog server. This attribute is only required if transport is set to SSL_TCP . syslog-format The RFC format to be used for describing the audit event. Supported values: RFC3164 RFC5424 Default value: RFC5424 transport The transport layer protocol to use to connect to the syslog server. Supported values: SSL_TCP TCP UDP Default value: TCP Table A.101. File audit logger attributes Attribute Description autoflush Specifies if the output stream requires flushing after every audit event. If you do not define the attribute, the value of the synchronized is the default value. format Default value is SIMPLE . Use SIMPLE for human readable text format, or JSON for storing individual events in JSON . path Defines the location of the log files. relative-to Optional attribute. Defines the location of the log files. synchronized Default value is true . Specifies that the file descriptor gets synchronized after every audit event. Table A.102. Size rotating file audit logging attributes Attribute Description autoflush Specifies if the output stream requires flushing after every audit event. If you do not define the attribute, the value of the synchronized is the default value. format Default value is SIMPLE . Use SIMPLE for human readable text format, or JSON for storing individual events in JSON . max-backup-index The maximum number of files to back up when rotating. The default value is 1 . path Defines the location of the log files. relative-to Optional attribute. Defines the location of the log files. rotate-on-boot By default, Elytron does not create a new log file when you restart a server. Set this attribute to true to rotate the log on server restart. rotate-size The maximum size that the log file can reach before Elytron rotates the log. The default is 10m for 10 megabytes. You can also define the maximum size of the log with k, g, b, or t units. You can specify units in either uppercase or lowercase characters. suffix Optional attribute. Adds a date suffix to a rotated log. You must use the java.time.format.DateTimeFormatter format. For example .yyyy-MM-dd-HH . synchronized Default value is true . Specifies that the file descriptor gets synchronized after every audit event. Table A.103. token-realm Attributes Attribute Description jwt A token validator to be used in conjunction with a token-based realm that handles security tokens based on the JWT/JWS standard. oauth2-introspection A token validator to be used in conjunction with a token-based realm that handles OAuth2 Access Tokens and validates them using an endpoint compliant with the RFC-7662 OAuth2 Token Introspection specification. principal-claim The name of the claim that should be used to obtain the principal's name. The default is username . Table A.104. token-realm jwt Attributes Attribute Description audience A list of strings representing the audiences supported by this configuration. During validation JWT tokens must have an aud claim that contains one of the values defined here. certificate The name of the certificate with a public key to load from the keystore that is defined by the key-store attribute. client-ssl-context The SSL context to use for a remote JSON Web Key (JWK) . This enables you to use the URL from the jku (JSON Key URL) header parameter to fetch public keys for token verification. host-name-verification-policy A policy that defines how host names should be verified when using remote JSON Web Keys. 
You can set either of the following values for the attribute: ANY , which disables hostname verification. DEFAULT , which denies connections where a certificate mismatch occurs between a server name from the certificate and the connecting host. issuer A list of strings representing the issuers supported by this configuration. During validation JWT tokens must have an iss claim that contains one of the values defined here. key-store The keystore from which the certificate with a public key should be loaded. This attribute, along with the certificate attribute, can also be used as an alternative to the public-key . public-key A public key in PEM Format. During validation, if a public key is provided, the signature will be verified based on the key value provided by this attribute. Alternatively, you can define a key-store and a certificate to configure the public key. This alternative key is used to verify tokens without the kid (Key ID) claim. Table A.105. token-realm oauth2-introspection Attributes Attribute Description client-id The identifier of the client on the OAuth2 Authorization Server. client-secret The secret of the client. client-ssl-context The SSL context to be used if the introspection endpoint is using HTTPS. host-name-verification-policy A policy that defines how host names should be verified when using HTTPS. You can set either of the following values for the attribute: ANY , which disables hostname verification. DEFAULT , which denies connections where a certificate mismatch occurs between a server name from the certificate and the connecting host. introspection-url The URL of token introspection endpoint. Table A.106. trust-manager Attributes Attribute Description algorithm The name of the algorithm to use to create the underlying TrustManagerFactory . This is provided by the JDK. For example, a JDK that uses SunJSSE provides the PKIX and SunX509 algorithms. More details on SunJSSE can be found in the Java Secure Socket Extension (JSSE) Reference Guide . alias-filter A filter to apply to the aliases returned from the keystore. This can either be a comma-separated list of aliases to return or one of the following formats: ALL:-alias1:-alias2 NONE:+alias1:+alias2 certificate-revocation-list Enables the certificate revocation list that can be checked by a trust manager. The attributes of certificate-revocation-list are: maximum-cert-path - The maximum number of non-self-issued intermediate certificates that can exist in a certification path. The default value is 5 . (Deprecated. Use maximum-cert-path in trust-manager ). path - The path to the configuration file that is used to initialize the provider. relative-to - The base path of the certificate revocation list file. See Using a Certificate Revocation List for more information. key-store Reference to the key-store to use to initialize the underlying TrustManagerFactory . maximum-cert-path The maximum number of non-self-issued intermediate certificates that can exist in a certification path. The default value is 5 . This attribute has been moved to trust-manager from certificate-revocation-list inside trust-manager in JBoss EAP 7.3. For backward compatibility, the attribute is also present in certificate-revocation-list . Going forward, use maximum-cert-path in trust-manager . Note Define maximum-cert-path in either trust-manager or in certificate-revocation-list not in both. only-leaf-cert Check revocation status of only the leaf certificate. This is an optional attribute. The default values is false . 
provider-name The name of the provider to use to create the underlying TrustManagerFactory . providers Reference to obtain the Provider[] to use when creating the underlying TrustManagerFactory . soft-fail When set to true , certificates with an unknown revocation status are accepted. This is an optional attribute. The default value is false . Table A.107. x500-attribute-principal-decoder Attributes Attribute Description attribute-name The name of the X.500 attribute to map. This can also be defined using the oid attribute. convert When set to true , the principal decoder will attempt to convert a principal to a X500Principal , if it is not already of that type. If the conversion fails, the original value is used as the principal. joiner The joining string. The default value is a period ( . ). maximum-segments The maximum number of occurrences of the attribute to map. The default value is 2147483647 . oid The OID of the X.500 attribute to map. This can also be defined using the attribute-name attribute. required-attributes The list of attribute names of the attributes that must be present in the principal required-oids The list of OIDs of the attributes that must be present in the principal. reverse If true the attribute values will be processed and returned in reverse order. The default value is false . start-segment The starting occurrence of the attribute you want to map. This uses a zero-based index and the default value is 0 . Table A.108. x509-subject-alternative-name-evidence-decoder Attributes Attribute Description alt-name-type The subject alternative name type. Must be one of the following subject alternative name types: directoryName dNSName iPAddress registeredID rfc822Name uniformResourceIdentifier This is a required attribute. segment The 0-based occurrence of the subject alternative name to map. This attribute is used when there is more than one subject alternative name of the given type. The default value is 0 . A.2. Configure Your Environment to use the BouncyCastle Provider You can configure your JBoss EAP installation to use a BouncyCastle provider. The Bouncy Castle JARs are not provided by Red Hat, and must be obtained directly from Bouncy Castle. Important Java 8 must be used when the BouncyCastle providers are specified, as the BouncyCastle APIs are only certified up to Java 8. Include both BouncyCastle JARs, beginning with bc-fips and bctls-fips , on your JDK's classpath. For Java 8 this is accomplished by placing the JAR files in USDJAVA_HOME/lib/ext . Using either of the following methods, include the BouncyCastle providers in your Java security configuration file: A default configuration file, java.security , is provided in your JDK, and can be updated to include the BouncyCastle providers. This file is used if no other security configuration files are specified. See the JDK vendor's documentation for the location of this file. Define a custom Java security configuration file and reference it by adding the -Djava.security.properties== /path/to/ java.security.properties system property. When referenced using two equal signs the default policy is overwritten, and only the providers defined in the referenced file are used. When a single equal sign is used, as in -Djava.security.properties= /path/to/ java.security.properties , then the providers are appended to the default security file, preferring to use the file passed in the argument when keys are specified in both files. 
This option is useful when having multiple JVMs running on the same host that require different security settings. An example configuration file that defines these providers is seen below. Example: BouncyCastle Security Policy Important If the default configuration file is updated, then every other security.provider.X line in this file, for example security.provider.2 , must increase its value of X to ensure that this provider is given priority. Each provider must have a unique priority. Configure the elytron subsystem to exclusively use the BouncyCastle providers. By default, the system is configured to use both the elytron and openssl providers. Because it also includes a TLS implementation, it is recommended to disable the OpenSSL provider to ensure the TLS implementation from Bouncy Castle is used. Reload the server for the changes to take effect. A.3. SASL Authentication Mechanisms Reference A.3.1. Support Level for SASL Authentication Mechanisms Name Support Level Comments ANONYMOUS Supported DIGEST-SHA-512 Technology Preview Supported but name not currently IANA registered. DIGEST-SHA-256 Technology Preview Supported but name not currently IANA registered. DIGEST-SHA Technology Preview Supported but name not currently IANA registered. DIGEST-MD5 Supported EXTERNAL Supported GS2-KRB5 Supported GS2-KRB5-PLUS Supported GSSAPI Supported JBOSS-LOCAL-USER Supported Supported but name not currently IANA registered. OAUTHBEARER Supported OTP Not supported PLAIN Supported SCRAM-SHA-1 Supported SCRAM-SHA-1-PLUS Supported SCRAM-SHA-256 Supported SCRAM-SHA-256-PLUS Supported SCRAM-SHA-384 Supported SCRAM-SHA-384-PLUS Supported SCRAM-SHA-512 Supported SCRAM-SHA-512-PLUS Supported 9798-U-RSA-SHA1-ENC Not supported 9798-M-RSA-SHA1-ENC Not supported 9798-U-DSA-SHA1 Not supported 9798-M-DSA-SHA1 Not supported 9798-U-ECDSA-SHA1 Not supported 9798-M-ECDSA-SHA1 Not supported A.3.2. SASL Authentication Mechanism Properties You can see a list of standard Java SASL authentication mechanism properties in the Java documentation . Other JBoss EAP-specific SASL authentication mechanism properties are listed in the following tables. Table A.109. SASL Properties Used During SASL Mechanism Negotiation or Authentication Exchange Property Client / Server Description com.sun.security.sasl.digest.realm Server Used by some SASL mechanisms, including the DIGEST-MD5 algorithm supplied with most Oracle JDKs, to provide the list of possible server realms to the mechanism. Each realm name must be separated by a space character ( U+0020 ). com.sun.security.sasl.digest.utf8 Client, server Used by some SASL mechanisms, including the DIGEST-MD5 algorithm supplied with most Oracle JDKs, to indicate that information exchange should take place using UTF-8 character encoding instead of the default Latin-1/ISO-8859-1 encoding. The default value is true . wildfly.sasl.authentication-timeout Server The amount of time, in seconds, after which a server should terminate an authentication attempt. The default value is 150 seconds. wildfly.sasl.channel-binding-required Client, server Indicates that a mechanism which supports channel binding is required. A value of true indicates that channel binding is required. Any other value, or lack of this property, indicates that channel binding is not required. wildfly.sasl.digest.alternative_protocols Server Supplies a separated list of alternative protocols that are acceptable in responses received from the client. The list can be space, comma, tab, or new line separated. 
wildfly.sasl.gssapi.client.delegate-credential Client Specifies if the GSSAPI mechanism supports credential delegation. If set to true , the credential is delegated from the client to the server. This property defaults to true if a GSSCredential is provided using the javax.security.sasl.credentials property. Otherwise, the default value is false . wildfly.sasl.gs2.client.delegate-credential Client Specifies if the GS2 mechanism supports credential delegation. If set to true , the credential is delegated from the client to the server. This property defaults to true if a GSSCredential is provided using a CredentialCallback . Otherwise, the default value is false . wildfly.sasl.local-user.challenge-path Server Specifies the directory in which the server generates the challenge file. The default value is the java.io.tmpdir system property. wildfly.sasl.local-user.default-user Server The user name to use for silent authentication. wildfly.sasl.local-user.quiet-auth Client Enables silent authentication for a local user. The default value is true . Note that the Jakarta Enterprise Beans client and naming client disables silent local authentication if this property is not explicitly defined and a callback handler or user name was specified in the client configuration. wildfly.sasl.local-user.use-secure-random Server Specifies whether the server uses a secure random number generator when creating the challenge. The default value is true . wildfly.sasl.mechanism-query-all Client, server Indicates that all possible supported mechanism names should be returned, regardless of the presence or absence of any other properties. This property is only effective on calls to SaslServerFactory#getMechanismNames(Map) or SaslClientFactory#getMechanismNames(Map) for Elytron-provided SASL factories. wildfly.sasl.otp.alternate-dictionary Client Provides an alternate dictionary to the OTP SASL mechanism. Each dictionary word must be separated by a space character ( U+0020 ). wildfly.sasl.relax-compliance Server The specifications for the SASL mechanisms mandate certain behavior and verification of that behavior at the opposite side of the connection. When interacting with other SASL mechanism implementations, some of these requirements are interpreted loosely. If this property is set to true , checking is relaxed where differences in specification interpretation has been identified. The default value is false . wildfly.sasl.scram.min-iteration-count Client, server The minimum iteration count to use for SCRAM. The default value is 4096 . wildfly.sasl.scram.max-iteration-count Client, server The maximum iteration count to use for SCRAM. The default value is 32786 . wildfly.sasl.secure-rng Client, server The algorithm name of a SecureRandom implementation to use. Using this property can improve security, at the cost of performance. wildfly.security.sasl.digest.ciphers Client, server Comma-separated list of supported ciphers that directly limits the set of supported ciphers for SASL mechanisms. Table A.110. SASL Properties Used After Authentication Property Client / Server Description wildfly.sasl.principal Client Contains the negotiated client principal after a successful SASL client-side authentication. wildfly.sasl.security-identity Server Contains the negotiated security identity after a successful SASL server-side authentication. A.4. Security Authorization Arguments Arguments to the security commands in JBoss EAP are determined by the defined mechanism. 
Each mechanism requires different properties, and it is recommended to use tab completion to examine the various requirements for the defined mechanism. Table A.111. Universal Arguments Attribute Description --mechanism Specifies the mechanism to enable or disable. A list of supported SASL mechanisms is available at Support Level for SASL Authentication Mechanisms , and the BASIC , CLIENT_CERT , DIGEST , DIGEST-SHA-256 , and FORM HTTP authentication mechanisms are currently supported. --no-reload If specified, then the server is not reloaded after the security command is completed. Mechanism Specific Attributes The following attributes are only eligible for specific mechanisms. They are grouped below based on their function. Table A.112. key-store Realm Attribute Description --key-store-name The name of the truststore as an existing keystore. This must be specified if --key-store-realm-name is not used for the EXTERNAL SASL mechanism or the CLIENT_CERT HTTP mechanism. --key-store-realm-name The name of the truststore as an existing keystore realm. This must be specified if --key-store-name is not used for the EXTERNAL SASL mechanism or the CLIENT_CERT HTTP mechanism. --roles An optional argument that defines a comma separated list of roles associated with the current identity. If no existing role mapper contains the specified list of roles, then a role mapper will be generated and assigned. Table A.113. file-system Realm Attribute Description --exposed-realm The realm exposed to the user. --file-system-realm-name The name of the filesystem realm. --user-role-decoder The name of the role decoder used to extract the roles from the user's repository. This attribute is only used if --file-system-realm-name is specified. Table A.114. Properties Realm Attribute Description --exposed-realm The realm exposed to the user. This value must match the realm-name defined in the user's properties file. --groups-properties-file A path to the properties file that contains the groups attribute for management operations, or the roles for the undertow server. --properties-realm-name The name of an existing properties realm. --relative-to Adjusts the paths of --group-properties-file and --users-properties-file to be relative to a system property. --users-properties-file A path to the properties file that contains the user details. Table A.115. Miscellaneous Properties Attribute Description --management-interface The management interface to configure for management authentication commands. This defaults to the http-interface . --new-auth-factory-name Used to specify a name for the authentication factory. If not defined, a name is automatically created. --new-realm-name Used to specify a name for the properties file realm resource. If not defined, a name is automatically created. --new-security-domain Used to specify a name for the security domain. If not defined, a name is automatically created. --super-user Configures a local user with super-user permissions. Usable with the JBOSS-LOCAL-USER mechanism. A.5. Elytron Client Side One Way Example After configuring a server SSL context, it is important to test the configuration if possible. An Elytron client SSL context can be placed in a configuration file and then executed from the management CLI, allowing functional testing of the server configuration. These steps assume that the server-side configuration is completed, and the server has been reloaded if necessary. 
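For readers who want a concrete picture of that assumed server-side configuration, the following is a minimal sketch of enabling one-way SSL/TLS on the management interface with the management CLI. The keystore path, the clear-text passwords, and the resource names exampleServerKS , exampleServerKM , and exampleServerSSC are placeholders chosen for illustration and are not names used elsewhere in this guide:

/subsystem=elytron/key-store=exampleServerKS:add(path=server.keystore.jks, relative-to=jboss.server.config.dir, type=JKS, credential-reference={clear-text=secret})
/subsystem=elytron/key-manager=exampleServerKM:add(key-store=exampleServerKS, credential-reference={clear-text=secret})
/subsystem=elytron/server-ssl-context=exampleServerSSC:add(key-manager=exampleServerKM, protocols=["TLSv1.2"])
/core-service=management/management-interface=http-interface:write-attribute(name=ssl-context, value=exampleServerSSC)
/core-service=management/management-interface=http-interface:write-attribute(name=secure-socket-binding, value=management-https)
reload

With a server configuration of that shape in place, the client-side steps are as follows.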
If the server keystore already exists, then proceed to the next step; otherwise, create the server keystore. If the server certificate has already been exported, then proceed to the next step; otherwise, export the server certificate. Import the server certificate into the client's truststore. Define the client-side SSL context inside of example-security.xml . This configuration file contains an Elytron authentication-client that defines the authentication and SSL configuration for outbound connections. The following file demonstrates defining a client SSL context and keystore. <?xml version="1.0" encoding="UTF-8"?> <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <key-stores> <key-store name="clientStore" type="jks" > <file name="/path/to/client.truststore.jks"/> <key-store-clear-password password="secret" /> </key-store> </key-stores> <ssl-contexts> <ssl-context name="client-SSL-context"> <trust-store key-store-name="clientStore" /> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context="client-SSL-context" /> </ssl-context-rules> </authentication-client> </configuration> Using the management CLI, reference the newly created file and attempt to access the server. The following command accesses the management interface and executes the whoami command. A.6. Elytron Client Side Two Way Example After configuring a server SSL context, it is important to test the configuration if possible. An Elytron client SSL context can be placed in a configuration file and then executed from the management CLI, allowing functional testing of the server configuration. These steps assume that the server-side configuration is completed, and the server has been reloaded if necessary. If the server and client keystores already exist, then proceed to the next step; otherwise, create the server and client keystores. If the server and client certificates have already been exported, then proceed to the next step; otherwise, export the server and client certificates. Import the server certificate into the client's truststore. Import the client certificate into the server's truststore. Define the client-side SSL context inside of example-security.xml . This configuration file contains an Elytron authentication-client that defines the authentication and SSL configuration for outbound connections. The following file demonstrates defining a client SSL context and keystore, with both the truststore and the client keystore nested inside the key-stores element. <?xml version="1.0" encoding="UTF-8"?> <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <key-stores> <key-store name="clientStore" type="jks" > <file name="/path/to/client.truststore.jks"/> <key-store-clear-password password="secret" /> </key-store> <key-store name="clientKeyStore" type="jks" > <file name="/path/to/client.keystore.jks"/> <key-store-clear-password password="secret" /> </key-store> </key-stores> <ssl-contexts> <ssl-context name="client-SSL-context"> <trust-store key-store-name="clientStore" /> <key-store-ssl-certificate key-store-name="clientKeyStore" alias="client"> <key-store-clear-password password="secret" /> </key-store-ssl-certificate> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context="client-SSL-context" /> </ssl-context-rules> </authentication-client> </configuration> Using the management CLI, reference the newly created file and attempt to access the server. The following command accesses the management interface and executes the whoami command. Revised on 2024-01-17 05:25:10 UTC
[ "We can override the values in the JRE_HOME/lib/security/java.security file here. If both properties files specify values for the same key, the value from the command-line properties file is selected, as it is the last one loaded. We can reorder and change security providers in this file. security.provider.1=org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider security.provider.2=org.bouncycastle.jsse.provider.BouncyCastleJsseProvider fips:BCFIPS security.provider.3=sun.security.provider.Sun security.provider.4=com.sun.crypto.provider.SunJCE This is a comma-separated list of algorithm and/or algorithm:provider entries. # securerandom.strongAlgorithms=DEFAULT:BCFIPS", "/subsystem=elytron:write-attribute(name=final-providers,value=elytron)", "reload", "keytool -genkeypair -alias localhost -keyalg RSA -keysize 1024 -validity 365 -keystore server.keystore.jks -dname \"CN=localhost\" -keypass secret -storepass secret", "keytool -exportcert -keystore server.keystore.jks -alias localhost -keypass secret -storepass secret -file server.cer", "keytool -importcert -keystore client.truststore.jks -storepass secret -alias localhost -trustcacerts -file server.cer", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <key-stores> <key-store name=\"clientStore\" type=\"jks\" > <file name=\"/path/to/client.truststore.jks\"/> <key-store-clear-password password=\"secret\" /> </key-store> </key-stores> <ssl-contexts> <ssl-context name=\"client-SSL-context\"> <trust-store key-store-name=\"clientStore\" /> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context=\"client-SSL-context\" /> </ssl-context-rules> </authentication-client> </configuration>", "EAP_HOME /bin/jboss-cli.sh -c --controller=remote+https://127.0.0.1:9993 -Dwildfly.config.url=/path/to/example-security.xml :whoami", "keytool -genkeypair -alias localhost -keyalg RSA -keysize 1024 -validity 365 -keystore server.keystore.jks -dname \"CN=localhost\" -keypass secret -storepass secret keytool -genkeypair -alias client -keyalg RSA -keysize 1024 -validity 365 -keystore client.keystore.jks -dname \"CN=client\" -keypass secret -storepass secret", "keytool -exportcert -keystore server.keystore.jks -alias localhost -keypass secret -storepass secret -file server.cer keytool -exportcert -keystore client.keystore.jks -alias client -keypass secret -storepass secret -file client.cer", "keytool -importcert -keystore client.truststore.jks -storepass secret -alias localhost -trustcacerts -file server.cer", "keytool -importcert -keystore server.truststore.jks -storepass secret -alias client -trustcacerts -file client.cer", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <key-stores> <key-store name=\"clientStore\" type=\"jks\" > <file name=\"/path/to/client.truststore.jks\"/> <key-store-clear-password password=\"secret\" /> </key-store> </key-stores> <key-store name=\"clientKeyStore\" type=\"jks\" > <file name=\"/path/to/client.keystore.jks\"/> <key-store-clear-password password=\"secret\" /> </key-store> <ssl-contexts> <ssl-context name=\"client-SSL-context\"> <trust-store key-store-name=\"clientStore\" /> <key-store-ssl-certificate key-store-name=\"clientKeyStore\" alias=\"client\"> <key-store-clear-password password=\"secret\" /> </key-store-ssl-certificate> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context=\"client-SSL-context\" /> </ssl-context-rules> </authentication-client> 
</configuration>", "EAP_HOME /bin/jboss-cli.sh -c --controller=remote+https://127.0.0.1:9993 -Dwildfly.config.url=/path/to/example-security.xml :whoami" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_configure_server_security/reference_material
Chapter 2. New features
Chapter 2. New features This section highlights new features in Red Hat Developer Hub 1.2. 2.1. Backstage version update Red Hat Developer Hub is now based on the upstream Backstage project v1.26.5. 2.2. Telemetry With this update, you can use the telemetry data collection feature, which is enabled by default. Analyzing the collected data helps in improving your experience with Red Hat Developer Hub. You can disable this feature based on your needs. For more information, see the Disabling telemetry data collection in RHDH section. 2.3. Audit logging Administrators can view details about application changes, including the name and role of the user who made the change and the time that the change was made. Audit log data is captured by the RBAC plugin and scaffolder actions by default. Administrators can now use the audit logs to view changes to the catalog database. Tracking changes that add, remove, or update data in the catalog database helps ensure the accountability and transparency of actions. 2.4. RBAC conditional policies You can now use RBAC conditional policies in Red Hat Developer Hub, enabling access control based on dynamic conditions. These conditions act as content filters for Developer Hub resources that are managed by the RBAC plugin. You can specify the conditional policies for the Keycloak and Quay Actions plugins. Also, you must consider reviewing your security needs for components that do not have RBAC controls. For more information, see Role-Based Access Control (RBAC) in Red Hat Developer Hub . 2.5. RBAC permissions for OCM and Topology plugins Basic permissions for OCM and Topology plugins are now added to the Red Hat Developer Hub. You must consider reviewing your security needs for components that do not have RBAC controls. 2.6. Support for corporate proxy With this update, you can run the RHDH application behind a corporate proxy by setting the HTTP_PROXY or HTTPS_PROXY environment variable. Also, you can set the NO_PROXY environment variable to exclude certain domains from proxying. For more information, see the Running the RHDH application behind a corporate proxy section. 2.7. Support for external PostgreSQL databases With this update, you can configure and use external PostgreSQL databases in Red Hat Developer Hub. Based on your needs, you can use a PostgreSQL certificate file to configure an external PostgreSQL instance using the Operator or Helm Chart. For more information, see the Configuring external PostgreSQL databases section. You can configure an RHDH instance with a Transport Layer Security (TLS) connection in a Kubernetes cluster, such as an Azure Red Hat OpenShift (ARO) cluster, any cluster from a supported cloud provider, or your own cluster with proper configuration. For more information, see the Configuring an RHDH instance with a TLS connection in Kubernetes . 2.8. 
New plugins included in Red Hat Developer Hub 1.2 The following additional plugins are included in Red Hat Developer Hub 1.2: HTTP Request action - @roadiehq/scaffolder-backend-module-http-request Microsoft Azure repository actions for the scaffolder-backend - @parfuemerie-douglas/scaffolder-backend-module-azure-repositories Catalog backend module for GitLab organizational data - @backstage/plugin-catalog-backend-module-gitlab-org Catalog backend module for scaffolder relation catalog processor - @janus-idp/backstage-plugin-catalog-backend-module-scaffolder-relation-processor A second ArgoCD frontend plugin - @janus-idp/backstage-plugin-argocd For a comprehensive list of supported dynamic plugins, see the Preinstalled dynamic plugins . 2.9. Theme updates in Red Hat Developer Hub With this update, theme configurations are enhanced to change the look and feel of different UI components so that they almost resemble the theme that is usually used in designing Red Hat applications. You might notice changes in UI components, such as buttons, tabs, sidebars, cards, and tables along with some changes in background color and font used on the RHDH pages. You can update the app-config.yaml file to change the look of multiple Developer Hub theme components for enhanced customization. 2.10. scaffolderFieldExtensions configuration option You can now use the scaffolderFieldExtensions configuration option in a dynamic plugin's front-end configuration. The scaffolderFieldExtensions option allows a dynamic plugin to specify one or more exported components to be provided to the scaffolder plugin as field extensions. These scaffolder field extensions provide custom form field components for the software template wizard. 2.11. Enhancement to ConfigMap or Secret configuration In versions, updating ConfigMaps or Secrets specified in Backstage.spec.Application required recreating the Pod to apply changes. Beginning with version 1.2, this process is automated. 2.12. Ability to configure learning paths You can now configure Learning Paths in Developer Hub to create a dynamic experience tailored to your specific learning needs. 2.13. Plugin version upgrades in Red Hat Developer Hub 1.2.2 In Red Hat Developer Hub 1.2.2, the following plugin versions are upgraded as follows: Plugin Version in 1.2.0 Version in 1.2.2 @janus-idp/backstage-plugin-3scale-backend 1.5.13 1.5.15 @janus-idp/backstage-plugin-aap-backend 1.6.13 1.6.15 @janus-idp/backstage-plugin-acr 1.4.11 1.4.13 @janus-idp/backstage-plugin-analytics-provider-segment 1.4.7 1.4.9 @janus-idp/backstage-plugin-argocd 1.1.6 1.2.3 @janus-idp/backstage-plugin-jfrog-artifactory 1.4.9 1.4.11 @janus-idp/backstage-plugin-keycloak-backend 1.9.10 1.9.12 @janus-idp/backstage-plugin-nexus-repository-manager 1.6.8 1.6.10 @janus-idp/backstage-plugin-ocm 4.1.6 4.1.8 @janus-idp/backstage-plugin-ocm-backend 4.0.6 4.0.8 @janus-idp/backstage-plugin-quay 1.7.6 1.7.8 @janus-idp/backstage-scaffolder-backend-module-quay 1.4.10 1.4.12 @janus-idp/backstage-plugin-rbac 1.20.11 1.23.2 @janus-idp/backstage-scaffolder-backend-module-regex 1.4.10 1.4.12 @janus-idp/backstage-plugin-catalog-backend-module-scaffolder-relation-processor 1.0.1 1.0.3 @janus-idp/backstage-scaffolder-backend-module-servicenow 1.4.12 1.4.14 @janus-idp/backstage-scaffolder-backend-module-sonarqube 1.4.10 1.4.12 @janus-idp/backstage-plugin-tekton 3.7.5 3.7.7 @janus-idp/backstage-plugin-topology 1.21.7 1.21.10 @janus-idp/backstage-plugin-rbac 1.23.2 1.24.1
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/release_notes_for_red_hat_developer_hub_1.2/con-relnotes-notable-features_release-notes-rhdh
Chapter 2. OpenShift CLI (oc)
Chapter 2. OpenShift CLI (oc) 2.1. Getting started with the OpenShift CLI 2.1.1. About the OpenShift CLI With the OpenShift CLI ( oc ), you can create applications and manage Red Hat OpenShift Service on AWS (ROSA) projects from a terminal. The OpenShift CLI is ideal in the following situations: Working directly with project source code Scripting ROSA operations Managing projects while restricted by bandwidth resources and the web console is unavailable 2.1.2. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) either by downloading the binary or by using an RPM. 2.1.2.1. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with ROSA from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in ROSA. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the Red Hat OpenShift Service on AWS downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the Red Hat OpenShift Service on AWS downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the Red Hat OpenShift Service on AWS downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.1.2.2. Installing the OpenShift CLI by using the web console You can install the OpenShift CLI ( oc ) to interact with Red Hat OpenShift Service on AWS (ROSA) from a web console. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in ROSA. Download and install the new version of oc . 2.1.2.2.1. 
Installing the OpenShift CLI on Linux using the web console You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Download the latest version of the oc CLI for your operating system from the Downloads page on OpenShift Cluster Manager. Extract the oc binary file from the downloaded archive. USD tar xvf <file> Move the oc binary to a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.2.2. Installing the OpenShift CLI on Windows using the web console You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Download the latest version of the oc CLI for your operating system from the Downloads page on OpenShift Cluster Manager. Extract the oc binary file from the downloaded archive. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 2.1.2.2.3. Installing the OpenShift CLI on macOS using the web console You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Download the latest version of the oc CLI for your operating system from the Downloads page on OpenShift Cluster Manager. Extract the oc binary file from the downloaded archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.3. Installing the OpenShift CLI by using an RPM For Red Hat Enterprise Linux (RHEL), you can install the OpenShift CLI ( oc ) as an RPM if you have an active Red Hat OpenShift Service on AWS (ROSA) subscription on your Red Hat account. Important You must install oc for RHEL 9 by downloading the binary. Installing oc by using an RPM package is not supported on Red Hat Enterprise Linux (RHEL) 9. Prerequisites Must have root or sudo privileges. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for a ROSA subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by ROSA. # subscription-manager repos --enable="rhocp-4-for-rhel-8-x86_64-rpms" Install the openshift-clients package: # yum install openshift-clients Verification Verify your installation by using an oc command: USD oc <command> 2.1.2.4. Installing the OpenShift CLI by using Homebrew For macOS, you can install the OpenShift CLI ( oc ) by using the Homebrew package manager. Prerequisites You must have Homebrew ( brew ) installed. Procedure Install the openshift-cli package by running the following command: USD brew install openshift-cli Verification Verify your installation by using an oc command: USD oc <command> 2.1.3. Logging in to the OpenShift CLI You can log in to the OpenShift CLI ( oc ) to access and manage your cluster. Prerequisites You must have access to a ROSA cluster. The OpenShift CLI ( oc ) is installed. 
Note To access a cluster that is accessible only over an HTTP proxy server, you can set the HTTP_PROXY , HTTPS_PROXY and NO_PROXY variables. These environment variables are respected by the oc CLI so that all communication with the cluster goes through the HTTP proxy. Authentication headers are sent only when using HTTPS transport. Procedure Enter the oc login command and pass in a user name: USD oc login -u user1 When prompted, enter the required information: Example output Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started. 1 Enter the ROSA server URL. 2 Enter whether to use insecure connections. 3 Enter the user's password. Note If you are logged in to the web console, you can generate an oc login command that includes your token and server information. You can use the command to log in to the OpenShift CLI without the interactive prompts. To generate the command, select Copy login command from the username drop-down menu at the top right of the web console. You can now create a project or issue other commands for managing your cluster. 2.1.4. Logging in to the OpenShift CLI using a web browser You can log in to the OpenShift CLI ( oc ) with the help of a web browser to access and manage your cluster. This allows users to avoid inserting their access token into the command line. Warning Logging in to the CLI through the web browser runs a server on localhost with HTTP, not HTTPS; use with caution on multi-user workstations. Prerequisites You must have access to an Red Hat OpenShift Service on AWS cluster. You must have installed the OpenShift CLI ( oc ). You must have a browser installed. Procedure Enter the oc login command with the --web flag: USD oc login <cluster_url> --web 1 1 Optionally, you can specify the server URL and callback port. For example, oc login <cluster_url> --web --callback-port 8280 localhost:8443 . The web browser opens automatically. If it does not, click the link in the command output. If you do not specify the Red Hat OpenShift Service on AWS server oc tries to open the web console of the cluster specified in the current oc configuration file. If no oc configuration exists, oc prompts interactively for the server URL. Example output Opening login URL in the default browser: https://openshift.example.com Opening in existing browser session. If more than one identity provider is available, select your choice from the options provided. Enter your username and password into the corresponding browser fields. After you are logged in, the browser displays the text access token received successfully; please return to your terminal . Check the CLI for a login confirmation. Example output Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Note The web console defaults to the profile used in the session. To switch between Administrator and Developer profiles, log out of the Red Hat OpenShift Service on AWS web console and clear the cache. You can now create a project or issue other commands for managing your cluster. 
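If your cluster is reachable only through an HTTP proxy, as noted earlier in this section, you can export the proxy variables in your shell before logging in. The following is a minimal sketch; the proxy host, port, excluded domains, and cluster URL are placeholder values rather than addresses from a real environment:

export HTTPS_PROXY=http://proxy.example.com:3128
export HTTP_PROXY=http://proxy.example.com:3128
export NO_PROXY=localhost,127.0.0.1,.svc,.cluster.local
oc login -u user1 https://openshift.example.com:6443

Because the oc CLI respects these variables, all subsequent communication with the cluster goes through the proxy without further configuration.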
2.1.5. Using the OpenShift CLI Review the following sections to learn how to complete common tasks using the CLI. 2.1.5.1. Creating a project Use the oc new-project command to create a new project. USD oc new-project my-project Example output Now using project "my-project" on server "https://openshift.example.com:6443". 2.1.5.2. Creating a new app Use the oc new-app command to create a new application. USD oc new-app https://github.com/sclorg/cakephp-ex Example output --> Found image 40de956 (9 days old) in imagestream "openshift/php" under tag "7.2" for "php" ... Run 'oc status' to view your app. 2.1.5.3. Viewing pods Use the oc get pods command to view the pods for the current project. Note When you run oc inside a pod and do not specify a namespace, the namespace of the pod is used by default. USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none> 2.1.5.4. Viewing pod logs Use the oc logs command to view logs for a particular pod. USD oc logs cakephp-ex-1-deploy Example output --> Scaling cakephp-ex-1 to 1 --> Success 2.1.5.5. Viewing the current project Use the oc project command to view the current project. USD oc project Example output Using project "my-project" on server "https://openshift.example.com:6443". 2.1.5.6. Viewing the status for the current project Use the oc status command to view information about the current project, such as services, deployments, and build configs. USD oc status Example output In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details. 2.1.5.7. Listing supported API resources Use the oc api-resources command to view the list of supported API resources on the server. USD oc api-resources Example output NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap ... 2.1.6. Getting help You can get help with CLI commands and ROSA resources in the following ways: Use oc help to get a list and description of all available CLI commands: Example: Get general help for the CLI USD oc help Example output OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application ... Use the --help flag to get help about a specific CLI command: Example: Get help for the oc create command USD oc create --help Example output Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags] ... Use the oc explain command to view the description and fields for a particular resource: Example: View documentation for the Pod resource USD oc explain pods Example output KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. 
This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources ... 2.1.7. Logging out of the OpenShift CLI You can log out the OpenShift CLI to end your current session. Use the oc logout command. USD oc logout Example output Logged "user1" out on "https://openshift.example.com" This deletes the saved authentication token from the server and removes it from your configuration file. 2.2. Configuring the OpenShift CLI 2.2.1. Enabling tab completion You can enable tab completion for the Bash or Zsh shells. 2.2.1.1. Enabling tab completion for Bash After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Bash shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. You must have the package bash-completion installed. Procedure Save the Bash completion code to a file: USD oc completion bash > oc_bash_completion Copy the file to /etc/bash_completion.d/ : USD sudo cp oc_bash_completion /etc/bash_completion.d/ You can also save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal. 2.2.1.2. Enabling tab completion for Zsh After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Zsh shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. Procedure To add tab completion for oc to your .zshrc file, run the following command: USD cat >>~/.zshrc<<EOF autoload -Uz compinit compinit if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF Tab completion is enabled when you open a new terminal. 2.3. Usage of oc and kubectl commands The Kubernetes command-line interface (CLI), kubectl , can be used to run commands against a Kubernetes cluster. Because Red Hat OpenShift Service on AWS (ROSA) is a certified Kubernetes distribution, you can use the supported kubectl binaries that ship with ROSA , or you can gain extended functionality by using the oc binary. 2.3.1. The oc binary The oc binary offers the same capabilities as the kubectl binary, but it extends to natively support additional ROSA features, including: Full support for ROSA resources Resources such as DeploymentConfig , BuildConfig , Route , ImageStream , and ImageStreamTag objects are specific to ROSA distributions, and build upon standard Kubernetes primitives. Authentication Additional commands The additional command oc new-app , for example, makes it easier to get new applications started using existing source code or pre-built images. Similarly, the additional command oc new-project makes it easier to start a project that you can switch to as your default. Important If you installed an earlier version of the oc binary, you cannot use it to complete all of the commands in ROSA . If you want the latest features, you must download and install the latest version of the oc binary corresponding to your ROSA server version. 
Non-security API changes will involve, at minimum, two minor releases (4.1 to 4.2 to 4.3, for example) to allow older oc binaries to update. Using new capabilities might require newer oc binaries. A 4.3 server might have additional capabilities that a 4.2 oc binary cannot use and a 4.3 oc binary might have additional capabilities that are unsupported by a 4.2 server. Table 2.1. Compatibility Matrix, where N is a number greater than or equal to 1: an X.Y oc client with an X.Y server is fully compatible. An X.Y oc client with an X.Y+N server might not be able to access server features. An X.Y+N oc client with an X.Y server might provide options and features that might not be compatible with the accessed server. 2.3.2. The kubectl binary The kubectl binary is provided as a means to support existing workflows and scripts for new ROSA users coming from a standard Kubernetes environment, or for those who prefer to use the kubectl CLI. Existing users of kubectl can continue to use the binary to interact with Kubernetes primitives, with no changes required to the ROSA cluster. You can install the supported kubectl binary by following the steps to Install the OpenShift CLI . The kubectl binary is included in the archive if you download the binary, or is installed when you install the CLI by using an RPM. For more information, see the kubectl documentation . 2.4. Managing CLI profiles A CLI configuration file allows you to configure different profiles, or contexts, for use with the CLI tools overview . A context consists of the Red Hat OpenShift Service on AWS (ROSA) server information associated with a nickname . 2.4.1. About switches between CLI profiles Contexts allow you to easily switch between multiple users across multiple ROSA servers, or clusters, when using CLI operations. Nicknames make managing CLI configurations easier by providing short-hand references to contexts, user credentials, and cluster details. After a user logs in with the oc CLI for the first time, ROSA creates a ~/.kube/config file if one does not already exist. As more authentication and connection details are provided to the CLI, either automatically during an oc login operation or by manually configuring CLI profiles, the updated information is stored in the configuration file: CLI config file apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1.example.com:8443/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k 1 The clusters section defines connection details for ROSA clusters, including the address for their master server. In this example, one cluster is nicknamed openshift1.example.com:8443 and another is nicknamed openshift2.example.com:8443 .
2 This contexts section defines two contexts: one nicknamed alice-project/openshift1.example.com:8443/alice , using the alice-project project, openshift1.example.com:8443 cluster, and alice user, and another nicknamed joe-project/openshift1.example.com:8443/alice , using the joe-project project, openshift1.example.com:8443 cluster and alice user. 3 The current-context parameter shows that the joe-project/openshift1.example.com:8443/alice context is currently in use, allowing the alice user to work in the joe-project project on the openshift1.example.com:8443 cluster. 4 The users section defines user credentials. In this example, the user nickname alice/openshift1.example.com:8443 uses an access token. The CLI can support multiple configuration files which are loaded at runtime and merged together along with any override options specified from the command line. After you are logged in, you can use the oc status or oc project command to verify your current working environment: Verify the current working environment USD oc status Example output oc status In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. You can use 'oc get all' to see lists of each of the types described in this example. List the current project USD oc project Example output Using project "joe-project" from context named "joe-project/openshift1.example.com:8443/alice" on server "https://openshift1.example.com:8443". You can run the oc login command again and supply the required information during the interactive process, to log in using any other combination of user credentials and cluster details. A context is constructed based on the supplied information if one does not already exist. If you are already logged in and want to switch to another project the current user already has access to, use the oc project command and enter the name of the project: USD oc project alice-project Example output Now using project "alice-project" on server "https://openshift1.example.com:8443". At any time, you can use the oc config view command to view your current CLI configuration, as seen in the output. Additional CLI configuration commands are also available for more advanced usage. Note If you have access to administrator credentials but are no longer logged in as the default system user system:admin , you can log back in as this user at any time as long as the credentials are still present in your CLI config file. The following command logs in and switches to the default project: USD oc login -u system:admin -n default 2.4.2. Manual configuration of CLI profiles Note This section covers more advanced usage of CLI configurations. In most situations, you can use the oc login and oc project commands to log in and switch between contexts and projects. If you want to manually configure your CLI config files, you can use the oc config command instead of directly modifying the files. The oc config command includes a number of helpful sub-commands for this purpose: Table 2.2. CLI configuration subcommands Subcommand Usage set-cluster Sets a cluster entry in the CLI config file. 
If the referenced cluster nickname already exists, the specified information is merged in. USD oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true] set-context Sets a context entry in the CLI config file. If the referenced context nickname already exists, the specified information is merged in. USD oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>] use-context Sets the current context using the specified context nickname. USD oc config use-context <context_nickname> set Sets an individual value in the CLI config file. USD oc config set <property_name> <property_value> The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. The <property_value> is the new value being set. unset Unsets individual values in the CLI config file. USD oc config unset <property_name> The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. view Displays the merged CLI configuration currently in use. USD oc config view Displays the result of the specified CLI config file. USD oc config view --config=<specific_filename> Example usage Log in as a user that uses an access token. This token is used by the alice user: USD oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0 View the cluster entry automatically created: USD oc config view Example output apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0 Update the current context to have users log in to the desired namespace: USD oc config set-context `oc config current-context` --namespace=<project_name> Examine the current context, to confirm that the changes are implemented: USD oc whoami -c All subsequent CLI operations uses the new context, unless otherwise specified by overriding CLI options or until the context is switched. 2.4.3. Load and merge rules You can follow these rules, when issuing CLI operations for the loading and merging order for the CLI configuration: CLI config files are retrieved from your workstation, using the following hierarchy and merge rules: If the --config option is set, then only that file is loaded. The flag is set once and no merging takes place. If the USDKUBECONFIG environment variable is set, then it is used. The variable can be a list of paths, and if so the paths are merged together. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list. Otherwise, the ~/.kube/config file is used and no merging takes place. The context to use is determined based on the first match in the following flow: The value of the --context option. The current-context value from the CLI config file. An empty value is allowed at this stage. The user and cluster to use is determined. 
At this point, you may or may not have a context; they are built based on the first match in the following flow, which is run once for the user and once for the cluster: The value of the --user for user name and --cluster option for cluster name. If the --context option is present, then use the context's value. An empty value is allowed at this stage. The actual cluster information to use is determined. At this point, you may or may not have cluster information. Each piece of the cluster information is built based on the first match in the following flow: The values of any of the following command line options: --server , --api-version --certificate-authority --insecure-skip-tls-verify If cluster information and a value for the attribute is present, then use it. If you do not have a server location, then there is an error. The actual user information to use is determined. Users are built using the same rules as clusters, except that you can only have one authentication technique per user; conflicting techniques cause the operation to fail. Command line options take precedence over config file values. Valid command line options are: --auth-path --client-certificate --client-key --token For any information that is still missing, default values are used and prompts are given for additional information. 2.5. Extending the OpenShift CLI with plugins You can write and install plugins to build on the default oc commands, allowing you to perform new and more complex tasks with the OpenShift CLI. 2.5.1. Writing CLI plugins You can write a plugin for the OpenShift CLI in any programming language or script that allows you to write command-line commands. Note that you can not use a plugin to overwrite an existing oc command. Procedure This procedure creates a simple Bash plugin that prints a message to the terminal when the oc foo command is issued. Create a file called oc-foo . When naming your plugin file, keep the following in mind: The file must begin with oc- or kubectl- to be recognized as a plugin. The file name determines the command that invokes the plugin. For example, a plugin with the file name oc-foo-bar can be invoked by a command of oc foo bar . You can also use underscores if you want the command to contain dashes. For example, a plugin with the file name oc-foo_bar can be invoked by a command of oc foo-bar . Add the following contents to the file. #!/bin/bash # optional argument handling if [[ "USD1" == "version" ]] then echo "1.0.0" exit 0 fi # optional argument handling if [[ "USD1" == "config" ]] then echo USDKUBECONFIG exit 0 fi echo "I am a plugin named kubectl-foo" After you install this plugin for the OpenShift CLI, it can be invoked using the oc foo command. Additional resources Review the Sample plugin repository for an example of a plugin written in Go. Review the CLI runtime repository for a set of utilities to assist in writing plugins in Go. 2.5.2. Installing and using CLI plugins After you write a custom plugin for the OpenShift CLI, you must install the plugin before use. Prerequisites You must have the oc CLI tool installed. You must have a CLI plugin file that begins with oc- or kubectl- . Procedure If necessary, update the plugin file to be executable. USD chmod +x <plugin_file> Place the file anywhere in your PATH , such as /usr/local/bin/ . USD sudo mv <plugin_file> /usr/local/bin/. Run oc plugin list to make sure that the plugin is listed. 
USD oc plugin list Example output The following compatible plugins are available: /usr/local/bin/<plugin_file> If your plugin is not listed here, verify that the file begins with oc- or kubectl- , is executable, and is on your PATH . Invoke the new command or option introduced by the plugin. For example, if you built and installed the kubectl-ns plugin from the Sample plugin repository , you can use the following command to view the current namespace. USD oc ns Note that the command to invoke the plugin depends on the plugin file name. For example, a plugin with the file name of oc-foo-bar is invoked by the oc foo bar command. 2.6. OpenShift CLI developer command reference This reference provides descriptions and example commands for OpenShift CLI ( oc ) developer commands. Run oc help to list all commands or run oc <command> --help to get additional details for a specific command. 2.6.1. OpenShift CLI (oc) developer commands 2.6.1.1. oc annotate Update the annotations on a resource Example usage # Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in "pod.json" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description- 2.6.1.2. oc api-resources Print the supported API resources on the server Example usage # Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io 2.6.1.3. oc api-versions Print the supported API versions on the server, in the form of "group/version" Example usage # Print the supported API versions oc api-versions 2.6.1.4. oc apply Apply a configuration to a resource by file name or stdin Example usage # Apply the configuration in pod.json to a pod oc apply -f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap 2.6.1.5. 
oc apply edit-last-applied Edit latest last-applied-configuration annotations of a resource/object Example usage # Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json 2.6.1.6. oc apply set-last-applied Set the last-applied-configuration annotation on a live object to match the contents of a file Example usage # Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true 2.6.1.7. oc apply view-last-applied View the latest last-applied-configuration annotations of a resource/object Example usage # View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json 2.6.1.8. oc attach Attach to a running container Example usage # Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx 2.6.1.9. oc auth can-i Check whether an action is allowed Example usage # Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if service account "foo" of namespace "dev" can list pods # in the namespace "prod". # You must be allowed to use impersonation for the global option "--as". oc auth can-i list pods --as=system:serviceaccount:dev:foo -n prod # Check to see if I can do everything in my current namespace ("*" means all) oc auth can-i '*' '*' # Check to see if I can get the job named "bar" in namespace "foo" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace "foo" oc auth can-i --list --namespace=foo 2.6.1.10. oc auth reconcile Reconciles rules for RBAC role, role binding, cluster role, and cluster role binding objects Example usage # Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml 2.6.1.11. oc auth whoami Experimental: Check self subject attributes Example usage # Get your subject attributes. oc auth whoami # Get your subject attributes in JSON format. oc auth whoami -o json 2.6.1.12. 
oc autoscale Autoscale a deployment config, deployment, replica set, stateful set, or replication controller Example usage # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller "foo", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80 2.6.1.13. oc cancel-build Cancel running, pending, or new builds Example usage # Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new 2.6.1.14. oc cluster-info Display cluster information Example usage # Print the address of the control plane and cluster services oc cluster-info 2.6.1.15. oc cluster-info dump Dump relevant information for debugging and diagnosis Example usage # Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state 2.6.1.16. oc completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Example usage # Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. 
## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf " # oc shell completion source 'USDHOME/.kube/completion.bash.inc' " >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > "USD{fpath[1]}/_oc" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > USDHOME\.kube\completion.ps1 Add-Content USDPROFILE "USDHOME\.kube\completion.ps1" ## Execute completion code in the profile Add-Content USDPROFILE "if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }" ## Add completion code directly to the USDPROFILE script oc completion powershell >> USDPROFILE 2.6.1.17. oc config current-context Display the current-context Example usage # Display the current-context oc config current-context 2.6.1.18. oc config delete-cluster Delete the specified cluster from the kubeconfig Example usage # Delete the minikube cluster oc config delete-cluster minikube 2.6.1.19. oc config delete-context Delete the specified context from the kubeconfig Example usage # Delete the context for the minikube cluster oc config delete-context minikube 2.6.1.20. oc config delete-user Delete the specified user from the kubeconfig Example usage # Delete the minikube user oc config delete-user minikube 2.6.1.21. oc config get-clusters Display clusters defined in the kubeconfig Example usage # List the clusters that oc knows about oc config get-clusters 2.6.1.22. oc config get-contexts Describe one or many contexts Example usage # List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context 2.6.1.23. oc config get-users Display users defined in the kubeconfig Example usage # List the users that oc knows about oc config get-users 2.6.1.24. oc config new-admin-kubeconfig Generate, make the server trust, and display a new admin.kubeconfig Example usage # Generate a new admin kubeconfig oc config new-admin-kubeconfig 2.6.1.25. oc config new-kubelet-bootstrap-kubeconfig Generate, make the server trust, and display a new kubelet /etc/kubernetes/kubeconfig Example usage # Generate a new kubelet bootstrap kubeconfig oc config new-kubelet-bootstrap-kubeconfig 2.6.1.26. oc config refresh-ca-bundle Update the OpenShift CA bundle by contacting the API server Example usage # Refresh the CA bundle for the current context's cluster oc config refresh-ca-bundle # Refresh the CA bundle for the cluster named e2e in your kubeconfig oc config refresh-ca-bundle e2e # Print the CA bundle from the current OpenShift cluster's API server oc config refresh-ca-bundle --dry-run 2.6.1.27. 
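Taken together, the get-* and delete-* subcommands above are useful for pruning stale entries from a kubeconfig. A short sketch, assuming a retired cluster whose context, cluster, and user entries are all named old-cluster (hypothetical names):
# See what is currently defined before deleting anything
oc config get-contexts
oc config get-clusters
oc config get-users
# Remove the stale context, cluster, and user entries
oc config delete-context old-cluster
oc config delete-cluster old-cluster
oc config delete-user old-cluster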
oc config rename-context Rename a context from the kubeconfig file Example usage # Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name 2.6.1.28. oc config set Set an individual value in a kubeconfig file Example usage # Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo "cert_data_here" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true 2.6.1.29. oc config set-cluster Set a cluster entry in kubeconfig Example usage # Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set the custom TLS server name to use for validation for the e2e cluster entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set the proxy URL for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4 2.6.1.30. oc config set-context Set a context entry in kubeconfig Example usage # Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin 2.6.1.31. 
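The set-cluster and set-context subcommands are typically combined to wire up a new cluster entry by hand and then switch to it. A minimal sketch with hypothetical names and server URL:
# Define the cluster entry, embedding the CA certificate data
oc config set-cluster staging --server=https://api.staging.example.com:6443 --certificate-authority=~/.kube/staging-ca.crt --embed-certs
# Bind the cluster to a context with a user and default namespace, then switch to it
oc config set-context staging --cluster=staging --user=staging-admin --namespace=default
oc config use-context staging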
oc config set-credentials Set a user entry in kubeconfig Example usage # Set only the "client-key" field on the "cluster-admin" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the "cluster-admin" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the "cluster-admin" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the "cluster-admin" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the "cluster-admin" entry with additional arguments oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the "client-secret" config value for the OpenID Connect auth provider for the "cluster-admin" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Enable new exec auth plugin for the "cluster-admin" entry with interactive mode oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 --exec-interactive-mode=Never # Define new exec auth plugin arguments for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-env=var-to-remove- 2.6.1.32. oc config unset Unset an individual value in a kubeconfig file Example usage # Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace 2.6.1.33. oc config use-context Set the current-context in a kubeconfig file Example usage # Use the context for the minikube cluster oc config use-context minikube 2.6.1.34. oc config view Display merged kubeconfig settings or a specified kubeconfig file Example usage # Show merged kubeconfig settings oc config view # Show merged kubeconfig settings, raw certificate data, and exposed secrets oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}' 2.6.1.35. oc cp Copy files and directories to and from containers Example usage # !!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. 
# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar 2.6.1.36. oc create Create a resource from a file or from stdin Example usage # Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json 2.6.1.37. oc create build Create a new build Example usage # Create a new build oc create build myapp 2.6.1.38. oc create clusterresourcequota Create a cluster resource quota Example usage # Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10 2.6.1.39. oc create clusterrole Create a cluster role Example usage # Create a cluster role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named "pod-reader" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named "foo" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named "foo" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name "foo" with NonResourceURL specified oc create clusterrole "foo" --verb=get --non-resource-url=/logs/* # Create a cluster role name "monitoring" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true" 2.6.1.40. oc create clusterrolebinding Create a cluster role binding for a particular cluster role Example usage # Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1 2.6.1.41. 
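The pod.json referenced by oc create -f is an ordinary Kubernetes manifest in JSON form. A minimal sketch (the pod name, container name, and image are placeholders; any small image works):
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "hello-pod" },
  "spec": {
    "containers": [
      {
        "name": "hello",
        "image": "registry.access.redhat.com/ubi9/ubi-minimal",
        "command": ["sleep", "3600"]
      }
    ]
  }
}
Because JSON does not allow comments, longer manifests are usually easier to maintain in YAML; oc create -f accepts either format.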
oc create configmap Create a config map from a local file, directory or literal value Example usage # Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env 2.6.1.42. oc create cronjob Create a cron job with the specified name Example usage # Create a cron job oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- date 2.6.1.43. oc create deployment Create a deployment with the specified name Example usage # Create a deployment named my-dep that runs the busybox image oc create deployment my-dep --image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701 # Create a deployment named my-dep that runs multiple containers oc create deployment my-dep --image=busybox:latest --image=ubuntu:latest --image=nginx 2.6.1.44. oc create deploymentconfig Create a deployment config with default options that uses a given image Example usage # Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx 2.6.1.45. oc create identity Manually create an identity (only needed if automatic creation is disabled) Example usage # Create an identity with identity provider "acme_ldap" and the identity provider username "adamjones" oc create identity acme_ldap:adamjones 2.6.1.46. oc create imagestream Create a new empty image stream Example usage # Create a new image stream oc create imagestream mysql 2.6.1.47. oc create imagestreamtag Create a new image stream tag Example usage # Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0 2.6.1.48. 
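The --from-env-file flag expects a plain KEY=VALUE file. A small sketch of what a file such as foo.env might look like (keys and values are hypothetical):
# foo.env -- one KEY=VALUE pair per line; lines starting with # are ignored
DATABASE_HOST=db.example.com
DATABASE_PORT=5432
LOG_LEVEL=debug
Each key in the file becomes a key in the resulting config map, which can be confirmed with:
oc create configmap my-config --from-env-file=foo.env
oc get configmap my-config -o yaml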
oc create ingress Create an ingress with the specified name Example usage # Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a TLS secret "my-cert" oc create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert" # Create a catch all ingress of "/path" pointing to service svc:port and Ingress Class as "otheringress" oc create ingress catch-all --class=otheringress --rule="/path=svc:port" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule="foo.com/bar=svc:port" \ --annotation ingress.annotation1=foo \ --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default \ --rule="foo.com/=svc:port" \ --rule="foo.com/admin/=svcadmin:portadmin" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default \ --rule="foo.com/path*=svc:8080" \ --rule="bar.com/admin*=svc2:http" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default \ --rule="foo.com/=svc:https,tls" \ --rule="foo.com/path/subpath*=othersvc:8080" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default \ --rule="foo.com/*=svc:8080,tls=secret1" # Create an ingress with a default backend oc create ingress ingdefault --class=default \ --default-backend=defaultsvc:http \ --rule="foo.com/*=svc:8080,tls=secret1" 2.6.1.49. oc create job Create a job with the specified name Example usage # Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named "a-cronjob" oc create job test-job --from=cronjob/a-cronjob 2.6.1.50. oc create namespace Create a namespace with the specified name Example usage # Create a new namespace named my-namespace oc create namespace my-namespace 2.6.1.51. oc create poddisruptionbudget Create a pod disruption budget with the specified name Example usage # Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50% 2.6.1.52. oc create priorityclass Create a priority class with the specified name Example usage # Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description="high priority" # Create a priority class named default-priority that is considered as the global default priority oc create priorityclass default-priority --value=1000 --global-default=true --description="default priority" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description="high priority" --preemption-policy="Never" 2.6.1.53. 
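Any of the create subcommands can be combined with --dry-run=client -o yaml to preview the object before it is persisted, which is a convenient way to see what a one-line command expands to. For the pod disruption budget example above:
oc create poddisruptionbudget my-pdb --selector=app=nginx --min-available=50% --dry-run=client -o yaml
which prints approximately the following (exact output can vary by cluster version):
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 50%
  selector:
    matchLabels:
      app: nginx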
oc create quota Create a quota with the specified name Example usage # Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort 2.6.1.54. oc create role Create a role with single rule Example usage # Create a role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named "pod-reader" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named "foo" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named "foo" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status 2.6.1.55. oc create rolebinding Create a role binding for a particular role or cluster role Example usage # Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1 # Create a role binding for serviceaccount monitoring:sa-dev using the admin role oc create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev 2.6.1.56. oc create route edge Create a route that uses edge TLS termination Example usage # Create an edge route named "my-route" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets 2.6.1.57. oc create route passthrough Create a route that uses passthrough TLS termination Example usage # Create a passthrough route named "my-route" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com 2.6.1.58. oc create route reencrypt Create a route that uses reencrypt TLS termination Example usage # Create a route named "my-route" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend 2.6.1.59. oc create secret docker-registry Create a secret for use with a Docker registry Example usage # If you do not already have a .dockercfg file, create a dockercfg secret directly oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json 2.6.1.60. 
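A docker-registry secret is usually followed by linking it to a service account so that image pulls use it automatically. A short sketch with placeholder registry host and credentials:
# Create the pull secret and attach it to the default service account for image pulls
oc create secret docker-registry my-pull-secret --docker-server=registry.example.com --docker-username=builder --docker-password=S3cretPass --docker-email=builder@example.com
oc secrets link default my-pull-secret --for=pull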
oc create secret generic Create a secret from a local file, directory, or literal value Example usage # Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env 2.6.1.61. oc create secret tls Create a TLS secret Example usage # Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key 2.6.1.62. oc create service clusterip Create a ClusterIP service Example usage # Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip="None" 2.6.1.63. oc create service externalname Create an ExternalName service Example usage # Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com 2.6.1.64. oc create service loadbalancer Create a LoadBalancer service Example usage # Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080 2.6.1.65. oc create service nodeport Create a NodePort service Example usage # Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080 2.6.1.66. oc create serviceaccount Create a service account with the specified name Example usage # Create a new service account named my-service-account oc create serviceaccount my-service-account 2.6.1.67. oc create token Request a service account token Example usage # Request a token to authenticate to the kube-apiserver as the service account "myapp" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific UID oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc 2.6.1.68. oc create user Manually create a user (only needed if automatic creation is disabled) Example usage # Create a user with the username "ajones" and the display name "Adam Jones" oc create user ajones --full-name="Adam Jones" 2.6.1.69. oc create useridentitymapping Manually map an identity to a user Example usage # Map the identity "acme_ldap:adamjones" to the user "ajones" oc create useridentitymapping acme_ldap:adamjones ajones 2.6.1.70. 
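A token minted with oc create token can be used directly as a bearer credential. A sketch, assuming the service account my-service-account already exists and using a hypothetical API server URL; note that logging in this way switches the current session to the service account's identity:
TOKEN=$(oc create token my-service-account --duration 10m)
oc login --token="${TOKEN}" --server=https://api.cluster.example.com:6443
oc auth whoami   # should now report the service account identity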
oc debug Launch a new instance of a pod for debugging Example usage # Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Debug a Windows node # Note: the chosen image must match the Windows Server version (2019, 2022) of the node oc debug node/win-worker-1 --image=mcr.microsoft.com/powershell:lts-nanoserver-ltsc2022 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns 2.6.1.71. oc delete Delete resources by file names, stdin, resources and names, or by resources and label selector Example usage # Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names "baz" and "foo" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all 2.6.1.72. oc describe Show details of a specific resource or group of resources Example usage # Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in "pod.json" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe pods -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend 2.6.1.73. oc diff Diff the live version against a would-be applied version Example usage # Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f - 2.6.1.74. oc edit Edit a resource on the server Example usage # Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR="nano" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the 'status' subresource for the 'mydeployment' deployment oc edit deployment mydeployment --subresource='status' 2.6.1.75. 
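oc diff pairs naturally with oc apply for reviewing a change before it lands; oc diff exits with status 1 when it finds differences, so it can also gate the apply step in scripts. A minimal sketch with a hypothetical manifest file:
# Review what would change on the live object, then apply once the diff looks right
oc diff -f deployment.yaml
oc apply -f deployment.yaml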
oc events List events Example usage # List recent events in the default namespace oc events # List recent events in all namespaces oc events --all-namespaces # List recent events for the specified pod, then wait for more events and list them as they arrive oc events --for pod/web-pod-13je7 --watch # List recent events in YAML format oc events -oyaml # List recent only events of type 'Warning' or 'Normal' oc events --types=Warning,Normal 2.6.1.76. oc exec Execute a command in a container Example usage # Get output from running the 'date' command from pod mypod, using the first container by default oc exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not "ls -t /usr") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date 2.6.1.77. oc explain Get documentation for a resource Example usage # Get the documentation of the resource and its fields oc explain pods # Get all the fields in the resource oc explain pods --recursive # Get the explanation for deployment in supported api versions oc explain deployments --api-version=apps/v1 # Get the documentation of a specific field of a resource oc explain pods.spec.containers # Get the documentation of resources in different format oc explain deployment --output=plaintext-openapiv2 2.6.1.78. oc expose Expose a replicated application as a service or route Example usage # Create a route based on service nginx. The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx 2.6.1.79. oc extract Extract secrets or config maps to disk Example usage # Extract the secret "test" to the current directory oc extract secret/test # Extract the config map "nginx" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map "nginx" to STDOUT oc extract configmap/nginx --to=- # Extract only the key "nginx.conf" from config map "nginx" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf 2.6.1.80. 
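oc extract and oc set data (described later in this reference) form a convenient round trip for editing a single key of a config map on disk. A sketch with hypothetical names and paths:
# Dump the config map's keys to files, edit one, and push the change back
mkdir -p /tmp/nginx-conf
oc extract configmap/nginx --to=/tmp/nginx-conf
vi /tmp/nginx-conf/nginx.conf
oc set data configmap/nginx --from-file=/tmp/nginx-conf/nginx.conf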
oc get Display one or many resources Example usage # List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the "v1" version of the "apps" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in "pod.yaml" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List the 'status' subresource for a single pod oc get pod web-pod-13je7 --subresource status 2.6.1.81. oc get-token Experimental: Get token from external OIDC issuer as credentials exec plugin Example usage # Starts an auth code flow to the issuer URL with the client ID and the given extra scopes oc get-token --client-id=client-id --issuer-url=test.issuer.url --extra-scopes=email,profile # Starts an auth code flow to the issuer URL with a different callback address oc get-token --client-id=client-id --issuer-url=test.issuer.url --callback-address=127.0.0.1:8343 2.6.1.82. oc idle Idle scalable resources Example usage # Idle the scalable controllers associated with the services listed in to-idle.txt USD oc idle --resource-names-file to-idle.txt 2.6.1.83. 
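Beyond the built-in output formats, jsonpath templates make oc get easy to script. A small sketch that lists every container name and image in a hypothetical deployment:
oc get deployment my-dep -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{"\t"}{.image}{"\n"}{end}'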
oc image append Add layers to images and push them to a registry Example usage # Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{"Entrypoint":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in USD(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in USD(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk (USD(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: The first image in the manifest list that matches the filter will be returned when --keep-manifest-list is not specified oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests when keep-manifest-list is specified oc image append --from docker.io/library/busybox:latest --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests that is specified by the filter, while preserving the manifestlist oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz 2.6.1.84. oc image extract Copy files from an image to the file system Example usage # Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract; pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:. 
# Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory (USD(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty (USD(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:] 2.6.1.85. oc image info Display information about an image Example usage # Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64 2.6.1.86. oc image mirror Mirror images from one repository to another Example usage # Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable \ docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only the filtered manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=os/arch # Copy all os/arch manifests 
of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=.* # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that the target registry may reject a manifest list if the platform specific images do not all exist # You must use a registry with sparse registry support enabled oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=linux/386 \ --keep-manifest-list=true 2.6.1.87. oc import-image Import images from a container image registry Example usage # Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Update imported data for a tag that points to a manifest list to include the full manifest list oc import-image mystream --import-mode=PreserveOriginal # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm 2.6.1.88. oc kustomize Build a kustomization target from a directory or URL Example usage # Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6 2.6.1.89. oc label Update the labels on a resource Example usage # Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in "pod.json" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar- 2.6.1.90. oc login Log in to a server Example usage # Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass # Log in to the given server through a browser oc login localhost:8443 --web --callback-port 8280 # Log in to the external OIDC issuer through Auth Code + PKCE by starting a local server listening on port 8080 oc login localhost:8443 --exec-plugin=oc-oidc --client-id=client-id --extra-scopes=email,profile --callback-port=8080 2.6.1.91. 
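oc kustomize builds whatever the kustomization.yaml in the target directory describes. A minimal sketch of such a file (resource file names, label, and image tag are hypothetical), typically piped straight into oc apply:
# kustomization.yaml
resources:
- deployment.yaml
- service.yaml
commonLabels:
  app: hello
images:
- name: hello
  newTag: v2
oc kustomize . | oc apply -f -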
oc logout End the current server session Example usage # Log out oc logout 2.6.1.92. oc logs Print the logs for a container in a pod Example usage # Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container 2.6.1.93. oc new-app Create a new application Example usage # List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Use an image with the full manifest list to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match "ruby" oc new-app --search ruby # Search for "ruby", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for "ruby" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml 2.6.1.94. oc new-build Create a new build configuration Example usage # Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . 
--image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D USD'FROM centos:7\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config using an image with the full manifest list to create an app and override application artifacts' names oc new-build --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp 2.6.1.95. oc new-project Request a new project Example usage # Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name="Web Team Development" --description="Development project for the web team." 2.6.1.96. oc observe Observe changes to resources and react to them (experimental) Example usage # Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh 2.6.1.97. oc patch Update fields of a resource Example usage # Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p USD'spec:\n unschedulable: true' # Partially update a node identified by the type and name specified in "node.json" using strategic merge patch oc patch -f node.json -p '{"spec":{"unschedulable":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]' # Update a deployment's replicas through the 'scale' subresource using a merge patch oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}' 2.6.1.98. 
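For oc patch, a merge patch is often the simplest way to add or change a single field, and the patch body can just as easily come from a file via shell substitution. A sketch with hypothetical deployment, file, and label values:
# Add (or update) a label with a merge patch
oc patch deployment my-dep --type=merge -p '{"metadata":{"labels":{"env":"staging"}}}'
# The same patch body stored in a file
oc patch deployment my-dep --type=merge -p "$(cat set-env-label.json)"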
oc plugin list List all visible plugin executables on a user's PATH Example usage # List all available plugins oc plugin list 2.6.1.99. oc policy add-role-to-user Add a role to users or service accounts for the current project Example usage # Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z serviceaccount1 2.6.1.100. oc policy scc-review Check which service account can create a pod Example usage # Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml 2.6.1.101. oc policy scc-subject-review Check whether a user or a service account can create a pod Example usage # Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml 2.6.1.102. oc port-forward Forward one or more local ports to a pod Example usage # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000 2.6.1.103. 
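oc port-forward blocks while the tunnel is open, so in scripts it is commonly backgrounded and cleaned up afterwards. A sketch with hypothetical service and endpoint names:
# Forward the service's https port in the background, probe it locally, then tear the tunnel down
oc port-forward service/myservice 8443:https &
PF_PID=$!
sleep 2                     # give the tunnel a moment to establish
curl -k https://localhost:8443/healthz
kill "${PF_PID}"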
oc process Process a template into list of resources Example usage # Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f - 2.6.1.104. oc project Switch to another project Example usage # Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project 2.6.1.105. oc projects Display existing projects Example usage # List all projects oc projects 2.6.1.106. oc proxy Run a proxy to the Kubernetes API server Example usage # To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api 2.6.1.107. oc registry login Log in to the integrated registry Example usage # Log in to the integrated registry oc registry login # Log in to different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS 2.6.1.108. oc replace Replace a resource by file name or stdin Example usage # Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\(image: myimage\):.*USD/\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json 2.6.1.109. oc rollback Revert part of an application back to a deployment Example usage # Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json 2.6.1.110. oc rollout cancel Cancel the in-progress deployment Example usage # Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx 2.6.1.111. 
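A file consumed by oc process is a Template object whose parameters are substituted into the listed objects. A minimal sketch of what a template.json might contain (template name, parameter, and port are hypothetical):
{
  "apiVersion": "template.openshift.io/v1",
  "kind": "Template",
  "metadata": { "name": "mytemplate" },
  "parameters": [
    { "name": "APP_NAME", "value": "hello" }
  ],
  "objects": [
    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": { "name": "${APP_NAME}" },
      "spec": {
        "ports": [ { "port": 8080 } ],
        "selector": { "app": "${APP_NAME}" }
      }
    }
  ]
}
oc process -f template.json -p APP_NAME=myapp | oc create -f - then renders the parameterized objects and creates them.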
oc rollout history View rollout history Example usage # View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3 2.6.1.112. oc rollout latest Start a new rollout for a deployment config with the latest state from its triggers Example usage # Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json 2.6.1.113. oc rollout pause Mark the provided resource as paused Example usage # Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx 2.6.1.114. oc rollout restart Restart a resource Example usage # Restart all deployments in test-namespace namespace oc rollout restart deployment -n test-namespace # Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx 2.6.1.115. oc rollout resume Resume a paused resource Example usage # Resume an already paused deployment oc rollout resume dc/nginx 2.6.1.116. oc rollout retry Retry the latest failed rollout Example usage # Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend 2.6.1.117. oc rollout status Show the status of the rollout Example usage # Watch the status of the latest rollout oc rollout status dc/nginx 2.6.1.118. oc rollout undo Undo a rollout Example usage # Roll back to the deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3 2.6.1.119. oc rsh Start a shell session in a container Example usage # Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/scheduled 2.6.1.120. oc rsync Copy files between a local file system and a pod Example usage # Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir 2.6.1.121. 
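rollout pause and resume are often used to batch several template changes into a single rollout. A sketch against a hypothetical deployment:
oc rollout pause deployment/nginx
oc set image deployment/nginx nginx=nginx:1.25        # first change, no rollout yet
oc set env deployment/nginx LOG_LEVEL=debug           # second change, still no rollout
oc rollout resume deployment/nginx                    # both changes roll out together
oc rollout status deployment/nginx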
oc run Run a particular image on the cluster Example usage # Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables "DNS_DOMAIN=cluster" and "POD_NAMESPACE=default" in the container oc run hazelcast --image=hazelcast/hazelcast --env="DNS_DOMAIN=cluster" --env="POD_NAMESPACE=default" # Start a hazelcast pod and set labels "app=hazelcast" and "env=prod" in the container oc run hazelcast --image=hazelcast/hazelcast --labels="app=hazelcast,env=prod" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... <argN> 2.6.1.122. oc scale Set a new size for a deployment, replica set, or replication controller Example usage # Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in "foo.yaml" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/example1 rc/example2 rc/example3 # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web 2.6.1.123. oc secrets link Link secrets to a service account Example usage # Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount 2.6.1.124. oc secrets unlink Detach secrets from a service account Example usage # Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name ... 2.6.1.125. oc set build-hook Update a build hook on a build config Example usage # Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script="/var/lib/test-image.sh param1 param2 && /var/lib/done.sh" 2.6.1.126. oc set build-secret Update a build secret on a build config Example usage # Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret 2.6.1.127. 
oc set data Update the data within a config map or secret Example usage # Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir 2.6.1.128. oc set deployment-hook Update a deployment hook on a deployment config Example usage # Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh 2.6.1.129. oc set env Update environment variables on a pod template Example usage # Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers="c1" ENV- # Remove the environment variable ENV from a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server oc set env | grep RAILS_ | oc env -e - dc/myapp 2.6.1.130. oc set image Update the image of a pod template Example usage # Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in YAML format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml 2.6.1.131. 
oc set image-lookup Change how images are resolved when deploying applications Example usage # Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all 2.6.1.132. oc set probe Update a probe on a pod template Example usage # Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30 2.6.1.133. oc set resources Update resource requests/limits on objects with pod templates Example usage # Set a deployments nginx container CPU limits to "200m and memory to 512Mi" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml 2.6.1.134. oc set route-backends Update the backends for a route Example usage # Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10%% relative to a oc set route-backends web --adjust b=+10%% # Set traffic percentage going to b to 10%% of the traffic going to a oc set route-backends web --adjust b=10%% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the weight to all backends to zero oc set route-backends web --zero 2.6.1.135. oc set selector Set the selector on a resource Example usage # Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip="None" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f - 2.6.1.136. 
oc set serviceaccount Update the service account of a resource Example usage # Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml 2.6.1.137. oc set subject Update the user, group, or service account in a role binding or cluster role binding Example usage # Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml 2.6.1.138. oc set triggers Update the triggers on one or more objects Example usage # Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main 2.6.1.139. oc set volumes Update volumes on a pod template Example usage # List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (PVC) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount "v1" from container "c1" # (and by removing the volume "v1" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) oc set volume dc/myapp --add -m /data --source=<json-string> 2.6.1.140. 
oc start-build Start a new build Example usage # Starts build from build config "hello-world" oc start-build hello-world # Starts build from a build "hello-world-1" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config "hello-world" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config "hello-world" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait 2.6.1.141. oc status Show an overview of the current project Example usage # See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest 2.6.1.142. oc tag Tag existing images into image streams Example usage # Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Tag an external container image and include the full manifest list oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d 2.6.1.143. oc version Print the client and server version information Example usage # Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context in JSON format oc version --output json # Print the OpenShift client version information for the current context oc version --client 2.6.1.144. oc wait Experimental: Wait for a specific condition on one or many resources Example usage # Wait for the pod "busybox1" to contain the status condition of type "Ready" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity) oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod "busybox1" to contain the status phase to be "Running" oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for pod "busybox1" to be Ready oc wait --for='jsonpath={.status.conditions[?(@.type=="Ready")].status}=True' pod/busybox1 # Wait for the service "loadbalancer" to have ingress. 
oc wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer # Wait for the pod "busybox1" to be deleted, with a timeout of 60s, after having issued the "delete" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s 2.6.1.145. oc whoami Return information about the current session Example usage # Display the currently authenticated user oc whoami 2.7. OpenShift CLI administrator command reference This reference provides descriptions and example commands for OpenShift CLI ( oc ) administrator commands. You must have cluster-admin or equivalent permissions to use these commands. For developer commands, see the OpenShift CLI developer command reference . Run oc adm -h to list all administrator commands or run oc <command> --help to get additional details for a specific command. 2.7.1. OpenShift CLI (oc) administrator commands 2.7.1.1. oc adm build-chain Output the inputs and dependencies of your builds Example usage # Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all 2.7.1.2. oc adm catalog mirror Mirror an operator-registry catalog Example usage # Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageDigestMirrorSet.yaml # Edit the mirroring mappings and mirror with "oc image mirror" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageDigestMirrorSets generated by oc adm catalog mirror oc delete imagedigestmirrorset -l operators.openshift.org/catalog=true 2.7.1.3. oc adm certificate approve Approve a certificate signing request Example usage # Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp 2.7.1.4. oc adm certificate deny Deny a certificate signing request Example usage # Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp 2.7.1.5. oc adm copy-to-node Copy specified files to the node Example usage # Copy a new bootstrap kubeconfig file to node-0 oc adm copy-to-node --copy=new-bootstrap-kubeconfig=/etc/kubernetes/kubeconfig node/node-0 2.7.1.6. oc adm cordon Mark node as unschedulable Example usage # Mark node "foo" as unschedulable oc adm cordon foo 2.7.1.7. oc adm create-bootstrap-project-template Create a bootstrap project template Example usage # Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml 2.7.1.8. oc adm create-error-template Create an error page template Example usage # Output a template for the error page to stdout oc adm create-error-template 2.7.1.9. 
oc adm create-login-template Create a login template Example usage # Output a template for the login page to stdout oc adm create-login-template 2.7.1.10. oc adm create-provider-selection-template Create a provider selection template Example usage # Output a template for the provider selection page to stdout oc adm create-provider-selection-template 2.7.1.11. oc adm drain Drain node in preparation for maintenance Example usage # Drain node "foo", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900 2.7.1.12. oc adm groups add-users Add users to a group Example usage # Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2 2.7.1.13. oc adm groups new Create a new group Example usage # Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name 2.7.1.14. oc adm groups prune Remove old OpenShift groups referencing missing records from an external provider Example usage # Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm groups prune --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm groups prune --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm 2.7.1.15. oc adm groups remove-users Remove users from a group Example usage # Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2 2.7.1.16. oc adm groups sync Sync OpenShift groups with records from an external provider Example usage # Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in an allowlist file with an LDAP server oc adm groups sync --whitelist=/path/to/allowlist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm 2.7.1.17. 
oc adm inspect Collect debugging data for a given resource Example usage # Collect debugging data for the "openshift-apiserver" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the "openshift-apiserver" and "kube-apiserver" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions 2.7.1.18. oc adm migrate icsp Update imagecontentsourcepolicy file(s) to imagedigestmirrorset file(s) Example usage # Update the imagecontentsourcepolicy.yaml file to a new imagedigestmirrorset file under the mydir directory oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir 2.7.1.19. oc adm migrate template-instances Update template instances to point to the latest group-version-kinds Example usage # Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm 2.7.1.20. oc adm must-gather Launch a new instance of a pod for gathering debug information Example usage # Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod directory oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh 2.7.1.21. oc adm new-project Create a new project Example usage # Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east' 2.7.1.22. oc adm node-image create Create an ISO image for booting the nodes to be added to the target cluster Example usage # Create the ISO image and download it in the current folder oc adm node-image create # Use a different assets folder oc adm node-image create --dir=/tmp/assets # Specify a custom image name oc adm node-image create -o=my-node.iso # Create an ISO to add a single node without using the configuration file oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb # Create an ISO to add a single node with a root device hint and without # using the configuration file oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb --root-device-hint=deviceName:/dev/sda 2.7.1.23. oc adm node-image monitor Monitor new nodes being added to an OpenShift cluster Example usage # Monitor a single node being added to a cluster oc adm node-image monitor --ip-addresses 192.168.111.83 # Monitor multiple nodes being added to a cluster by separating each IP address with a comma oc adm node-image monitor --ip-addresses 192.168.111.83,192.168.111.84 2.7.1.24. 
oc adm node-logs Display and filter node logs Example usage # Show kubelet logs from all control plane nodes oc adm node-logs --role master -u kubelet # See what logs are available in control plane nodes in /var/log oc adm node-logs --role master --path=/ # Display cron log file from all control plane nodes oc adm node-logs --role master --path=cron 2.7.1.25. oc adm ocp-certificates monitor-certificates Watch platform certificates Example usage # Watch platform certificates oc adm ocp-certificates monitor-certificates 2.7.1.26. oc adm ocp-certificates regenerate-leaf Regenerate client and serving certificates of an OpenShift cluster Example usage # Regenerate a leaf certificate contained in a particular secret oc adm ocp-certificates regenerate-leaf -n openshift-config-managed secret/kube-controller-manager-client-cert-key 2.7.1.27. oc adm ocp-certificates regenerate-machine-config-server-serving-cert Regenerate the machine config operator certificates in an OpenShift cluster Example usage # Regenerate the MCO certs without modifying user-data secrets oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server 2.7.1.28. oc adm ocp-certificates regenerate-top-level Regenerate the top level certificates in an OpenShift cluster Example usage # Regenerate the signing certificate contained in a particular secret oc adm ocp-certificates regenerate-top-level -n openshift-kube-apiserver-operator secret/loadbalancer-serving-signer-key 2.7.1.29. oc adm ocp-certificates remove-old-trust Remove old CAs from ConfigMaps representing platform trust bundles in an OpenShift cluster Example usage # Remove a trust bundle contained in a particular config map oc adm ocp-certificates remove-old-trust -n openshift-config-managed configmaps/kube-apiserver-aggregator-client-ca --created-before 2023-06-05T14:44:06Z # Remove only CA certificates created before a certain date from all trust bundles oc adm ocp-certificates remove-old-trust configmaps -A --all --created-before 2023-06-05T14:44:06Z 2.7.1.30. oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server Update user-data secrets in an OpenShift cluster to use updated MCO certs Example usage # Regenerate the MCO certs without modifying user-data secrets oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server 2.7.1.31. oc adm pod-network isolate-projects Isolate project network Example usage # Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret' 2.7.1.32. oc adm pod-network join-projects Join project network Example usage # Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret' 2.7.1.33.
oc adm pod-network make-projects-global Make project network global Example usage # Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share' 2.7.1.34. oc adm policy add-cluster-role-to-group Add a role to groups for all projects in the cluster Example usage # Add the 'cluster-admin' cluster role to the 'cluster-admins' group oc adm policy add-cluster-role-to-group cluster-admin cluster-admins 2.7.1.35. oc adm policy add-cluster-role-to-user Add a role to users for all projects in the cluster Example usage # Add the 'system:build-strategy-docker' cluster role to the 'devuser' user oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser 2.7.1.36. oc adm policy add-role-to-user Add a role to users or service accounts for the current project Example usage # Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1 2.7.1.37. oc adm policy add-scc-to-group Add a security context constraint to groups Example usage # Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2 2.7.1.38. oc adm policy add-scc-to-user Add a security context constraint to users or a service account Example usage # Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1 2.7.1.39. oc adm policy remove-cluster-role-from-group Remove a role from groups for all projects in the cluster Example usage # Remove the 'cluster-admin' cluster role from the 'cluster-admins' group oc adm policy remove-cluster-role-from-group cluster-admin cluster-admins 2.7.1.40. oc adm policy remove-cluster-role-from-user Remove a role from users for all projects in the cluster Example usage # Remove the 'system:build-strategy-docker' cluster role from the 'devuser' user oc adm policy remove-cluster-role-from-user system:build-strategy-docker devuser 2.7.1.41. oc adm policy scc-review Check which service account can create a pod Example usage # Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml 2.7.1.42. 
oc adm policy scc-subject-review Check whether a user or a service account can create a pod Example usage # Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml 2.7.1.43. oc adm prune builds Remove old completed and failed builds Example usage # Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm 2.7.1.44. oc adm prune deployments Remove old completed and failed deployment configs Example usage # Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments --keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm 2.7.1.45. oc adm prune groups Remove old OpenShift groups referencing missing records from an external provider Example usage # Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm prune groups --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm prune groups --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm 2.7.1.46. oc adm prune images Remove unreferenced images Example usage # See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure HTTP protocol with the particular registry host name oc adm prune images --registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm 2.7.1.47. 
oc adm prune renderedmachineconfigs Prunes rendered MachineConfigs in an OpenShift cluster Example usage # See what the prune command would delete if run with no options oc adm prune renderedmachineconfigs # To actually perform the prune operation, the confirm flag must be appended oc adm prune renderedmachineconfigs --confirm # See what the prune command would delete if run on the worker MachineConfigPool oc adm prune renderedmachineconfigs --pool-name=worker # Prunes 10 oldest rendered MachineConfigs in the cluster oc adm prune renderedmachineconfigs --count=10 --confirm # Prunes 10 oldest rendered MachineConfigs in the cluster for the worker MachineConfigPool oc adm prune renderedmachineconfigs --count=10 --pool-name=worker --confirm 2.7.1.48. oc adm prune renderedmachineconfigs list Lists rendered MachineConfigs in an OpenShift cluster Example usage # List all rendered MachineConfigs for the worker MachineConfigPool in the cluster oc adm prune renderedmachineconfigs list --pool-name=worker # List all rendered MachineConfigs in use by the cluster's MachineConfigPools oc adm prune renderedmachineconfigs list --in-use 2.7.1.49. oc adm reboot-machine-config-pool Initiate reboot of the specified MachineConfigPool Example usage # Reboot all MachineConfigPools oc adm reboot-machine-config-pool mcp/worker mcp/master # Reboot all MachineConfigPools that inherit from worker. This includes all custom MachineConfigPools and infra. oc adm reboot-machine-config-pool mcp/worker # Reboot masters oc adm reboot-machine-config-pool mcp/master 2.7.1.50. oc adm release extract Extract the contents of an update payload to disk Example usage # Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws # Use git to check out the source code for the current cluster release to DIR from linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x 2.7.1.51. oc adm release info Display information about a release Example usage # Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.11.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.11.0 4.11.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs # Show information about linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x 2.7.1.52.
oc adm release mirror Mirror a release to a different image registry location Example usage # Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.11.0 --to myregistry.local/openshift/release \ --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.11.0 --to file://openshift/release \ --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.11.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release \ --release-image-signature-to-dir /tmp/releases # Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 \ --to=registry.example.com/your/repository --apply-release-image-signature 2.7.1.53. oc adm release new Create a new OpenShift release Example usage # Create a release from the latest origin images and push to a DockerHub repository oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a release oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 \ -- 4.11.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 \ cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 2.7.1.54. oc adm restart-kubelet Restart kubelet on the specified nodes Example usage # Restart all the nodes, 10% at a time oc adm restart-kubelet nodes --all --directive=RemoveKubeletKubeconfig # Restart all the nodes, 20 nodes at a time oc adm restart-kubelet nodes --all --parallelism=20 --directive=RemoveKubeletKubeconfig # Restart all the nodes, 15% at a time oc adm restart-kubelet nodes --all --parallelism=15% --directive=RemoveKubeletKubeconfig # Restart all the masters at the same time oc adm restart-kubelet nodes -l node-role.kubernetes.io/master --parallelism=100% --directive=RemoveKubeletKubeconfig 2.7.1.55. oc adm taint Update the taints on one or more nodes Example usage # Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label myLabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule 2.7.1.56. oc adm top images Show usage statistics for images Example usage # Show usage statistics for images oc adm top images 2.7.1.57. 
oc adm top imagestreams Show usage statistics for image streams Example usage # Show usage statistics for image streams oc adm top imagestreams 2.7.1.58. oc adm top node Display resource (CPU/memory) usage of nodes Example usage # Show metrics for all nodes oc adm top node # Show metrics for a given node oc adm top node NODE_NAME 2.7.1.59. oc adm top pod Display resource (CPU/memory) usage of pods Example usage # Show metrics for all pods in the default namespace oc adm top pod # Show metrics for all pods in the given namespace oc adm top pod --namespace=NAMESPACE # Show metrics for a given pod and its containers oc adm top pod POD_NAME --containers # Show metrics for the pods defined by label name=myLabel oc adm top pod -l name=myLabel 2.7.1.60. oc adm uncordon Mark node as schedulable Example usage # Mark node "foo" as schedulable oc adm uncordon foo 2.7.1.61. oc adm upgrade Upgrade a cluster or adjust the upgrade channel Example usage # View the update status and available cluster updates oc adm upgrade # Update to the latest version oc adm upgrade --to-latest=true 2.7.1.62. oc adm verify-image-signature Verify the image identity contained in the image signature Example usage # Verify the image signature and identity using the local GPG keychain oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 # Verify the image signature and identity using the local GPG keychain and save the status oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 --save # Verify the image signature and identity via exposed registry route oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 \ --registry-url=docker-registry.foo.com # Remove all signature verifications from the image oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all 2.7.1.63. oc adm wait-for-node-reboot Wait for nodes to reboot after running oc adm reboot-machine-config-pool Example usage # Wait for all nodes to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/worker mcp/master' oc adm wait-for-node-reboot nodes --all # Wait for masters to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/master' oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master # Wait for masters to complete a specific reboot oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master --reboot-number=4 2.7.1.64. oc adm wait-for-stable-cluster Wait for the platform operators to become stable Example usage # Wait for all cluster operators to become stable oc adm wait-for-stable-cluster # Consider operators to be stable if they report as such for 5 minutes straight oc adm wait-for-stable-cluster --minimum-stable-period 5m 2.7.2. Additional resources OpenShift CLI developer command reference
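The command entries above each show isolated examples. As a brief, hedged illustration of how several of these administrator commands are commonly combined, the following sketch walks a single node through a maintenance cycle. The node name worker-1 and the 15-minute grace period are placeholder assumptions for this sketch, not values taken from this reference, and the sequence is an informal outline rather than the documented node-maintenance procedure.

# Mark the node unschedulable so no new pods are placed on it (assumed node name: worker-1)
oc adm cordon worker-1

# Evict existing workloads, allowing up to 15 minutes for graceful termination
oc adm drain worker-1 --force --grace-period=900

# ... perform the maintenance work on the node ...

# Make the node schedulable again
oc adm uncordon worker-1

# Optionally wait for the platform operators to report stable before moving on
oc adm wait-for-stable-cluster --minimum-stable-period 5m

Each command and flag used here appears in the reference above; only the node name and timing values are assumed.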
[ "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhocp-4-for-rhel-8-x86_64-rpms\"", "yum install openshift-clients", "oc <command>", "brew install openshift-cli", "oc <command>", "oc login -u user1", "Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started.", "oc login <cluster_url> --web 1", "Opening login URL in the default browser: https://openshift.example.com Opening in existing browser session.", "Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname>", "oc new-project my-project", "Now using project \"my-project\" on server \"https://openshift.example.com:6443\".", "oc new-app https://github.com/sclorg/cakephp-ex", "--> Found image 40de956 (9 days old) in imagestream \"openshift/php\" under tag \"7.2\" for \"php\" Run 'oc status' to view your app.", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none>", "oc logs cakephp-ex-1-deploy", "--> Scaling cakephp-ex-1 to 1 --> Success", "oc project", "Using project \"my-project\" on server \"https://openshift.example.com:6443\".", "oc status", "In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details.", "oc api-resources", "NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap", "oc help", "OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application", "oc create --help", "Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags]", "oc explain pods", "KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. 
FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources", "oc logout", "Logged \"user1\" out on \"https://openshift.example.com\"", "oc completion bash > oc_bash_completion", "sudo cp oc_bash_completion /etc/bash_completion.d/", "cat >>~/.zshrc<<EOF autoload -Uz compinit compinit if [ $commands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF", "apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k", "oc status", "status In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'.
You can use 'oc get all' to see lists of each of the types described in this example.", "oc project", "Using project \"joe-project\" from context named \"joe-project/openshift1.example.com:8443/alice\" on server \"https://openshift1.example.com:8443\".", "oc project alice-project", "Now using project \"alice-project\" on server \"https://openshift1.example.com:8443\".", "oc login -u system:admin -n default", "oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true]", "oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>]", "oc config use-context <context_nickname>", "oc config set <property_name> <property_value>", "oc config unset <property_name>", "oc config view", "oc config view --config=<specific_filename>", "oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0", "oc config view", "apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0", "oc config set-context `oc config current-context` --namespace=<project_name>", "oc whoami -c", "#!/bin/bash optional argument handling if [[ \"$1\" == \"version\" ]] then echo \"1.0.0\" exit 0 fi optional argument handling if [[ \"$1\" == \"config\" ]] then echo $KUBECONFIG exit 0 fi echo \"I am a plugin named kubectl-foo\"", "chmod +x <plugin_file>", "sudo mv <plugin_file> /usr/local/bin/.", "oc plugin list", "The following compatible plugins are available: /usr/local/bin/<plugin_file>", "oc ns", "Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in \"pod.json\" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description-", "Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io", "Print the supported API versions oc api-versions", "Apply the configuration in pod.json to a pod oc apply
-f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap", "Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json", "Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true", "View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json", "Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx", "Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if service account \"foo\" of namespace \"dev\" can list pods # in the namespace \"prod\". # You must be allowed to use impersonation for the global option \"--as\". oc auth can-i list pods --as=system:serviceaccount:dev:foo -n prod # Check to see if I can do everything in my current namespace (\"*\" means all) oc auth can-i '*' '*' # Check to see if I can get the job named \"bar\" in namespace \"foo\" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace \"foo\" oc auth can-i --list --namespace=foo", "Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml", "Get your subject attributes. oc auth whoami # Get your subject attributes in JSON format. 
oc auth whoami -o json", "Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80", "Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new", "Print the address of the control plane and cluster services oc cluster-info", "Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. 
## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf \" # oc shell completion source 'USDHOME/.kube/completion.bash.inc' \" >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > \"USD{fpath[1]}/_oc\" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > USDHOME\\.kube\\completion.ps1 Add-Content USDPROFILE \"USDHOME\\.kube\\completion.ps1\" ## Execute completion code in the profile Add-Content USDPROFILE \"if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }\" ## Add completion code directly to the USDPROFILE script oc completion powershell >> USDPROFILE", "Display the current-context oc config current-context", "Delete the minikube cluster oc config delete-cluster minikube", "Delete the context for the minikube cluster oc config delete-context minikube", "Delete the minikube user oc config delete-user minikube", "List the clusters that oc knows about oc config get-clusters", "List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context", "List the users that oc knows about oc config get-users", "Generate a new admin kubeconfig oc config new-admin-kubeconfig", "Generate a new kubelet bootstrap kubeconfig oc config new-kubelet-bootstrap-kubeconfig", "Refresh the CA bundle for the current context's cluster oc config refresh-ca-bundle # Refresh the CA bundle for the cluster named e2e in your kubeconfig oc config refresh-ca-bundle e2e # Print the CA bundle from the current OpenShift cluster's API server oc config refresh-ca-bundle --dry-run", "Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name", "Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo \"cert_data_here\" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set the custom TLS server name to use for validation for the e2e cluster 
entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set the proxy URL for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4", "Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin", "Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the \"cluster-admin\" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the \"cluster-admin\" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional arguments oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Enable new exec auth plugin for the \"cluster-admin\" entry with interactive mode oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 --exec-interactive-mode=Never # Define new exec auth plugin arguments for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=var-to-remove-", "Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace", "Use the context for the minikube cluster oc config use-context minikube", "Show merged kubeconfig settings oc config view # Show merged kubeconfig settings, raw certificate data, and exposed secrets oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "!!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. 
# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar", "Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json", "Create a new build oc create build myapp", "Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10", "Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named \"pod-reader\" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named \"foo\" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named \"foo\" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name \"foo\" with NonResourceURL specified oc create clusterrole \"foo\" --verb=get --non-resource-url=/logs/* # Create a cluster role name \"monitoring\" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1", "Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "Create a cron job oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "Create a deployment named my-dep that runs the busybox image oc create deployment my-dep 
--image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701 # Create a deployment named my-dep that runs multiple containers oc create deployment my-dep --image=busybox:latest --image=ubuntu:latest --image=nginx", "Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx", "Create an identity with identity provider \"acme_ldap\" and the identity provider username \"adamjones\" oc create identity acme_ldap:adamjones", "Create a new image stream oc create imagestream mysql", "Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0", "Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a TLS secret \"my-cert\" oc create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\" # Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\" oc create ingress catch-all --class=otheringress --rule=\"/path=svc:port\" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" --annotation ingress.annotation1=foo --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default --rule=\"foo.com/=svc:port\" --rule=\"foo.com/admin/=svcadmin:portadmin\" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default --rule=\"foo.com/path*=svc:8080\" --rule=\"bar.com/admin*=svc2:http\" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default --rule=\"foo.com/=svc:https,tls\" --rule=\"foo.com/path/subpath*=othersvc:8080\" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default --rule=\"foo.com/*=svc:8080,tls=secret1\" # Create an ingress with a default backend oc create ingress ingdefault --class=default --default-backend=defaultsvc:http --rule=\"foo.com/*=svc:8080,tls=secret1\"", "Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named \"a-cronjob\" oc create job test-job --from=cronjob/a-cronjob", "Create a new namespace named my-namespace oc create namespace my-namespace", "Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50%", "Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" # Create a priority class named default-priority that is considered as the global default priority oc 
create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort", "Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named \"pod-reader\" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named \"foo\" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named \"foo\" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status", "Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1 # Create a role binding for serviceaccount monitoring:sa-dev using the admin role oc create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev", "Create an edge route named \"my-route\" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets", "Create a passthrough route named \"my-route\" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. 
If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com", "Create a route named \"my-route\" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend", "If you do not already have a .dockercfg file, create a dockercfg secret directly oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json", "Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key", "Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip=\"None\"", "Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com", "Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080", "Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080", "Create a new service account named my-service-account oc create serviceaccount my-service-account", "Request a token to authenticate to the kube-apiserver as the service account \"myapp\" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific UID oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc", "Create a user with the username \"ajones\" and the display name \"Adam Jones\" oc create user ajones --full-name=\"Adam Jones\"", "Map the identity \"acme_ldap:adamjones\" to the user \"ajones\" oc create useridentitymapping 
acme_ldap:adamjones ajones", "Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Debug a Windows node # Note: the chosen image must match the Windows Server version (2019, 2022) of the node oc debug node/win-worker-1 --image=mcr.microsoft.com/powershell:lts-nanoserver-ltsc2022 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns", "Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names \"baz\" and \"foo\" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all", "Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in \"pod.json\" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe pods -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend", "Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f -", "Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR=\"nano\" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the 'status' subresource for the 'mydeployment' deployment oc edit deployment mydeployment --subresource='status'", "List recent events in the default namespace oc events # List recent events in all namespaces oc events --all-namespaces # List recent events for the specified pod, then wait for more events and list them as they arrive oc events --for pod/web-pod-13je7 --watch # List recent events in YAML format oc events -oyaml # List recent only events of type 'Warning' or 'Normal' oc events --types=Warning,Normal", "Get output from running the 'date' command from pod mypod, using the first container by default oc 
exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date", "Get the documentation of the resource and its fields oc explain pods # Get all the fields in the resource oc explain pods --recursive # Get the explanation for deployment in supported api versions oc explain deployments --api-version=apps/v1 # Get the documentation of a specific field of a resource oc explain pods.spec.containers # Get the documentation of resources in different format oc explain deployment --output=plaintext-openapiv2", "Create a route based on service nginx. The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx", "Extract the secret \"test\" to the current directory oc extract secret/test # Extract the config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map \"nginx\" to STDOUT oc extract configmap/nginx --to=- # Extract only the key \"nginx.conf\" from config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf", "List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in \"pod.yaml\" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. 
dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List the 'status' subresource for a single pod oc get pod web-pod-13je7 --subresource status", "Starts an auth code flow to the issuer URL with the client ID and the given extra scopes oc get-token --client-id=client-id --issuer-url=test.issuer.url --extra-scopes=email,profile # Starts an auth code flow to the issuer URL with a different callback address oc get-token --client-id=client-id --issuer-url=test.issuer.url --callback-address=127.0.0.1:8343", "Idle the scalable controllers associated with the services listed in to-idle.txt USD oc idle --resource-names-file to-idle.txt", "Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{\"Entrypoint\":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in USD(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in USD(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk (USD(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: The first image in the manifest list that matches the filter will be returned when --keep-manifest-list is not specified oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests when keep-manifest-list is specified oc image append --from docker.io/library/busybox:latest --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests that is specified by the filter, while preserving the manifestlist oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz", "Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract; pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # 
Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory (USD(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty (USD(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:]", "Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64", "Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only the filtered 
manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=os/arch # Copy all os/arch manifests of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=.* # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that the target registry may reject a manifest list if the platform specific images do not all exist # You must use a registry with sparse registry support enabled oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=linux/386 --keep-manifest-list=true", "Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Update imported data for a tag that points to a manifest list to include the full manifest list oc import-image mystream --import-mode=PreserveOriginal # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm", "Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6", "Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in \"pod.json\" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar-", "Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass # Log in to the given server through a browser oc login localhost:8443 --web --callback-port 8280 # Log in to the external OIDC issuer through Auth Code + PKCE by starting a local server listening on port 8080 oc login localhost:8443 --exec-plugin=oc-oidc --client-id=client-id --extra-scopes=email,profile --callback-port=8080", "Log out oc logout", "Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest 
deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container", "List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Use an image with the full manifest list to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match \"ruby\" oc new-app --search ruby # Search for \"ruby\", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for \"ruby\" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml", "Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . 
--image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D USD'FROM centos:7\\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config using an image with the full manifest list to create an app and override application artifacts' names oc new-build --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp", "Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name=\"Web Team Development\" --description=\"Development project for the web team.\"", "Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh", "Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p USD'spec:\\n unschedulable: true' # Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch oc patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]' # Update a deployment's replicas through the 'scale' subresource using a merge patch oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":2}}'", "List all available plugins oc plugin list", "Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z 
serviceaccount1", "Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml", "Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml", "Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000", "Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f -", "Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project", "List all projects oc projects", "To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static 
content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api", "Log in to the integrated registry oc registry login # Log in to different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS", "Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*USD/\\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json", "Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json", "Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx", "View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3", "Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json", "Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx", "Restart all deployments in test-namespace namespace oc rollout restart deployment -n test-namespace # Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx", "Resume an already paused deployment oc rollout resume dc/nginx", "Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend", "Watch the status of the latest rollout oc rollout status dc/nginx", "Roll back to the previous deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. 
The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3", "Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/scheduled", "Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir", "Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container oc run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\" # Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container oc run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... 
<argN>", "Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in \"foo.yaml\" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/example1 rc/example2 rc/example3 # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web", "Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount", "Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name", "Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script=\"/var/lib/test-image.sh param1 param2 && /var/lib/done.sh\"", "Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret", "Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir", "Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh", "Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers=\"c1\" ENV- # Remove the environment variable ENV from 
a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server oc set env | grep RAILS_ | oc env -e - dc/myapp", "Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in YAML format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml", "Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all", "Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30", "Set a deployments nginx container CPU limits to \"200m and memory to 512Mi\" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10%% relative to a oc set route-backends web --adjust b=+10%% # Set traffic percentage going to b to 10%% of the traffic going to a oc set route-backends web --adjust b=10%% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the 
weight to all backends to zero oc set route-backends web --zero", "Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f -", "Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml", "Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml", "Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main", "List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (PVC) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount \"v1\" from container \"c1\" # (and by removing the volume \"v1\" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) 
oc set volume dc/myapp --add -m /data --source=<json-string>", "Starts build from build config \"hello-world\" oc start-build hello-world # Starts build from a previous build \"hello-world-1\" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config \"hello-world\" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config \"hello-world\" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait", "See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest", "Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Tag an external container image and include the full manifest list oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d", "Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context in JSON format oc version --output json # Print the OpenShift client version information for the current context oc version --client", "Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity) oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod \"busybox1\" to contain the status phase to be \"Running\" oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for pod \"busybox1\" to be Ready oc wait --for='jsonpath={.status.conditions[?(@.type==\"Ready\")].status}=True' pod/busybox1 # Wait for the service \"loadbalancer\" to have ingress. 
oc wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer # Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s", "Display the currently authenticated user oc whoami", "Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all", "Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageDigestMirrorSet.yaml # Edit the mirroring mappings and mirror with \"oc image mirror\" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageDigestMirrorSets generated by oc adm catalog mirror oc delete imagedigestmirrorset -l operators.openshift.org/catalog=true", "Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp", "Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp", "Copy a new bootstrap kubeconfig file to node-0 oc adm copy-to-node --copy=new-bootstrap-kubeconfig=/etc/kubernetes/kubeconfig node/node-0", "Mark node \"foo\" as unschedulable oc adm cordon foo", "Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml", "Output a template for the error page to stdout oc adm create-error-template", "Output a template for the login page to stdout oc adm create-login-template", "Output a template for the provider selection page to stdout oc adm create-provider-selection-template", "Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900", "Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2", "Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name", "Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm groups prune --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm groups prune --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a 
list of specific groups specified in a list oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm", "Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2", "Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in an allowlist file with an LDAP server oc adm groups sync --whitelist=/path/to/allowlist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm", "Collect debugging data for the \"openshift-apiserver\" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the \"openshift-apiserver\" and \"kube-apiserver\" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions", "Update the imagecontentsourcepolicy.yaml file to a new imagedigestmirrorset file under the mydir directory oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir", "Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm", "Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod directory oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh", "Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east'", "Create the ISO image and download it in the current folder oc adm node-image create # Use a different assets folder oc adm node-image create --dir=/tmp/assets # Specify a custom image name oc adm node-image create -o=my-node.iso # Create an ISO to add a single node without using the configuration file oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb # Create an ISO to add a single node with a root device hint and without # using the configuration file oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb --root-device-hint=deviceName:/dev/sda", "Monitor a single node being added to a cluster oc adm node-image monitor --ip-addresses 192.168.111.83 # Monitor multiple nodes being added to a cluster by 
separating each IP address with a comma oc adm node-image monitor --ip-addresses 192.168.111.83,192.168.111.84", "Show kubelet logs from all control plane nodes oc adm node-logs --role master -u kubelet # See what logs are available in control plane nodes in /var/log oc adm node-logs --role master --path=/ # Display cron log file from all control plane nodes oc adm node-logs --role master --path=cron", "Watch platform certificates oc adm ocp-certificates monitor-certificates", "Regenerate a leaf certificate contained in a particular secret oc adm ocp-certificates regenerate-leaf -n openshift-config-managed secret/kube-controller-manager-client-cert-key", "Regenerate the MCO certs without modifying user-data secrets oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server", "Regenerate the signing certificate contained in a particular secret oc adm ocp-certificates regenerate-top-level -n openshift-kube-apiserver-operator secret/loadbalancer-serving-signer-key", "Remove a trust bundled contained in a particular config map oc adm ocp-certificates remove-old-trust -n openshift-config-managed configmaps/kube-apiserver-aggregator-client-ca --created-before 2023-06-05T14:44:06Z # Remove only CA certificates created before a certain date from all trust bundles oc adm ocp-certificates remove-old-trust configmaps -A --all --created-before 2023-06-05T14:44:06Z", "Regenerate the MCO certs without modifying user-data secrets oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server", "Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret'", "Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret'", "Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share'", "Add the 'cluster-admin' cluster role to the 'cluster-admins' group oc adm policy add-cluster-role-to-group cluster-admin cluster-admins", "Add the 'system:build-strategy-docker' cluster role to the 'devuser' user oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser", "Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1", "Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2", "Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1", "Remove the 'cluster-admin' cluster role from the 'cluster-admins' 
group oc adm policy remove-cluster-role-from-group cluster-admin cluster-admins", "Remove the 'system:build-strategy-docker' cluster role from the 'devuser' user oc adm policy remove-cluster-role-from-user system:build-strategy-docker devuser", "Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml", "Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml", "Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm", "Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments --keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm", "Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm prune groups --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm prune groups --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm", "See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure HTTP protocol with the particular registry host name oc adm prune images 
--registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm", "See what the prune command would delete if run with no options oc adm prune renderedmachineconfigs # To actually perform the prune operation, the confirm flag must be appended oc adm prune renderedmachineconfigs --confirm # See what the prune command would delete if run on the worker MachineConfigPool oc adm prune renderedmachineconfigs --pool-name=worker # Prunes 10 oldest rendered MachineConfigs in the cluster oc adm prune renderedmachineconfigs --count=10 --confirm # Prunes 10 oldest rendered MachineConfigs in the cluster for the worker MachineConfigPool oc adm prune renderedmachineconfigs --count=10 --pool-name=worker --confirm", "List all rendered MachineConfigs for the worker MachineConfigPool in the cluster oc adm prune renderedmachineconfigs list --pool-name=worker # List all rendered MachineConfigs in use by the cluster's MachineConfigPools oc adm prune renderedmachineconfigs list --in-use", "Reboot all MachineConfigPools oc adm reboot-machine-config-pool mcp/worker mcp/master # Reboot all MachineConfigPools that inherit from worker. This include all custom MachineConfigPools and infra. oc adm reboot-machine-config-pool mcp/worker # Reboot masters oc adm reboot-machine-config-pool mcp/master", "Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws # Use git to check out the source code for the current cluster release to DIR from linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x", "Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.11.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.11.0 4.11.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs # Show information about linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x", "Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.11.0 --to myregistry.local/openshift/release --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.11.0 --to file://openshift/release --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.11.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release --release-image-signature-to-dir /tmp/releases # Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 --to=registry.example.com/your/repository --apply-release-image-signature", 
"Create a release from the latest origin images and push to a DockerHub repository oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a previous release oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 --previous 4.11.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11", "Restart all the nodes, 10% at a time oc adm restart-kubelet nodes --all --directive=RemoveKubeletKubeconfig # Restart all the nodes, 20 nodes at a time oc adm restart-kubelet nodes --all --parallelism=20 --directive=RemoveKubeletKubeconfig # Restart all the nodes, 15% at a time oc adm restart-kubelet nodes --all --parallelism=15% --directive=RemoveKubeletKubeconfig # Restart all the masters at the same time oc adm restart-kubelet nodes -l node-role.kubernetes.io/master --parallelism=100% --directive=RemoveKubeletKubeconfig", "Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label myLabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule", "Show usage statistics for images oc adm top images", "Show usage statistics for image streams oc adm top imagestreams", "Show metrics for all nodes oc adm top node # Show metrics for a given node oc adm top node NODE_NAME", "Show metrics for all pods in the default namespace oc adm top pod # Show metrics for all pods in the given namespace oc adm top pod --namespace=NAMESPACE # Show metrics for a given pod and its containers oc adm top pod POD_NAME --containers # Show metrics for the pods defined by label name=myLabel oc adm top pod -l name=myLabel", "Mark node \"foo\" as schedulable oc adm uncordon foo", "View the update status and available cluster updates oc adm upgrade # Update to the latest version oc adm upgrade --to-latest=true", "Verify the image signature and identity using the local GPG keychain oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 # Verify the image signature and identity using the local GPG keychain and save the status oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --save # Verify the image signature and identity via exposed registry route oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --registry-url=docker-registry.foo.com # Remove all signature verifications from the 
image oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all", "Wait for all nodes to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/worker mcp/master' oc adm wait-for-node-reboot nodes --all # Wait for masters to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/master' oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master # Wait for masters to complete a specific reboot oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master --reboot-number=4", "Wait for all cluster operators to become stable oc adm wait-for-stable-cluster # Consider operators to be stable if they report as such for 5 minutes straight oc adm wait-for-stable-cluster --minimum-stable-period 5m" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/cli_tools/openshift-cli-oc
Configuring a Red Hat High Availability cluster on Red Hat OpenStack Platform
Configuring a Red Hat High Availability cluster on Red Hat OpenStack Platform Red Hat Enterprise Linux 8 Installing and configuring HA clusters and cluster resources on RHOSP instances Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_a_red_hat_high_availability_cluster_on_red_hat_openstack_platform/index
probe::tcp.setsockopt.return
probe::tcp.setsockopt.return Name probe::tcp.setsockopt.return - Return from setsockopt Synopsis tcp.setsockopt.return Values ret Error code (0: no error) name Name of this probe Context The process which calls setsockopt
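As a minimal sketch of how this probe point can be used (the script below is illustrative and not part of the tapset reference), a short SystemTap script reports failed setsockopt calls using the ret and name variables listed above together with the standard execname() helper:

probe tcp.setsockopt.return {
    # ret and name are supplied by the tapset; execname() returns the calling process name
    if (ret != 0)
        printf("%s: setsockopt failed with ret=%d in process %s\n", name, ret, execname())
}

Save the script to a file, for example tcp_setsockopt_return.stp, and run it with stap tcp_setsockopt_return.stp.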
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tcp-setsockopt-return
Chapter 2. Post installation configuration
Chapter 2. Post installation configuration After completing your installation, you need to connect Automation Decisions (Event-Driven Ansible controller) to Automation Execution (automation controller) to run rulebook activations successfully. To do this for an RPM-based install, follow the steps provided in Setting up a Red Hat Ansible Automation Platform credential .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/using_event-driven_ansible_2.5_with_ansible_automation_platform_2.4/assembly-eda-controller-post-install
Chapter 3. Red Hat Quay Robot Account overview
Chapter 3. Red Hat Quay Robot Account overview Robot Accounts are used to set up automated access to the repositories in your Red Hat Quay registry. They are similar to OpenShift Container Platform service accounts. Setting up a Robot Account results in the following: Credentials are generated that are associated with the Robot Account. Repositories and images that the Robot Account can push and pull images from are identified. Generated credentials can be copied and pasted to use with different container clients, such as Docker, Podman, Kubernetes, Mesos, and so on, to access each defined repository. Robot Accounts can help secure your Red Hat Quay registry by offering various security advantages, such as the following: Specifying repository access. Granular permissions, such as Read (pull) or Write (push) access. They can also be equipped with Admin permissions if warranted. Designed for CI/CD pipelines, system integrations, and other automation tasks, helping avoid credential exposure in scripts, pipelines, or environment variables. Robot Accounts use tokens instead of passwords, which provides the ability for an administrator to revoke the token in the event that it is compromised. Each Robot Account is limited to a single user namespace or Organization. For example, the Robot Account could provide access to all repositories for the user quayadmin . However, it cannot provide access to repositories that are not in the user's list of repositories. Robot Accounts can be created using the Red Hat Quay UI, or through the CLI using the Red Hat Quay API. After creation, Red Hat Quay administrators can leverage more advanced features with Robot Accounts, such as keyless authentication. 3.1. Creating a robot account by using the UI Use the following procedure to create a robot account using the v2 UI. Procedure On the v2 UI, click Organizations . Click the name of the organization that you will create the robot account for, for example, test-org . Click the Robot accounts tab Create robot account . In the Provide a name for your robot account box, enter a name, for example, robot1 . The name of your Robot Account becomes a combination of your username plus the name of the robot, for example, quayadmin+robot1 . Optional. The following options are available if desired: Add the robot account to a team. Add the robot account to a repository. Adjust the robot account's permissions. On the Review and finish page, review the information you have provided, then click Review and finish . The following alert appears: Successfully created robot account with robot name: <organization_name> + <robot_name> . Alternatively, if you tried to create a robot account with the same name as another robot account, you might receive the following error message: Error creating robot account . Optional. You can click Expand or Collapse to reveal descriptive information about the robot account. Optional. You can change permissions of the robot account by clicking the kebab menu Set repository permissions . The following message appears: Successfully updated repository permission . Optional. You can click the name of your robot account to obtain the following information: Robot Account : Select this to obtain the robot account token. You can regenerate the token by clicking Regenerate token now . Kubernetes Secret : Select this to download credentials in the form of a Kubernetes pull secret YAML file (a usage sketch appears at the end of this chapter). Podman : Select this to copy a full podman login command line that includes the credentials.
Docker Configuration : Select this to copy a full docker login command line that includes the credentials. 3.2. Creating a robot account by using the Red Hat Quay API Use the following procedure to create a robot account using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to create a new robot account for an organization using the PUT /api/v1/organization/{orgname}/robots/{robot_shortname} endpoint: USD curl -X PUT -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>" Example output {"name": "orgname+robot-name", "created": "Fri, 10 May 2024 15:11:00 -0000", "last_accessed": null, "description": "", "token": "<example_secret>", "unstructured_metadata": null} Enter the following command to create a new robot account for the current user with the PUT /api/v1/user/robots/{robot_shortname} endpoint: USD curl -X PUT -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/user/robots/<robot_name>" Example output {"name": "quayadmin+robot-name", "created": "Fri, 10 May 2024 15:24:57 -0000", "last_accessed": null, "description": "", "token": "<example_secret>", "unstructured_metadata": null} 3.3. Bulk managing robot account repository access Use the following procedure to manage, in bulk, robot account repository access by using the Red Hat Quay v2 UI. Prerequisites You have created a robot account. You have created multiple repositories under a single organization. Procedure On the Red Hat Quay v2 UI landing page, click Organizations in the navigation pane. On the Organizations page, select the name of the organization that has multiple repositories. The number of repositories under a single organization can be found under the Repo Count column. On your organization's page, click Robot accounts . For the robot account that will be added to multiple repositories, click the kebab icon Set repository permissions . On the Set repository permissions page, check the boxes of the repositories that the robot account will be added to. Set the permissions for the robot account, for example, None , Read , Write , Admin . Click save . An alert that says Success alert: Successfully updated repository permission appears on the Set repository permissions page, confirming the changes. Return to the Organizations Robot accounts page. Now, the Repositories column of your robot account shows the number of repositories that the robot account has been added to. 3.4. Disabling robot accounts by using the UI Red Hat Quay administrators can manage robot accounts by disallowing users to create new robot accounts. Important Robot accounts are mandatory for repository mirroring. Setting the ROBOTS_DISALLOW configuration field to true breaks mirroring configurations. Users mirroring repositories should not set ROBOTS_DISALLOW to true in their config.yaml file. This is a known issue and will be fixed in a future release of Red Hat Quay. Use the following procedure to disable robot account creation. Prerequisites You have created multiple robot accounts. Procedure Update your config.yaml file to add the ROBOTS_DISALLOW variable, for example: ROBOTS_DISALLOW: true Restart your Red Hat Quay deployment. Verification: Creating a new robot account Navigate to your Red Hat Quay repository. Click the name of a repository. In the navigation pane, click Robot Accounts .
Click Create Robot Account . Enter a name for the robot account, for example, <organization-name/username>+<robot-name> . Click Create robot account to confirm creation. The following message appears: Cannot create robot account. Robot accounts have been disabled. Please contact your administrator. Verification: Logging into a robot account On the command-line interface (CLI), attempt to log in as one of the robot accounts by entering the following command: USD podman login -u="<organization-name/username>+<robot-name>" -p="KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678" <quay-server.example.com> The following error message is returned: Error: logging into "<quay-server.example.com>": invalid username/password You can pass in the log-level=debug flag to confirm that robot accounts have been deactivated: USD podman login -u="<organization-name/username>+<robot-name>" -p="KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678" --log-level=debug <quay-server.example.com> ... DEBU[0000] error logging into "quay-server.example.com": unable to retrieve auth token: invalid username/password: unauthorized: Robot accounts have been disabled. Please contact your administrator. 3.5. Regenerating a robot account token by using the Red Hat Quay API Use the following procedure to regenerate a robot account token using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to regenerate a robot account token for an organization using the POST /api/v1/organization/{orgname}/robots/{robot_shortname}/regenerate endpoint: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate" Example output {"name": "test-org+test", "created": "Fri, 10 May 2024 17:46:02 -0000", "last_accessed": null, "description": "", "token": "<example_secret>"} Enter the following command to regenerate a robot account token for the current user with the POST /api/v1/user/robots/{robot_shortname}/regenerate endpoint: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate" Example output {"name": "quayadmin+test", "created": "Fri, 10 May 2024 14:12:11 -0000", "last_accessed": null, "description": "", "token": "<example_secret>"} 3.6. Deleting a robot account by using the UI Use the following procedure to delete a robot account using the Red Hat Quay UI. Procedure Log into your Red Hat Quay registry: Click the name of the Organization that has the robot account. Click Robot accounts . Check the box of the robot account to be deleted. Click the kebab menu. Click Delete . Type confirm into the textbox, then click Delete . 3.7. Deleting a robot account by using the Red Hat Quay API Use the following procedure to delete a robot account using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to delete a robot account for an organization using the DELETE /api/v1/organization/{orgname}/robots/{robot_shortname} endpoint: curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>" The CLI does not return information when deleting a robot account with the API. 
To confirm deletion, you can check the Red Hat Quay UI, or you can enter the following GET /api/v1/organization/{orgname}/robots command to see if details are returned for the robot account: USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots" Example output {"robots": []} Enter the following command to delete a robot account for the current user with the DELETE /api/v1/user/robots/{robot_shortname} endpoint: USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>" The CLI does not return information when deleting a robot account for the current user with the API. To confirm deletion, you can check the Red Hat Quay UI, or you can enter the following GET /api/v1/user/robots/{robot_shortname} command to see if details are returned for the robot account: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>" Example output {"message":"Could not find robot with specified username"}
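As a usage sketch for the Kubernetes pull secret mentioned in section 3.1 (all names below are placeholders rather than values generated by Red Hat Quay), the downloaded secret is typically created in the target namespace and then referenced from a pod spec so that images can be pulled with the robot account credentials:

oc create -f <organization_name>-<robot_name>-secret.yml --namespace=<namespace>

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: <quay-server.example.com>/<organization_name>/<repository_name>:latest
  imagePullSecrets:
    - name: <organization_name>-<robot_name>-pull-secret # must match metadata.name in the downloaded secret YAML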
[ "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>\"", "{\"name\": \"orgname+robot-name\", \"created\": \"Fri, 10 May 2024 15:11:00 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\", \"unstructured_metadata\": null}", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/user/robots/<robot_name>\"", "{\"name\": \"quayadmin+robot-name\", \"created\": \"Fri, 10 May 2024 15:24:57 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\", \"unstructured_metadata\": null}", "ROBOTS_DISALLOW: true", "podman login -u=\"<organization-name/username>+<robot-name>\" -p=\"KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678\" <quay-server.example.com>", "Error: logging into \"<quay-server.example.com>\": invalid username/password", "podman login -u=\"<organization-name/username>+<robot-name>\" -p=\"KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678\" --log-level=debug <quay-server.example.com>", "DEBU[0000] error logging into \"quay-server.example.com\": unable to retrieve auth token: invalid username/password: unauthorized: Robot accounts have been disabled. Please contact your administrator.", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate\"", "{\"name\": \"test-org+test\", \"created\": \"Fri, 10 May 2024 17:46:02 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate\"", "{\"name\": \"quayadmin+test\", \"created\": \"Fri, 10 May 2024 14:12:11 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots\"", "{\"robots\": []}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"", "{\"message\":\"Could not find robot with specified username\"}" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/managing_access_and_permissions/allow-robot-access-user-repo
3.4. Role Mapping LoginModule
3.4. Role Mapping LoginModule If the LoginModule you are using exposes role names that you wish to map to more application-specific names, then you can use the RoleMappingLoginModule. This module uses a properties file to inject additional role names into authenticated subjects and, optionally, to replace the existing roles. This is what the security domain should look like:
[ "<subsystem xmlns=\"urn:jboss:domain:security:1.2\"> <security-domains> <security-domain name=\"jdv_security_domain\"> <authentication> <login-module code=\"org.jboss.security.auth.spi.RoleMappingLoginModule\" flag=\"optional\"> <module-option name=\"rolesProperties\" value=\"USD{jboss.server.base.dir}/configuration/roles.properties\" /> <module-option name=\"replaceRole\" value=\"false\" /> </login-module> </authentication> </security-domain> </security-domains> </subsystem>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/role_mapping_loginmodule
Chapter 4. Scaling storage of bare metal OpenShift Data Foundation cluster
Chapter 4. Scaling storage of bare metal OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on your bare metal cluster, you can increase the capacity by adding three disks at a time. Three disks are needed because OpenShift Data Foundation uses a replica count of 3 to maintain high availability. Therefore, the amount of storage consumed is three times the usable space. Note Usable space may vary when encryption is enabled or replica 2 pools are being used. 4.1. Scaling up a cluster created using local storage devices In order to scale up an OpenShift Data Foundation cluster which was created using local storage devices, a new disk needs to be added to the storage node. It is recommended that the new disks be the same size as the disks used earlier during the deployment, because OpenShift Data Foundation does not support heterogeneous disks/OSDs. For deployments having three failure domains, you can scale up the storage by adding disks in multiples of three, with the same number of disks coming from nodes in each of the failure domains. For example, if you scale by adding six disks, two disks are taken from nodes in each of the three failure domains. If the number of disks is not a multiple of three, only the largest multiple of three is consumed, and the remaining disks are left unused. For deployments having fewer than three failure domains, there is flexibility in the number of disks you can add. In this case, you can add any number of disks. To check whether flexible scaling is enabled, refer to the Knowledgebase article Verify if flexible scaling is enabled . Note Flexible scaling features get enabled at the time of deployment and cannot be enabled or disabled later on. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Disks to be used for scaling are already attached to the storage node. LocalVolumeDiscovery and LocalVolumeSet objects are already created. Procedure To add capacity, you can either use a storage class that you provisioned during the deployment or any other storage class that matches the filter. In the OpenShift Web Console, click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action menu (...) next to the visible list to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class for which you added disks or the new storage class depending on your requirement. The Available Capacity displayed is based on the local disks available in the storage class. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list.
Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 4.2. Scaling out storage capacity on a bare metal cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with the required storage and enough hardware resources in terms of CPU and RAM. There is no limit on the number of nodes which can be added. However, from the technical support perspective, 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps: Adding a new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 4.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in multiples of three, each of them in different failure domains. While adding nodes in multiples of three is recommended, you still have the flexibility to add one node at a time in a flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during the OpenShift Data Foundation deployment. 4.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the number of nodes, and click Save . Click Compute Nodes and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In the case of a bare metal installer-provisioned infrastructure deployment, expand the cluster first using the instructions that can be found here . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 4.2.1.2.
Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in multiples of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3, you have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during the initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector, as shown in the sketch at the end of this chapter. Figure 4.1. YAML showing the addition of new hostnames Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 4.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage by adding capacity .
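As a sketch of the node selector edit referred to in Figure 4.1 (the hostnames below are placeholders), the hostname of each new node is appended to the values list in both the LocalVolumeDiscovery and LocalVolumeSet resources:

spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - existing-node-1
              - existing-node-2
              - new-node-name # hostname of the newly added node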
[ "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/scaling_storage/scaling_storage_of_bare_metal_openshift_data_foundation_cluster
8.246. yaboot
8.246. yaboot 8.246.1. RHBA-2013:1561 - yaboot bug fix and enhancement update Updated yaboot packages that fix two bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The yaboot packages provide a boot loader for Open Firmware-based PowerPC systems. Yaboot can be used to boot IBM eServer System p machines. Bug Fixes BZ# 903855 Previously, the client was overwriting the gateway IP address for the Trivial File Transfer Protocol (TFTP) file transfer. When installing through the network using VLAN tags, the boot failed when the server was in a different IP subnetwork. This update ensures that the rest of the parameter string can be parsed correctly, and failures no longer occur in the aforementioned scenario. BZ# 968046 As there was not enough room between the first allocation and the bottom of the firmware, user attempts to load a ramdisk failed when the firmware was at 32MB (0200000). This update adds to yaboot the ability to determine how big the initrd memory will be. As a result, yaboot can accurately place a buffer in the memory, and ramdisk load failures no longer occur. Enhancement BZ# 947101 This update adds GUID Partition Table (GPT) support to yaboot, because previously yaboot supported only the DOS partition format, which has a limit of 2TB for 512B sectors. So even with larger disks available, this limit forces users to format all devices to at most 2TB. With GPT support in yaboot, users can now use larger disks. Users of yaboot are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
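As a brief, hedged illustration of what the GPT enhancement enables, the partition table type of a disk can be checked with parted before handing the disk to yaboot; /dev/sdb is a hypothetical device name.

parted -s /dev/sdb print | grep 'Partition Table'

A disk labeled gpt is not subject to the 2TB addressing limit that applies to the DOS (msdos) label with 512-byte sectors, so it can be partitioned at its full size.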
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/yaboot
Chapter 2. APIService [apiregistration.k8s.io/v1]
Chapter 2. APIService [apiregistration.k8s.io/v1] Description APIService represents a server for a particular GroupVersion. Name must be "version.group". Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object APIServiceSpec contains information for locating and communicating with a server. Only https is supported, though you are able to disable certificate verification. status object APIServiceStatus contains derived information about an API server 2.1.1. .spec Description APIServiceSpec contains information for locating and communicating with a server. Only https is supported, though you are able to disable certificate verification. Type object Required groupPriorityMinimum versionPriority Property Type Description caBundle string CABundle is a PEM encoded CA bundle which will be used to validate an API server's serving certificate. If unspecified, system trust roots on the apiserver are used. group string Group is the API group name this server hosts groupPriorityMinimum integer GroupPriorityMinimum is the priority this group should have at least. Higher priority means that the group is preferred by clients over lower priority ones. Note that other versions of this group might specify even higher GroupPriorityMinimum values such that the whole group gets a higher priority. The primary sort is based on GroupPriorityMinimum, ordered highest number to lowest (20 before 10). The secondary sort is based on the alphabetical comparison of the name of the object. (v1.bar before v1.foo) We'd recommend something like: *.k8s.io (except extensions) at 18000 and PaaSes (OpenShift, Deis) are recommended to be in the 2000s insecureSkipTLSVerify boolean InsecureSkipTLSVerify disables TLS certificate verification when communicating with this server. This is strongly discouraged. You should use the CABundle instead. service object ServiceReference holds a reference to Service.legacy.k8s.io version string Version is the API version this server hosts. For example, "v1" versionPriority integer VersionPriority controls the ordering of this API version inside of its group. Must be greater than zero. The primary sort is based on VersionPriority, ordered highest to lowest (20 before 10). Since it's inside of a group, the number can be small, probably in the 10s. In case of equal version priorities, the version string will be used to compute the order inside a group. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). 
These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10. 2.1.2. .spec.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Property Type Description name string Name is the name of the service namespace string Namespace is the namespace of the service port integer If specified, the port on the service that is hosting the webhook. Defaults to 443 for backward compatibility. port should be a valid port number (1-65535, inclusive). 2.1.3. .status Description APIServiceStatus contains derived information about an API server Type object Property Type Description conditions array Current service state of apiService. conditions[] object APIServiceCondition describes the state of an APIService at a particular point 2.1.4. .status.conditions Description Current service state of apiService. Type array 2.1.5. .status.conditions[] Description APIServiceCondition describes the state of an APIService at a particular point Type object Required type status Property Type Description lastTransitionTime Time Last time the condition transitioned from one status to another. message string Human-readable message indicating details about last transition. reason string Unique, one-word, CamelCase reason for the condition's last transition. status string Status is the status of the condition. Can be True, False, Unknown. type string Type is the type of the condition. 2.2. API endpoints The following API endpoints are available: /apis/apiregistration.k8s.io/v1/apiservices DELETE : delete collection of APIService GET : list or watch objects of kind APIService POST : create an APIService /apis/apiregistration.k8s.io/v1/watch/apiservices GET : watch individual changes to a list of APIService. deprecated: use the 'watch' parameter with a list operation instead. /apis/apiregistration.k8s.io/v1/apiservices/{name} DELETE : delete an APIService GET : read the specified APIService PATCH : partially update the specified APIService PUT : replace the specified APIService /apis/apiregistration.k8s.io/v1/watch/apiservices/{name} GET : watch changes to an object of kind APIService. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apiregistration.k8s.io/v1/apiservices/{name}/status GET : read status of the specified APIService PATCH : partially update status of the specified APIService PUT : replace status of the specified APIService 2.2.1. /apis/apiregistration.k8s.io/v1/apiservices HTTP method DELETE Description delete collection of APIService Table 2.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind APIService Table 2.3. HTTP responses HTTP code Response body 200 - OK APIServiceList schema 401 - Unauthorized Empty HTTP method POST Description create an APIService Table 2.4.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body APIService schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 202 - Accepted APIService schema 401 - Unauthorized Empty 2.2.2. /apis/apiregistration.k8s.io/v1/watch/apiservices HTTP method GET Description watch individual changes to a list of APIService. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/apiregistration.k8s.io/v1/apiservices/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the APIService HTTP method DELETE Description delete an APIService Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified APIService Table 2.11. HTTP responses HTTP code Reponse body 200 - OK APIService schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified APIService Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified APIService Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body APIService schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty 2.2.4. /apis/apiregistration.k8s.io/v1/watch/apiservices/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the APIService HTTP method GET Description watch changes to an object of kind APIService. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.5. /apis/apiregistration.k8s.io/v1/apiservices/{name}/status Table 2.19. Global path parameters Parameter Type Description name string name of the APIService HTTP method GET Description read status of the specified APIService Table 2.20. HTTP responses HTTP code Reponse body 200 - OK APIService schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified APIService Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified APIService Table 2.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.24. Body parameters Parameter Type Description body APIService schema Table 2.25. HTTP responses HTTP code Reponse body 200 - OK APIService schema 201 - Created APIService schema 401 - Unauthorized Empty
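To tie together the spec fields described in section 2.1, the following is a minimal sketch of an APIService manifest. Everything in it is illustrative: the group apis.example.com, the version v1alpha1, the priority values, and the Service example-api in namespace example-ns are hypothetical, and either caBundle or insecureSkipTLSVerify could additionally be set as described above.

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.apis.example.com   # name must be "version.group"
spec:
  group: apis.example.com
  version: v1alpha1
  groupPriorityMinimum: 1000        # illustrative priority values
  versionPriority: 15
  service:
    name: example-api               # Service that serves this API group
    namespace: example-ns
    port: 443                       # default when omitted

Such a manifest could be applied and inspected with the usual commands, for example oc apply -f apiservice.yaml followed by oc get apiservice v1alpha1.apis.example.com.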
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/extension_apis/apiservice-apiregistration-k8s-io-v1
Chapter 15. Volume cloning
Chapter 15. Volume cloning A clone is a duplicate of an existing storage volume that is used as any standard volume. You create a clone of a volume to make a point in time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 15.1. Creating a clone Prerequisites Source PVC must be in Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) Clone PVC . Click on the PVC that you want to clone and click Actions Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Click Clone . You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC.
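For reference, the same clone can be expressed declaratively instead of through the web console. The following is a minimal sketch of a clone PVC that uses a dataSource reference; the names, namespace, and size are hypothetical, the storage class is assumed to be the usual OpenShift Data Foundation RBD class, and the requested size must equal the size of the source PVC because a clone cannot change size.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone
  namespace: my-project                           # same namespace as the source PVC
spec:
  storageClassName: ocs-storagecluster-ceph-rbd   # assumed RBD storage class
  dataSource:
    kind: PersistentVolumeClaim
    name: rbd-pvc                                 # the source PVC being cloned
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                               # must equal the size of the source PVC

Applying this manifest, for example with oc create -f, produces a clone equivalent to one created through the console, and it becomes usable once its status reaches Bound.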
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/volume-cloning_rhodf
Chapter 5. Time string utility function
Chapter 5. Time string utility function Utility function to turn seconds since the epoch (as returned by the timestamp function gettimeofday_s()) into a human readable date/time string.
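As a minimal usage sketch, the following one-liner passes the value returned by gettimeofday_s() to ctime() and prints the result:

stap -e 'probe begin { printf("%s\n", ctime(gettimeofday_s())); exit() }'

The output is a fixed-format string in the usual ctime(3) style, for example "Wed Jun 30 21:49:08 1993".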
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/ctime.stp
Chapter 15. Syncing LDAP groups
Chapter 15. Syncing LDAP groups As an administrator with the dedicated-admin role, you can use groups to manage users, change their permissions, and enhance collaboration. Your organization may have already created user groups and stored them in an LDAP server. OpenShift Dedicated can sync those LDAP records with internal OpenShift Dedicated records, enabling you to manage your groups in one place. OpenShift Dedicated currently supports group sync with LDAP servers using three common schemas for defining group membership: RFC 2307, Active Directory, and augmented Active Directory. For more information on configuring LDAP, see Configuring an LDAP identity provider . Note You must have dedicated-admin privileges to sync groups. 15.1. About configuring LDAP sync Before you can run LDAP sync, you need a sync configuration file. This file contains the following LDAP client configuration details: Configuration for connecting to your LDAP server. Sync configuration options that are dependent on the schema used in your LDAP server. An administrator-defined list of name mappings that maps OpenShift Dedicated group names to groups in your LDAP server. The format of the configuration file depends upon the schema you are using: RFC 2307, Active Directory, or augmented Active Directory. LDAP client configuration The LDAP client configuration section of the configuration defines the connections to your LDAP server. LDAP client configuration url: ldap://10.0.0.0:389 1 bindDN: cn=admin,dc=example,dc=com 2 bindPassword: <password> 3 insecure: false 4 ca: my-ldap-ca-bundle.crt 5 1 The connection protocol, IP address of the LDAP server hosting your database, and the port to connect to, formatted as scheme://host:port . 2 Optional distinguished name (DN) to use as the Bind DN. OpenShift Dedicated uses this if elevated privilege is required to retrieve entries for the sync operation. 3 Optional password to use to bind. OpenShift Dedicated uses this if elevated privilege is necessary to retrieve entries for the sync operation. This value may also be provided in an environment variable, external file, or encrypted file. 4 When false , secure LDAP ( ldaps:// ) URLs connect using TLS, and insecure LDAP ( ldap:// ) URLs are upgraded to TLS. When true , no TLS connection is made to the server and you cannot use ldaps:// URL schemes. 5 The certificate bundle to use for validating server certificates for the configured URL. If empty, OpenShift Dedicated uses system-trusted roots. This only applies if insecure is set to false . LDAP query definition Sync configurations consist of LDAP query definitions for the entries that are required for synchronization. The specific definition of an LDAP query depends on the schema used to store membership information in the LDAP server. LDAP query definition baseDN: ou=users,dc=example,dc=com 1 scope: sub 2 derefAliases: never 3 timeout: 0 4 filter: (objectClass=person) 5 pageSize: 0 6 1 The distinguished name (DN) of the branch of the directory where all searches will start from. It is required that you specify the top of your directory tree, but you can also specify a subtree in the directory. 2 The scope of the search. Valid values are base , one , or sub . If this is left undefined, then a scope of sub is assumed. Descriptions of the scope options can be found in the table below. 3 The behavior of the search with respect to aliases in the LDAP tree.
Valid values are never , search , base , or always . If this is left undefined, then the default is to always dereference aliases. Descriptions of the dereferencing behaviors can be found in the table below. 4 The time limit allowed for the search by the client, in seconds. A value of 0 imposes no client-side limit. 5 A valid LDAP search filter. If this is left undefined, then the default is (objectClass=*) . 6 The optional maximum size of response pages from the server, measured in LDAP entries. If set to 0 , no size restrictions will be made on pages of responses. Setting paging sizes is necessary when queries return more entries than the client or server allow by default. Table 15.1. LDAP search scope options LDAP search scope Description base Only consider the object specified by the base DN given for the query. one Consider all of the objects on the same level in the tree as the base DN for the query. sub Consider the entire subtree rooted at the base DN given for the query. Table 15.2. LDAP dereferencing behaviors Dereferencing behavior Description never Never dereference any aliases found in the LDAP tree. search Only dereference aliases found while searching. base Only dereference aliases while finding the base object. always Always dereference all aliases found in the LDAP tree. User-defined name mapping A user-defined name mapping explicitly maps the names of OpenShift Dedicated groups to unique identifiers that find groups on your LDAP server. The mapping uses normal YAML syntax. A user-defined mapping can contain an entry for every group in your LDAP server or only a subset of those groups. If there are groups on the LDAP server that do not have a user-defined name mapping, the default behavior during sync is to use the attribute specified as the OpenShift Dedicated group's name. User-defined name mapping groupUIDNameMapping: "cn=group1,ou=groups,dc=example,dc=com": firstgroup "cn=group2,ou=groups,dc=example,dc=com": secondgroup "cn=group3,ou=groups,dc=example,dc=com": thirdgroup 15.1.1. About the RFC 2307 configuration file The RFC 2307 schema requires you to provide an LDAP query definition for both user and group entries, as well as the attributes with which to represent them in the internal OpenShift Dedicated records. For clarity, the group you create in OpenShift Dedicated should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Dedicated group by their e-mail, and use the name of the group as the common name. The following configuration file creates these relationships: Note If using user-defined name mappings, your configuration file will differ. LDAP sync configuration that uses RFC 2307 schema: rfc2307_config.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 1 insecure: false 2 bindDN: cn=admin,dc=example,dc=com bindPassword: file: "/etc/secrets/bindPassword" rfc2307: groupsQuery: baseDN: "ou=groups,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 3 groupNameAttributes: [ cn ] 4 groupMembershipAttributes: [ member ] 5 usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 6 userNameAttributes: [ mail ] 7 tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false 1 The IP address and host of the LDAP server where this group's record is stored. 
2 When false , secure LDAP ( ldaps:// ) URLs connect using TLS, and insecure LDAP ( ldap:// ) URLs are upgraded to TLS. When true , no TLS connection is made to the server and you cannot use ldaps:// URL schemes. 3 The attribute that uniquely identifies a group on the LDAP server. You cannot specify groupsQuery filters when using DN for groupUIDAttribute . For fine-grained filtering, use the whitelist / blacklist method. 4 The attribute to use as the name of the group. 5 The attribute on the group that stores the membership information. 6 The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. 7 The attribute to use as the name of the user in the OpenShift Dedicated group record. 15.1.2. About the Active Directory configuration file The Active Directory schema requires you to provide an LDAP query definition for user entries, as well as the attributes to represent them with in the internal OpenShift Dedicated group records. For clarity, the group you create in OpenShift Dedicated should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Dedicated group by their e-mail, but define the name of the group by the name of the group on the LDAP server. The following configuration file creates these relationships: LDAP sync configuration that uses Active Directory schema: active_directory_config.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 activeDirectory: usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 1 groupMembershipAttributes: [ memberOf ] 2 1 The attribute to use as the name of the user in the OpenShift Dedicated group record. 2 The attribute on the user that stores the membership information. 15.1.3. About the augmented Active Directory configuration file The augmented Active Directory schema requires you to provide an LDAP query definition for both user entries and group entries, as well as the attributes with which to represent them in the internal OpenShift Dedicated group records. For clarity, the group you create in OpenShift Dedicated should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Dedicated group by their e-mail, and use the name of the group as the common name. The following configuration file creates these relationships. LDAP sync configuration that uses augmented Active Directory schema: augmented_active_directory_config.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: baseDN: "ou=groups,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 1 groupNameAttributes: [ cn ] 2 usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 3 groupMembershipAttributes: [ memberOf ] 4 1 The attribute that uniquely identifies a group on the LDAP server. You cannot specify groupsQuery filters when using DN for groupUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. 2 The attribute to use as the name of the group. 
3 The attribute to use as the name of the user in the OpenShift Dedicated group record. 4 The attribute on the user that stores the membership information. 15.2. Running LDAP sync Once you have created a sync configuration file, you can begin to sync. OpenShift Dedicated allows administrators to perform a number of different sync types with the same server. 15.2.1. Syncing the LDAP server with OpenShift Dedicated You can sync all groups from the LDAP server with OpenShift Dedicated. Prerequisites Create a sync configuration file. You have access to the cluster as a user with the dedicated-admin role. Procedure To sync all groups from the LDAP server with OpenShift Dedicated: USD oc adm groups sync --sync-config=config.yaml --confirm Note By default, all group synchronization operations are dry-run, so you must set the --confirm flag on the oc adm groups sync command to make changes to OpenShift Dedicated group records. 15.2.2. Syncing OpenShift Dedicated groups with the LDAP server You can sync all groups already in OpenShift Dedicated that correspond to groups in the LDAP server specified in the configuration file. Prerequisites Create a sync configuration file. You have access to the cluster as a user with the dedicated-admin role. Procedure To sync OpenShift Dedicated groups with the LDAP server: USD oc adm groups sync --type=openshift --sync-config=config.yaml --confirm Note By default, all group synchronization operations are dry-run, so you must set the --confirm flag on the oc adm groups sync command to make changes to OpenShift Dedicated group records. 15.2.3. Syncing subgroups from the LDAP server with OpenShift Dedicated You can sync a subset of LDAP groups with OpenShift Dedicated using whitelist files, blacklist files, or both. Note You can use any combination of blacklist files, whitelist files, or whitelist literals. Whitelist and blacklist files must contain one unique group identifier per line, and you can include whitelist literals directly in the command itself. These guidelines apply to groups found on LDAP servers as well as groups already present in OpenShift Dedicated. Prerequisites Create a sync configuration file. You have access to the cluster as a user with the dedicated-admin role. Procedure To sync a subset of LDAP groups with OpenShift Dedicated, use any the following commands: USD oc adm groups sync --whitelist=<whitelist_file> \ --sync-config=config.yaml \ --confirm USD oc adm groups sync --blacklist=<blacklist_file> \ --sync-config=config.yaml \ --confirm USD oc adm groups sync <group_unique_identifier> \ --sync-config=config.yaml \ --confirm USD oc adm groups sync <group_unique_identifier> \ --whitelist=<whitelist_file> \ --blacklist=<blacklist_file> \ --sync-config=config.yaml \ --confirm USD oc adm groups sync --type=openshift \ --whitelist=<whitelist_file> \ --sync-config=config.yaml \ --confirm Note By default, all group synchronization operations are dry-run, so you must set the --confirm flag on the oc adm groups sync command to make changes to OpenShift Dedicated group records. 15.3. Running a group pruning job An administrator can also choose to remove groups from OpenShift Dedicated records if the records on the LDAP server that created them are no longer present. The prune job will accept the same sync configuration file and whitelists or blacklists as used for the sync job. 
For example: USD oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm USD oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm USD oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm 15.4. LDAP group sync examples This section contains examples for the RFC 2307, Active Directory, and augmented Active Directory schemas. Note These examples assume that all users are direct members of their respective groups. Specifically, no groups have other groups as members. See the Nested Membership Sync Example for information on how to sync nested groups. 15.4.1. Syncing groups using the RFC 2307 schema For the RFC 2307 schema, the following examples synchronize a group named admins that has two members: Jane and Jim . The examples explain: How the group and users are added to the LDAP server. What the resulting group record in OpenShift Dedicated will be after synchronization. Note These examples assume that all users are direct members of their respective groups. Specifically, no groups have other groups as members. See the Nested Membership Sync Example for information on how to sync nested groups. In the RFC 2307 schema, both users (Jane and Jim) and groups exist on the LDAP server as first-class entries, and group membership is stored in attributes on the group. The following snippet of ldif defines the users and group for this schema: LDAP entries that use RFC 2307 schema: rfc2307.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 1 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com 2 member: cn=Jim,ou=users,dc=example,dc=com 1 The group is a first-class entry in the LDAP server. 2 Members of a group are listed with an identifying reference as attributes on the group. Prerequisites Create the configuration file. You have access to the cluster as a user with the dedicated-admin role. Procedure Run the sync with the rfc2307_config.yaml file: USD oc adm groups sync --sync-config=rfc2307_config.yaml --confirm OpenShift Dedicated creates the following group record as a result of the above sync operation: OpenShift Dedicated group created by using the rfc2307_config.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected] 1 The last time this OpenShift Dedicated group was synchronized with the LDAP server, in ISO 6801 format. 2 The unique identifier for the group on the LDAP server. 3 The IP address and host of the LDAP server where this group's record is stored. 4 The name of the group as specified by the sync file. 5 The users that are members of the group, named as specified by the sync file. 15.4.2. 
Syncing groups using the RFC2307 schema with user-defined name mappings When syncing groups with user-defined name mappings, the configuration file changes to contain these mappings as shown below. LDAP sync configuration that uses RFC 2307 schema with user-defined name mappings: rfc2307_config_user_defined.yaml kind: LDAPSyncConfig apiVersion: v1 groupUIDNameMapping: "cn=admins,ou=groups,dc=example,dc=com": Administrators 1 rfc2307: groupsQuery: baseDN: "ou=groups,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 groupMembershipAttributes: [ member ] usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 4 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false 1 The user-defined name mapping. 2 The unique identifier attribute that is used for the keys in the user-defined name mapping. You cannot specify groupsQuery filters when using DN for groupUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. 3 The attribute to name OpenShift Dedicated groups with if their unique identifier is not in the user-defined name mapping. 4 The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. Prerequisites Create the configuration file. You have access to the cluster as a user with the dedicated-admin role. Procedure Run the sync with the rfc2307_config_user_defined.yaml file: USD oc adm groups sync --sync-config=rfc2307_config_user_defined.yaml --confirm OpenShift Dedicated creates the following group record as a result of the above sync operation: OpenShift Dedicated group created by using the rfc2307_config_user_defined.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: Administrators 1 users: - [email protected] - [email protected] 1 The name of the group as specified by the user-defined name mapping. 15.4.3. Syncing groups using RFC 2307 with user-defined error tolerances By default, if the groups being synced contain members whose entries are outside of the scope defined in the member query, the group sync fails with an error: This often indicates a misconfigured baseDN in the usersQuery field. However, in cases where the baseDN intentionally does not contain some of the members of the group, setting tolerateMemberOutOfScopeErrors: true allows the group sync to continue. Out of scope members will be ignored. Similarly, when the group sync process fails to locate a member for a group, it fails outright with errors: This often indicates a misconfigured usersQuery field. However, in cases where the group contains member entries that are known to be missing, setting tolerateMemberNotFoundErrors: true allows the group sync to continue. Problematic members will be ignored. Warning Enabling error tolerances for the LDAP group sync causes the sync process to ignore problematic member entries. If the LDAP group sync is not configured correctly, this could result in synced OpenShift Dedicated groups missing members. 
LDAP entries that use RFC 2307 schema with problematic group membership: rfc2307_problematic_users.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com member: cn=INVALID,ou=users,dc=example,dc=com 1 member: cn=Jim,ou=OUTOFSCOPE,dc=example,dc=com 2 1 A member that does not exist on the LDAP server. 2 A member that may exist, but is not under the baseDN in the user query for the sync job. To tolerate the errors in the above example, the following additions to your sync configuration file must be made: LDAP sync configuration that uses RFC 2307 schema tolerating errors: rfc2307_config_tolerating.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 rfc2307: groupsQuery: baseDN: "ou=groups,dc=example,dc=com" scope: sub derefAliases: never groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never userUIDAttribute: dn 1 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: true 2 tolerateMemberOutOfScopeErrors: true 3 1 The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. 2 When true , the sync job tolerates groups for which some members were not found, and members whose LDAP entries are not found are ignored. The default behavior for the sync job is to fail if a member of a group is not found. 3 When true , the sync job tolerates groups for which some members are outside the user scope given in the usersQuery base DN, and members outside the member query scope are ignored. The default behavior for the sync job is to fail if a member of a group is out of scope. Prerequisites Create the configuration file. You have access to the cluster as a user with the dedicated-admin role. Procedure Run the sync with the rfc2307_config_tolerating.yaml file: USD oc adm groups sync --sync-config=rfc2307_config_tolerating.yaml --confirm OpenShift Dedicated creates the following group record as a result of the above sync operation: OpenShift Dedicated group created by using the rfc2307_config.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: admins users: 1 - [email protected] - [email protected] 1 The users that are members of the group, as specified by the sync file. Members for which lookup encountered tolerated errors are absent. 15.4.4. 
Syncing groups using the Active Directory schema In the Active Directory schema, both users (Jane and Jim) exist in the LDAP server as first-class entries, and group membership is stored in attributes on the user. The following snippet of ldif defines the users and group for this schema: LDAP entries that use Active Directory schema: active_directory.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: admins 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: admins 1 The user's group memberships are listed as attributes on the user, and the group does not exist as an entry on the server. The memberOf attribute does not have to be a literal attribute on the user; in some LDAP servers, it is created during search and returned to the client, but not committed to the database. Prerequisites Create the configuration file. You have access to the cluster as a user with the dedicated-admin role. Procedure Run the sync with the active_directory_config.yaml file: USD oc adm groups sync --sync-config=active_directory_config.yaml --confirm OpenShift Dedicated creates the following group record as a result of the above sync operation: OpenShift Dedicated group created by using the active_directory_config.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: admins 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected] 1 The last time this OpenShift Dedicated group was synchronized with the LDAP server, in ISO 6801 format. 2 The unique identifier for the group on the LDAP server. 3 The IP address and host of the LDAP server where this group's record is stored. 4 The name of the group as listed in the LDAP server. 5 The users that are members of the group, named as specified by the sync file. 15.4.5. Syncing groups using the augmented Active Directory schema In the augmented Active Directory schema, both users (Jane and Jim) and groups exist in the LDAP server as first-class entries, and group membership is stored in attributes on the user. 
The following snippet of ldif defines the users and group for this schema: LDAP entries that use augmented Active Directory schema: augmented_active_directory.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 2 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com 1 The user's group memberships are listed as attributes on the user. 2 The group is a first-class entry on the LDAP server. Prerequisites Create the configuration file. You have access to the cluster as a user with the dedicated-admin role. Procedure Run the sync with the augmented_active_directory_config.yaml file: USD oc adm groups sync --sync-config=augmented_active_directory_config.yaml --confirm OpenShift Dedicated creates the following group record as a result of the above sync operation: OpenShift Dedicated group created by using the augmented_active_directory_config.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected] 1 The last time this OpenShift Dedicated group was synchronized with the LDAP server, in ISO 6801 format. 2 The unique identifier for the group on the LDAP server. 3 The IP address and host of the LDAP server where this group's record is stored. 4 The name of the group as specified by the sync file. 5 The users that are members of the group, named as specified by the sync file. 15.4.5.1. LDAP nested membership sync example Groups in OpenShift Dedicated do not nest. The LDAP server must flatten group membership before the data can be consumed. Microsoft's Active Directory Server supports this feature via the LDAP_MATCHING_RULE_IN_CHAIN rule, which has the OID 1.2.840.113556.1.4.1941 . Furthermore, only explicitly whitelisted groups can be synced when using this matching rule. This section has an example for the augmented Active Directory schema, which synchronizes a group named admins that has one user Jane and one group otheradmins as members. The otheradmins group has one user member: Jim . This example explains: How the group and users are added to the LDAP server. What the LDAP sync configuration file looks like. What the resulting group record in OpenShift Dedicated will be after synchronization. In the augmented Active Directory schema, both users ( Jane and Jim ) and groups exist in the LDAP server as first-class entries, and group membership is stored in attributes on the user or the group. 
The following snippet of ldif defines the users and groups for this schema: LDAP entries that use augmented Active Directory schema with nested members: augmented_active_directory_nested.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=otheradmins,ou=groups,dc=example,dc=com 2 dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 3 objectClass: group cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=otheradmins,ou=groups,dc=example,dc=com dn: cn=otheradmins,ou=groups,dc=example,dc=com 4 objectClass: group cn: otheradmins owner: cn=admin,dc=example,dc=com description: Other System Administrators memberOf: cn=admins,ou=groups,dc=example,dc=com 5 6 member: cn=Jim,ou=users,dc=example,dc=com 1 2 5 The user's and group's memberships are listed as attributes on the object. 3 4 The groups are first-class entries on the LDAP server. 6 The otheradmins group is a member of the admins group. When syncing nested groups with Active Directory, you must provide an LDAP query definition for both user entries and group entries, as well as the attributes with which to represent them in the internal OpenShift Dedicated group records. Furthermore, certain changes are required in this configuration: The oc adm groups sync command must explicitly whitelist groups. The user's groupMembershipAttributes must include "memberOf:1.2.840.113556.1.4.1941:" to comply with the LDAP_MATCHING_RULE_IN_CHAIN rule. The groupUIDAttribute must be set to dn . The groupsQuery : Must not set filter . Must set a valid derefAliases . Should not set baseDN as that value is ignored. Should not set scope as that value is ignored. For clarity, the group you create in OpenShift Dedicated should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Dedicated group by their e-mail, and use the name of the group as the common name. The following configuration file creates these relationships: LDAP sync configuration that uses augmented Active Directory schema with nested members: augmented_active_directory_config_nested.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: 1 derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 4 groupMembershipAttributes: [ "memberOf:1.2.840.113556.1.4.1941:" ] 5 1 groupsQuery filters cannot be specified. The groupsQuery base DN and scope values are ignored. groupsQuery must set a valid derefAliases . 2 The attribute that uniquely identifies a group on the LDAP server. It must be set to dn . 3 The attribute to use as the name of the group. 4 The attribute to use as the name of the user in the OpenShift Dedicated group record. 
mail or sAMAccountName are preferred choices in most installations. 5 The attribute on the user that stores the membership information. Note the use of LDAP_MATCHING_RULE_IN_CHAIN . Prerequisites Create the configuration file. You have access to the cluster as a user with the dedicated-admin role. Procedure Run the sync with the augmented_active_directory_config_nested.yaml file: USD oc adm groups sync \ 'cn=admins,ou=groups,dc=example,dc=com' \ --sync-config=augmented_active_directory_config_nested.yaml \ --confirm Note You must explicitly whitelist the cn=admins,ou=groups,dc=example,dc=com group. OpenShift Dedicated creates the following group record as a result of the above sync operation: OpenShift Dedicated group created by using the augmented_active_directory_config_nested.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected] 1 The last time this OpenShift Dedicated group was synchronized with the LDAP server, in ISO 6801 format. 2 The unique identifier for the group on the LDAP server. 3 The IP address and host of the LDAP server where this group's record is stored. 4 The name of the group as specified by the sync file. 5 The users that are members of the group, named as specified by the sync file. Note that members of nested groups are included since the group membership was flattened by the Microsoft Active Directory Server. 15.5. LDAP sync configuration specification The object specification for the configuration file is below. Note that the different schema objects have different fields. For example, v1.ActiveDirectoryConfig has no groupsQuery field whereas v1.RFC2307Config and v1.AugmentedActiveDirectoryConfig both do. Important There is no support for binary attributes. All attribute data coming from the LDAP server must be in the format of a UTF-8 encoded string. For example, never use a binary attribute, such as objectGUID , as an ID attribute. You must use string attributes, such as sAMAccountName or userPrincipalName , instead. 15.5.1. v1.LDAPSyncConfig LDAPSyncConfig holds the necessary configuration options to define an LDAP group sync. Name Description Schema kind String value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#types-kinds string apiVersion Defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#resources string url Host is the scheme, host and port of the LDAP server to connect to: scheme://host:port string bindDN Optional DN to bind to the LDAP server with. string bindPassword Optional password to bind with during the search phase. v1.StringSource insecure If true , indicates the connection should not use TLS. If false , ldaps:// URLs connect using TLS, and ldap:// URLs are upgraded to a TLS connection using StartTLS as specified in https://tools.ietf.org/html/rfc2830 . 
If you set insecure to true , you cannot use ldaps:// URL schemes. boolean ca Optional trusted certificate authority bundle to use when making requests to the server. If empty, the default system roots are used. string groupUIDNameMapping Optional direct mapping of LDAP group UIDs to OpenShift Dedicated group names. object rfc2307 Holds the configuration for extracting data from an LDAP server set up in a fashion similar to RFC2307: first-class group and user entries, with group membership determined by a multi-valued attribute on the group entry listing its members. v1.RFC2307Config activeDirectory Holds the configuration for extracting data from an LDAP server set up in a fashion similar to that used in Active Directory: first-class user entries, with group membership determined by a multi-valued attribute on members listing groups they are a member of. v1.ActiveDirectoryConfig augmentedActiveDirectory Holds the configuration for extracting data from an LDAP server set up in a fashion similar to that used in Active Directory as described above, with one addition: first-class group entries exist and are used to hold metadata but not group membership. v1.AugmentedActiveDirectoryConfig 15.5.2. v1.StringSource StringSource allows specifying a string inline, or externally via environment variable or file. When it contains only a string value, it marshals to a simple JSON string. Name Description Schema value Specifies the cleartext value, or an encrypted value if keyFile is specified. string env Specifies an environment variable containing the cleartext value, or an encrypted value if the keyFile is specified. string file References a file containing the cleartext value, or an encrypted value if a keyFile is specified. string keyFile References a file containing the key to use to decrypt the value. string 15.5.3. v1.LDAPQuery LDAPQuery holds the options necessary to build an LDAP query. Name Description Schema baseDN DN of the branch of the directory where all searches should start from. string scope The optional scope of the search. Can be base : only the base object, one : all objects on the base level, sub : the entire subtree. Defaults to sub if not set. string derefAliases The optional behavior of the search with regards to aliases. Can be never : never dereference aliases, search : only dereference in searching, base : only dereference in finding the base object, always : always dereference. Defaults to always if not set. string timeout Holds the limit of time in seconds that any request to the server can remain outstanding before the wait for a response is given up. If this is 0 , no client-side limit is imposed. integer filter A valid LDAP search filter that retrieves all relevant entries from the LDAP server with the base DN. string pageSize Maximum preferred page size, measured in LDAP entries. A page size of 0 means no paging will be done. integer 15.5.4. v1.RFC2307Config RFC2307Config holds the necessary configuration options to define how an LDAP group sync interacts with an LDAP server using the RFC2307 schema. Name Description Schema groupsQuery Holds the template for an LDAP query that returns group entries. v1.LDAPQuery groupUIDAttribute Defines which attribute on an LDAP group entry will be interpreted as its unique identifier. ( ldapGroupUID ) string groupNameAttributes Defines which attributes on an LDAP group entry will be interpreted as its name to use for an OpenShift Dedicated group. 
string array groupMembershipAttributes Defines which attributes on an LDAP group entry will be interpreted as its members. The values contained in those attributes must be queryable by your UserUIDAttribute . string array usersQuery Holds the template for an LDAP query that returns user entries. v1.LDAPQuery userUIDAttribute Defines which attribute on an LDAP user entry will be interpreted as its unique identifier. It must correspond to values that will be found from the GroupMembershipAttributes . string userNameAttributes Defines which attributes on an LDAP user entry will be used, in order, as its OpenShift Dedicated user name. The first attribute with a non-empty value is used. This should match your PreferredUsername setting for your LDAPPasswordIdentityProvider . The attribute to use as the name of the user in the OpenShift Dedicated group record. mail or sAMAccountName are preferred choices in most installations. string array tolerateMemberNotFoundErrors Determines the behavior of the LDAP sync job when missing user entries are encountered. If true , an LDAP query for users that does not find any will be tolerated and an only and error will be logged. If false , the LDAP sync job will fail if a query for users does not find any. The default value is false . Misconfigured LDAP sync jobs with this flag set to true can cause group membership to be removed, so it is recommended to use this flag with caution. boolean tolerateMemberOutOfScopeErrors Determines the behavior of the LDAP sync job when out-of-scope user entries are encountered. If true , an LDAP query for a user that falls outside of the base DN given for the all user query will be tolerated and only an error will be logged. If false , the LDAP sync job will fail if a user query would search outside of the base DN specified by the all user query. Misconfigured LDAP sync jobs with this flag set to true can result in groups missing users, so it is recommended to use this flag with caution. boolean 15.5.5. v1.ActiveDirectoryConfig ActiveDirectoryConfig holds the necessary configuration options to define how an LDAP group sync interacts with an LDAP server using the Active Directory schema. Name Description Schema usersQuery Holds the template for an LDAP query that returns user entries. v1.LDAPQuery userNameAttributes Defines which attributes on an LDAP user entry will be interpreted as its OpenShift Dedicated user name. The attribute to use as the name of the user in the OpenShift Dedicated group record. mail or sAMAccountName are preferred choices in most installations. string array groupMembershipAttributes Defines which attributes on an LDAP user entry will be interpreted as the groups it is a member of. string array 15.5.6. v1.AugmentedActiveDirectoryConfig AugmentedActiveDirectoryConfig holds the necessary configuration options to define how an LDAP group sync interacts with an LDAP server using the augmented Active Directory schema. Name Description Schema usersQuery Holds the template for an LDAP query that returns user entries. v1.LDAPQuery userNameAttributes Defines which attributes on an LDAP user entry will be interpreted as its OpenShift Dedicated user name. The attribute to use as the name of the user in the OpenShift Dedicated group record. mail or sAMAccountName are preferred choices in most installations. string array groupMembershipAttributes Defines which attributes on an LDAP user entry will be interpreted as the groups it is a member of. 
string array groupsQuery Holds the template for an LDAP query that returns group entries. v1.LDAPQuery groupUIDAttribute Defines which attribute on an LDAP group entry will be interpreted as its unique identifier. ( ldapGroupUID ) string groupNameAttributes Defines which attributes on an LDAP group entry will be interpreted as its name to use for an OpenShift Dedicated group. string array
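As a brief illustration of the explicit whitelisting that nested group sync requires, the following sketch pairs a hypothetical whitelist file (the name whitelist.txt and the group DN are placeholders for your own values) with the sync and prune invocations that consume it:

cn=admins,ou=groups,dc=example,dc=com

oc adm groups sync --whitelist=whitelist.txt --sync-config=augmented_active_directory_config_nested.yaml --confirm
oc adm prune groups --whitelist=whitelist.txt --sync-config=augmented_active_directory_config_nested.yaml --confirm

Each line of the whitelist file holds one group's unique identifier; for the nested augmented Active Directory schema that identifier is the group's distinguished name, because groupUIDAttribute is set to dn.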
[ "url: ldap://10.0.0.0:389 1 bindDN: cn=admin,dc=example,dc=com 2 bindPassword: <password> 3 insecure: false 4 ca: my-ldap-ca-bundle.crt 5", "baseDN: ou=users,dc=example,dc=com 1 scope: sub 2 derefAliases: never 3 timeout: 0 4 filter: (objectClass=person) 5 pageSize: 0 6", "groupUIDNameMapping: \"cn=group1,ou=groups,dc=example,dc=com\": firstgroup \"cn=group2,ou=groups,dc=example,dc=com\": secondgroup \"cn=group3,ou=groups,dc=example,dc=com\": thirdgroup", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 1 insecure: false 2 bindDN: cn=admin,dc=example,dc=com bindPassword: file: \"/etc/secrets/bindPassword\" rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 3 groupNameAttributes: [ cn ] 4 groupMembershipAttributes: [ member ] 5 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 6 userNameAttributes: [ mail ] 7 tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 activeDirectory: usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 1 groupMembershipAttributes: [ memberOf ] 2", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 1 groupNameAttributes: [ cn ] 2 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 3 groupMembershipAttributes: [ memberOf ] 4", "oc adm groups sync --sync-config=config.yaml --confirm", "oc adm groups sync --type=openshift --sync-config=config.yaml --confirm", "oc adm groups sync --whitelist=<whitelist_file> --sync-config=config.yaml --confirm", "oc adm groups sync --blacklist=<blacklist_file> --sync-config=config.yaml --confirm", "oc adm groups sync <group_unique_identifier> --sync-config=config.yaml --confirm", "oc adm groups sync <group_unique_identifier> --whitelist=<whitelist_file> --blacklist=<blacklist_file> --sync-config=config.yaml --confirm", "oc adm groups sync --type=openshift --whitelist=<whitelist_file> --sync-config=config.yaml --confirm", "oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm", "oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm", "oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 1 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com 2 member: cn=Jim,ou=users,dc=example,dc=com", "oc adm groups sync --sync-config=rfc2307_config.yaml --confirm", 
"apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "kind: LDAPSyncConfig apiVersion: v1 groupUIDNameMapping: \"cn=admins,ou=groups,dc=example,dc=com\": Administrators 1 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 4 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false", "oc adm groups sync --sync-config=rfc2307_config_user_defined.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: Administrators 1 users: - [email protected] - [email protected]", "Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with dn=\"<user-dn>\" would search outside of the base dn specified (dn=\"<base-dn>\")\".", "Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with base dn=\"<user-dn>\" refers to a non-existent entry\". Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with base dn=\"<user-dn>\" and filter \"<filter>\" did not return any results\".", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com member: cn=INVALID,ou=users,dc=example,dc=com 1 member: cn=Jim,ou=OUTOFSCOPE,dc=example,dc=com 2", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never userUIDAttribute: dn 1 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: true 2 tolerateMemberOutOfScopeErrors: true 3", "oc adm groups sync --sync-config=rfc2307_config_tolerating.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: 
LDAP_SERVER_IP:389 creationTimestamp: name: admins users: 1 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: admins 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: admins", "oc adm groups sync --sync-config=active_directory_config.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: admins 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 2 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com", "oc adm groups sync --sync-config=augmented_active_directory_config.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=otheradmins,ou=groups,dc=example,dc=com 2 dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 3 objectClass: group cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=otheradmins,ou=groups,dc=example,dc=com dn: cn=otheradmins,ou=groups,dc=example,dc=com 4 objectClass: group cn: otheradmins owner: cn=admin,dc=example,dc=com description: Other System Administrators memberOf: cn=admins,ou=groups,dc=example,dc=com 5 6 member: cn=Jim,ou=users,dc=example,dc=com", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 
augmentedActiveDirectory: groupsQuery: 1 derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 4 groupMembershipAttributes: [ \"memberOf:1.2.840.113556.1.4.1941:\" ] 5", "oc adm groups sync 'cn=admins,ou=groups,dc=example,dc=com' --sync-config=augmented_active_directory_config_nested.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/authentication_and_authorization/ldap-syncing
7.29. cpufrequtils
7.29. cpufrequtils 7.29.1. RHBA-2014:2015 - cpufrequtils bug fix update Updated cpufrequtils packages that fix one bug and add one enhancement are now available for Red Hat Enterprise Linux 6. The cpufrequtils packages contain utilities that can be used to control the cpufreq interface provided by the kernel on hardware that supports CPU frequency scaling. Bug Fix BZ# 728999 Previously, the debug options in the package build scripts were disabled. Consequently, the debuginfo packages were not generated for the cpufrequtils utility. With this update, the debug options in the build scripts have been enabled, and debuginfo packages are now generated for the cpufrequtils binary files. Enhancement BZ# 730304 Prior to this update, the cpufreq-aperf utility was missing man pages. To provide the user with more information on cpufreq-aperf, the man pages have been added. Users of cpufrequtils are advised to upgrade to these updated packages, which fix this bug and add this enhancement.
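For orientation, the sketch below shows how the main cpufrequtils commands are typically used; the CPU number and governor name are illustrative and depend on your hardware and the cpufreq driver in use:

cpufreq-info                  # report the supported limits, current frequency, and governor for each CPU
cpufreq-set -c 0 -g ondemand  # as root, switch CPU 0 to the ondemand governor
cpufreq-aperf                 # sample average frequencies via the APERF/MPERF registers (see the newly added man page)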
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-cpufrequtils
Chapter 4. Tracing
Chapter 4. Tracing 4.1. Tracing requests Distributed tracing records the path of a request through the various services that make up an application. It is used to tie information about different units of work together, to understand a whole chain of events in a distributed transaction. The units of work might be executed in different processes or hosts. 4.1.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use distributed tracing for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With distributed tracing you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis Red Hat OpenShift distributed tracing consists of two main components: Red Hat OpenShift distributed tracing platform - This component is based on the open source Jaeger project . Red Hat OpenShift distributed tracing data collection - This component is based on the open source OpenTelemetry project . Both of these components are based on the vendor-neutral OpenTracing APIs and instrumentation. 4.1.2. Additional resources for OpenShift Container Platform Red Hat OpenShift distributed tracing architecture Installing distributed tracing 4.2. Using Red Hat OpenShift distributed tracing You can use Red Hat OpenShift distributed tracing with OpenShift Serverless to monitor and troubleshoot serverless applications. 4.2.1. Using Red Hat OpenShift distributed tracing to enable distributed tracing Red Hat OpenShift distributed tracing is made up of several components that work together to collect, store, and display tracing data. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have not yet installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing. These must be installed after the Red Hat OpenShift distributed tracing installation. You have installed Red Hat OpenShift distributed tracing by following the OpenShift Container Platform "Installing distributed tracing" documentation. You have installed the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. 
Procedure Create an OpenTelemetryCollector custom resource (CR): Example OpenTelemetryCollector CR apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: <namespace> spec: mode: deployment config: | receivers: zipkin: processors: exporters: jaeger: endpoint: jaeger-all-in-one-inmemory-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" logging: service: pipelines: traces: receivers: [zipkin] processors: [] exporters: [jaeger, logging] Verify that you have two pods running in the namespace where Red Hat OpenShift distributed tracing is installed: USD oc get pods -n <namespace> Example output NAME READY STATUS RESTARTS AGE cluster-collector-collector-85c766b5c-b5g99 1/1 Running 0 5m56s jaeger-all-in-one-inmemory-ccbc9df4b-ndkl5 2/2 Running 0 15m Verify that the following headless services have been created: USD oc get svc -n <namespace> | grep headless Example output cluster-collector-collector-headless ClusterIP None <none> 9411/TCP 7m28s jaeger-all-in-one-inmemory-collector-headless ClusterIP None <none> 9411/TCP,14250/TCP,14267/TCP,14268/TCP 16m These services are used to configure Jaeger, Knative Serving, and Knative Eventing. The name of the Jaeger service may vary. Install the OpenShift Serverless Operator by following the "Installing the OpenShift Serverless Operator" documentation. Install Knative Serving by creating the following KnativeServing CR: Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: tracing: backend: "zipkin" zipkin-endpoint: "http://cluster-collector-collector-headless.tracing-system.svc:9411/api/v2/spans" debug: "false" sample-rate: "0.1" 1 1 The sample-rate defines sampling probability. Using sample-rate: "0.1" means that 1 in 10 traces are sampled. Install Knative Eventing by creating the following KnativeEventing CR: Example KnativeEventing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: tracing: backend: "zipkin" zipkin-endpoint: "http://cluster-collector-collector-headless.tracing-system.svc:9411/api/v2/spans" debug: "false" sample-rate: "0.1" 1 1 The sample-rate defines sampling probability. Using sample-rate: "0.1" means that 1 in 10 traces are sampled. Create a Knative service: Example service apiVersion: serving.knative.dev/v1 kind: Service metadata: name: helloworld-go spec: template: metadata: labels: app: helloworld-go annotations: autoscaling.knative.dev/minScale: "1" autoscaling.knative.dev/target: "1" spec: containers: - image: quay.io/openshift-knative/helloworld:v1.2 imagePullPolicy: Always resources: requests: cpu: "200m" env: - name: TARGET value: "Go Sample v1" Make some requests to the service: Example HTTPS request USD curl https://helloworld-go.example.com Get the URL for the Jaeger web console: Example command USD oc get route jaeger-all-in-one-inmemory -o jsonpath='{.spec.host}' -n <namespace> You can now examine traces by using the Jaeger console. 4.3. Using Jaeger distributed tracing If you do not want to install all of the components of Red Hat OpenShift distributed tracing, you can still use distributed tracing on OpenShift Container Platform with OpenShift Serverless. 4.3.1. 
Configuring Jaeger to enable distributed tracing To enable distributed tracing using Jaeger, you must install and configure Jaeger as a standalone integration. Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing. You have installed the Red Hat OpenShift distributed tracing platform Operator. You have installed the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads. Procedure Create and apply a Jaeger custom resource (CR) that contains the following: Jaeger CR apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger namespace: default Enable tracing for Knative Serving, by editing the KnativeServing CR and adding a YAML configuration for tracing: Tracing YAML example for Serving apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: tracing: sample-rate: "0.1" 1 backend: zipkin 2 zipkin-endpoint: "http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans" 3 debug: "false" 4 1 The sample-rate defines sampling probability. Using sample-rate: "0.1" means that 1 in 10 traces are sampled. 2 backend must be set to zipkin . 3 The zipkin-endpoint must point to your jaeger-collector service endpoint. To get this endpoint, substitute the namespace where the Jaeger CR is applied. 4 Debugging should be set to false . Enabling debug mode by setting debug: "true" allows all spans to be sent to the server, bypassing sampling. Enable tracing for Knative Eventing by editing the KnativeEventing CR: Tracing YAML example for Eventing apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: tracing: sample-rate: "0.1" 1 backend: zipkin 2 zipkin-endpoint: "http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans" 3 debug: "false" 4 1 The sample-rate defines sampling probability. Using sample-rate: "0.1" means that 1 in 10 traces are sampled. 2 Set backend to zipkin . 3 Point the zipkin-endpoint to your jaeger-collector service endpoint. To get this endpoint, substitute the namespace where the Jaeger CR is applied. 4 Debugging should be set to false . Enabling debug mode by setting debug: "true" allows all spans to be sent to the server, bypassing sampling. Verification You can access the Jaeger web console to see tracing data, by using the jaeger route. Get the jaeger route's hostname by entering the following command: USD oc get route jaeger -n default Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD jaeger jaeger-default.apps.example.com jaeger-query <all> reencrypt None Open the endpoint address in your browser to view the console.
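As a rough way to confirm that spans are reaching Jaeger, you can send a few requests to a Knative service and then query the Jaeger HTTP API through the route obtained above; the service name helloworld-go, the route hosts, and the limit value are illustrative placeholders:

curl https://helloworld-go.example.com
curl "https://jaeger-default.apps.example.com/api/traces?service=helloworld-go&limit=5"

The second request returns JSON trace data once sampling has captured at least one request; because the sample-rate in the examples above is 0.1, you may need to send several requests before a trace appears.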
[ "apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: <namespace> spec: mode: deployment config: | receivers: zipkin: processors: exporters: jaeger: endpoint: jaeger-all-in-one-inmemory-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" logging: service: pipelines: traces: receivers: [zipkin] processors: [] exporters: [jaeger, logging]", "oc get pods -n <namespace>", "NAME READY STATUS RESTARTS AGE cluster-collector-collector-85c766b5c-b5g99 1/1 Running 0 5m56s jaeger-all-in-one-inmemory-ccbc9df4b-ndkl5 2/2 Running 0 15m", "oc get svc -n <namespace> | grep headless", "cluster-collector-collector-headless ClusterIP None <none> 9411/TCP 7m28s jaeger-all-in-one-inmemory-collector-headless ClusterIP None <none> 9411/TCP,14250/TCP,14267/TCP,14268/TCP 16m", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: tracing: backend: \"zipkin\" zipkin-endpoint: \"http://cluster-collector-collector-headless.tracing-system.svc:9411/api/v2/spans\" debug: \"false\" sample-rate: \"0.1\" 1", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: tracing: backend: \"zipkin\" zipkin-endpoint: \"http://cluster-collector-collector-headless.tracing-system.svc:9411/api/v2/spans\" debug: \"false\" sample-rate: \"0.1\" 1", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: helloworld-go spec: template: metadata: labels: app: helloworld-go annotations: autoscaling.knative.dev/minScale: \"1\" autoscaling.knative.dev/target: \"1\" spec: containers: - image: quay.io/openshift-knative/helloworld:v1.2 imagePullPolicy: Always resources: requests: cpu: \"200m\" env: - name: TARGET value: \"Go Sample v1\"", "curl https://helloworld-go.example.com", "oc get route jaeger-all-in-one-inmemory -o jsonpath='{.spec.host}' -n <namespace>", "apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger namespace: default", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: tracing: sample-rate: \"0.1\" 1 backend: zipkin 2 zipkin-endpoint: \"http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans\" 3 debug: \"false\" 4", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: tracing: sample-rate: \"0.1\" 1 backend: zipkin 2 zipkin-endpoint: \"http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans\" 3 debug: \"false\" 4", "oc get route jaeger -n default", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD jaeger jaeger-default.apps.example.com jaeger-query <all> reencrypt None" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/observability/tracing
Chapter 35. KafkaClusterTemplate schema reference
Chapter 35. KafkaClusterTemplate schema reference Used in: KafkaClusterSpec Property Description statefulset The statefulset property has been deprecated. Support for StatefulSets was removed in AMQ Streams 2.5. This property is ignored. Template for Kafka StatefulSet . StatefulSetTemplate pod Template for Kafka Pods . PodTemplate bootstrapService Template for Kafka bootstrap Service . InternalServiceTemplate brokersService Template for Kafka broker Service . InternalServiceTemplate externalBootstrapService Template for Kafka external bootstrap Service . ResourceTemplate perPodService Template for Kafka per-pod Services used for access from outside of OpenShift. ResourceTemplate externalBootstrapRoute Template for Kafka external bootstrap Route . ResourceTemplate perPodRoute Template for Kafka per-pod Routes used for access from outside of OpenShift. ResourceTemplate externalBootstrapIngress Template for Kafka external bootstrap Ingress . ResourceTemplate perPodIngress Template for Kafka per-pod Ingress used for access from outside of OpenShift. ResourceTemplate persistentVolumeClaim Template for all Kafka PersistentVolumeClaims . ResourceTemplate podDisruptionBudget Template for Kafka PodDisruptionBudget . PodDisruptionBudgetTemplate kafkaContainer Template for the Kafka broker container. ContainerTemplate initContainer Template for the Kafka init container. ContainerTemplate clusterCaCert Template for Secret with Kafka Cluster certificate public key. ResourceTemplate serviceAccount Template for the Kafka service account. ResourceTemplate jmxSecret Template for Secret of the Kafka Cluster JMX authentication. ResourceTemplate clusterRoleBinding Template for the Kafka ClusterRoleBinding. ResourceTemplate podSet Template for Kafka StrimziPodSet resource. ResourceTemplate
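For orientation, the sketch below shows where these template properties sit in a Kafka custom resource; the cluster name, label, and maxUnavailable value are illustrative, and any other property from the table can be used at the same level:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...required broker configuration such as listeners and storage...
    template:
      pod:
        metadata:
          labels:
            app.kubernetes.io/part-of: my-kafka
      podDisruptionBudget:
        maxUnavailable: 1
  # ...remaining cluster configuration...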
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkaclustertemplate-reference
18.3. Remote Management over TLS and SSL
18.3. Remote Management over TLS and SSL You can manage virtual machines using the TLS and SSL protocols. TLS and SSL provide greater scalability but are more complicated than SSH (refer to Section 18.2, "Remote Management with SSH" ). TLS and SSL are the same technologies used by web browsers for secure connections. The libvirt management connection opens a TCP port for incoming connections, which is securely encrypted and authenticated based on X.509 certificates. The following procedures provide instructions on creating and deploying authentication certificates for TLS and SSL management. Procedure 18.1. Creating a certificate authority (CA) key for TLS management Before you begin, confirm that gnutls-utils is installed. If not, install it: Generate a private key using the following command: After the key is generated, create a signature file so the key can be self-signed. To do this, create a file with signature details and name it ca.info . This file should contain the following: Generate the self-signed key with the following command: After the file is generated, the ca.info file can be deleted using the rm command. The file that results from the generation process is named cacert.pem . This file is the public key (certificate). The loaded file cakey.pem is the private key. For security purposes, this file should be kept private, and not reside in a shared space. Install the cacert.pem CA certificate file on all clients and servers as /etc/pki/CA/cacert.pem to let them know that the certificate issued by your CA can be trusted. To view the contents of this file, run: This is all that is required to set up your CA. Keep the CA's private key safe, as you will need it in order to issue certificates for your clients and servers. Procedure 18.2. Issuing a server certificate This procedure demonstrates how to issue a certificate with the X.509 Common Name (CN) field set to the host name of the server. The CN must match the host name that clients will use to connect to the server. In this example, clients will be connecting to the server using the URI: qemu:// mycommonname /system , so the CN field should be identical, for this example "mycommonname". Create a private key for the server. Generate a signature for the CA's private key by first creating a template file called server.info . Make sure that the CN is set to be the same as the server's host name: Create the certificate: This results in two files being generated: serverkey.pem - The server's private key servercert.pem - The server's public key Make sure to keep the location of the private key secret. To view the contents of the file, use the following command: When opening this file, the CN= parameter should be the same as the CN that you set earlier. For example, mycommonname . Install the two files in the following locations: serverkey.pem - the server's private key. Place this file in the following location: /etc/pki/libvirt/private/serverkey.pem servercert.pem - the server's certificate. Install it in the following location on the server: /etc/pki/libvirt/servercert.pem Procedure 18.3. Issuing a client certificate For every client (that is to say any program linked with libvirt, such as virt-manager ), you need to issue a certificate with the X.509 Distinguished Name (DN) field set to a suitable name. This needs to be decided on a corporate level.
For example purposes, the following information will be used: Create a private key: Generate a signature for the CA's private key by first creating a template file called client.info . The file should contain the following (fields should be customized to reflect your region/location): Sign the certificate with the following command: Install the certificates on the client machine:
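Once the CA, server, and client certificates are installed, a quick way to verify the setup is to open a TLS-secured connection from the client; the host name below is the example CN used in this section, and this check assumes the libvirtd daemon on the server has been configured to listen for TLS connections:

virsh -c qemu://mycommonname/system list --all

If the certificates are trusted on both sides, the command prints the list of guests without any certificate or authentication errors.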
[ "yum install gnutls-utils", "certtool --generate-privkey > cakey.pem", "cn = Name of your organization ca cert_signing_key", "certtool --generate-self-signed --load-privkey cakey.pem --template ca.info --outfile cacert.pem", "certtool -i --infile cacert.pem", "certtool --generate-privkey > serverkey.pem", "organization = Name of your organization cn = mycommonname tls_www_server encryption_key signing_key", "certtool --generate-certificate --load-privkey serverkey.pem --load-ca-certificate cacert.pem --load-ca-privkey cakey.pem \\ --template server.info --outfile servercert.pem", "certtool -i --infile servercert.pem", "C=USA,ST=North Carolina,L=Raleigh,O=Red Hat,CN=name_of_client", "certtool --generate-privkey > clientkey.pem", "country = USA state = North Carolina locality = Raleigh organization = Red Hat cn = client1 tls_www_client encryption_key signing_key", "certtool --generate-certificate --load-privkey clientkey.pem --load-ca-certificate cacert.pem \\ --load-ca-privkey cakey.pem --template client.info --outfile clientcert.pem", "cp clientkey.pem /etc/pki/libvirt/private/clientkey.pem cp clientcert.pem /etc/pki/libvirt/clientcert.pem" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Remote_management_of_guests-Remote_management_over_TLS_and_SSL
Chapter 1. Red Hat High Availability Add-On Configuration and Management Overview
Chapter 1. Red Hat High Availability Add-On Configuration and Management Overview Red Hat High Availability Add-On allows you to connect a group of computers (called nodes or members ) to work together as a cluster. You can use Red Hat High Availability Add-On to suit your clustering needs (for example, setting up a cluster for sharing files on a GFS2 file system or setting up service failover). Note For information on best practices for deploying and upgrading Red Hat Enterprise Linux clusters using the High Availability Add-On and Red Hat Global File System 2 (GFS2) see the article "Red Hat Enterprise Linux Cluster, High Availability, and GFS Deployment Best Practices" on Red Hat Customer Portal at https://access.redhat.com/site/articles/40051 . This chapter provides a summary of documentation features and updates that have been added to the Red Hat High Availability Add-On since the initial release of Red Hat Enterprise Linux 6, followed by an overview of configuring and managing the Red Hat High Availability Add-On. 1.1. New and Changed Features This section lists new and changed features of the Red Hat High Availability Add-On documentation that have been added since the initial release of Red Hat Enterprise Linux 6. 1.1.1. New and Changed Features for Red Hat Enterprise Linux 6.1 Red Hat Enterprise Linux 6.1 includes the following documentation and feature updates and changes. As of the Red Hat Enterprise Linux 6.1 release and later, the Red Hat High Availability Add-On provides support for SNMP traps. For information on configuring SNMP traps with the Red Hat High Availability Add-On, see Chapter 11, SNMP Configuration with the Red Hat High Availability Add-On . As of the Red Hat Enterprise Linux 6.1 release and later, the Red Hat High Availability Add-On provides support for the ccs cluster configuration command. For information on the ccs command, see Chapter 6, Configuring Red Hat High Availability Add-On With the ccs Command and Chapter 7, Managing Red Hat High Availability Add-On With ccs . The documentation for configuring and managing Red Hat High Availability Add-On software using Conga has been updated to reflect updated Conga screens and feature support. For the Red Hat Enterprise Linux 6.1 release and later, using ricci requires a password the first time you propagate updated cluster configuration from any particular node. For information on ricci see Section 3.13, "Considerations for ricci " . You can now specify a Restart-Disable failure policy for a service, indicating that the system should attempt to restart the service in place if it fails, but if restarting the service fails the service will be disabled instead of being moved to another host in the cluster. This feature is documented in Section 4.10, "Adding a Cluster Service to the Cluster" and Appendix B, HA Resource Parameters . You can now configure an independent subtree as non-critical, indicating that if the resource fails then only that resource is disabled. For information on this feature see Section 4.10, "Adding a Cluster Service to the Cluster" and Section C.4, "Failure Recovery and Independent Subtrees" . This document now includes the new chapter Chapter 10, Diagnosing and Correcting Problems in a Cluster . In addition, small corrections and clarifications have been made throughout the document. 1.1.2. New and Changed Features for Red Hat Enterprise Linux 6.2 Red Hat Enterprise Linux 6.2 includes the following documentation and feature updates and changes. 
Red Hat Enterprise Linux now provides support for running Clustered Samba in an active/active configuration. For information on clustered Samba configuration, see Chapter 12, Clustered Samba Configuration . Any user able to authenticate on the system that is hosting luci can log in to luci . As of Red Hat Enterprise Linux 6.2, only the root user on the system that is running luci can access any of the luci components until an administrator (the root user or a user with administrator permission) sets permissions for that user. For information on setting luci permissions for users, see Section 4.3, "Controlling Access to luci" . The nodes in a cluster can communicate with each other using the UDP unicast transport mechanism. For information on configuring UDP unicast, see Section 3.12, "UDP Unicast Traffic" . You can now configure some aspects of luci 's behavior by means of the /etc/sysconfig/luci file. For example, you can specifically configure the only IP address luci is being served at. For information on configuring the only IP address luci is being served at, see Table 3.2, "Enabled IP Port on a Computer That Runs luci " . For information on the /etc/sysconfig/luci file in general, see Section 3.4, "Configuring luci with /etc/sysconfig/luci " . The ccs command now includes the --lsfenceopts option, which prints a list of available fence devices, and the --lsfenceopts fence_type option, which prints each available fence type. For information on these options, see Section 6.6, "Listing Fence Devices and Fence Device Options" . The ccs command now includes the --lsserviceopts option, which prints a list of cluster services currently available for your cluster, and the --lsserviceopts service_type option, which prints a list of the options you can specify for a particular service type. For information on these options, see Section 6.11, "Listing Available Cluster Services and Resources" . The Red Hat Enterprise Linux 6.2 release provides support for the VMware (SOAP Interface) fence agent. For information on fence device parameters, see Appendix A, Fence Device Parameters . The Red Hat Enterprise Linux 6.2 release provides support for the RHEV-M REST API fence agent, against RHEV 3.0 and later. For information on fence device parameters, see Appendix A, Fence Device Parameters . As of the Red Hat Enterprise Linux 6.2 release, when you configure a virtual machine in a cluster with the ccs command you can use the --addvm option (rather than the addservice option). This ensures that the vm resource is defined directly under the rm configuration node in the cluster configuration file. For information on configuring virtual machine resources with the ccs command, see Section 6.12, "Virtual Machine Resources" . This document includes a new appendix, Appendix D, Modifying and Enforcing Cluster Service Resource Actions . This appendix describes how rgmanager monitors the status of cluster resources, and how to modify the status check interval. The appendix also describes the __enforce_timeouts service parameter, which indicates that a timeout for an operation should cause a service to fail. This document includes a new section, Section 3.3.3, "Configuring the iptables Firewall to Allow Cluster Components" . This section shows the filtering you can use to allow multicast traffic through the iptables firewall for the various cluster components. In addition, small corrections and clarifications have been made throughout the document. 1.1.3. 
New and Changed Features for Red Hat Enterprise Linux 6.3 Red Hat Enterprise Linux 6.3 includes the following documentation and feature updates and changes. The Red Hat Enterprise Linux 6.3 release provides support for the condor resource agent. For information on HA resource parameters, see Appendix B, HA Resource Parameters . This document includes a new appendix, Appendix F, High Availability LVM (HA-LVM) . Information throughout this document clarifies which configuration changes require a cluster restart. For a summary of these changes, see Section 10.1, "Configuration Changes Do Not Take Effect" . The documentation now notes that there is an idle timeout for luci that logs you out after 15 minutes of inactivity. For information on starting luci , see Section 4.2, "Starting luci " . The fence_ipmilan fence device supports a privilege level parameter. For information on fence device parameters, see Appendix A, Fence Device Parameters . This document includes a new section, Section 3.14, "Configuring Virtual Machines in a Clustered Environment" . This document includes a new section, Section 5.6, "Backing Up and Restoring the luci Configuration" . This document includes a new section, Section 10.4, "Cluster Daemon crashes" . This document provides information on setting debug options in Section 6.14.4, "Logging" , Section 8.7, "Configuring Debug Options" , and Section 10.13, "Debug Logging for Distributed Lock Manager (DLM) Needs to be Enabled" . As of Red Hat Enterprise Linux 6.3, the root user or a user who has been granted luci administrator permissions can also use the luci interface to add users to the system, as described in Section 4.3, "Controlling Access to luci" . As of the Red Hat Enterprise Linux 6.3 release, the ccs command validates the configuration according to the cluster schema at /usr/share/cluster/cluster.rng on the node that you specify with the -h option. Previously the ccs command always used the cluster schema that was packaged with the ccs command itself, /usr/share/ccs/cluster.rng on the local system. For information on configuration validation, see Section 6.1.6, "Configuration Validation" . The tables describing the fence device parameters in Appendix A, Fence Device Parameters and the tables describing the HA resource parameters in Appendix B, HA Resource Parameters now include the names of those parameters as they appear in the cluster.conf file. In addition, small corrections and clarifications have been made throughout the document. 1.1.4. New and Changed Features for Red Hat Enterprise Linux 6.4 Red Hat Enterprise Linux 6.4 includes the following documentation and feature updates and changes. The Red Hat Enterprise Linux 6.4 release provides support for the Eaton Network Power Controller (SNMP Interface) fence agent, the HP BladeSystem fence agent, and the IBM iPDU fence agent. For information on fence device parameters, see Appendix A, Fence Device Parameters . Appendix B, HA Resource Parameters now provides a description of the NFS Server resource agent. As of Red Hat Enterprise Linux 6.4, the root user or a user who has been granted luci administrator permissions can also use the luci interface to delete users from the system. This is documented in Section 4.3, "Controlling Access to luci" . Appendix B, HA Resource Parameters provides a description of the new nfsrestart parameter for the Filesystem and GFS2 HA resources. This document includes a new section, Section 6.1.5, "Commands that Overwrite Settings" . 
Section 3.3, "Enabling IP Ports" now includes information on filtering the iptables firewall for igmp . The IPMI LAN fence agent now supports a parameter to configure the privilege level on the IPMI device, as documented in Appendix A, Fence Device Parameters . In addition to Ethernet bonding mode 1, bonding modes 0 and 2 are now supported for inter-node communication in a cluster. Troubleshooting advice in this document that suggests you ensure that you are using only supported bonding modes now notes this. VLAN-tagged network devices are now supported for cluster heartbeat communication. Troubleshooting advice indicating that this is not supported has been removed from this document. The Red Hat High Availability Add-On now supports the configuration of redundant ring protocol. For general information on using this feature and configuring the cluster.conf configuration file, see Section 8.6, "Configuring Redundant Ring Protocol" . For information on configuring redundant ring protocol with luci , see Section 4.5.4, "Configuring Redundant Ring Protocol" . For information on configuring redundant ring protocol with the ccs command, see Section 6.14.5, "Configuring Redundant Ring Protocol" . In addition, small corrections and clarifications have been made throughout the document. 1.1.5. New and Changed Features for Red Hat Enterprise Linux 6.5 Red Hat Enterprise Linux 6.5 includes the following documentation and feature updates and changes. This document includes a new section, Section 8.8, "Configuring nfsexport and nfsserver Resources" . The tables of fence device parameters in Appendix A, Fence Device Parameters have been updated to reflect small updates to the luci interface. In addition, many small corrections and clarifications have been made throughout the document. 1.1.6. New and Changed Features for Red Hat Enterprise Linux 6.6 Red Hat Enterprise Linux 6.6 includes the following documentation and feature updates and changes. The tables of fence device parameters in Appendix A, Fence Device Parameters have been updated to reflect small updates to the luci interface. The tables of resource agent parameters in Appendix B, HA Resource Parameters have been updated to reflect small updates to the luci interface. Table B.3, "Bind Mount ( bind-mount Resource) (Red Hat Enterprise Linux 6.6 and later)" documents the parameters for the Bind Mount resource agent. As of Red Hat Enterprise Linux 6.6 release, you can use the --noenable option of the ccs --startall command to prevent cluster services from being enabled, as documented in Section 7.2, "Starting and Stopping a Cluster" Table A.26, "Fence kdump" documents the parameters for the kdump fence agent. As of the Red Hat Enterprise Linux 6.6 release, you can sort the columns in a resource list on the luci display by clicking on the header for the sort category, as described in Section 4.9, "Configuring Global Cluster Resources" . In addition, many small corrections and clarifications have been made throughout the document. 1.1.7. New and Changed Features for Red Hat Enterprise Linux 6.7 Red Hat Enterprise Linux 6.7 includes the following documentation and feature updates and changes. This document now includes a new chapter, Chapter 2, Getting Started: Overview , which provides a summary procedure for setting up a basic Red Hat High Availability cluster. Appendix A, Fence Device Parameters now includes a table listing the parameters for the Emerson Network Power Switch (SNMP interface). 
Appendix A, Fence Device Parameters now includes a table listing the parameters for the fence_xvm fence agent, titled "Fence virt (Multicast Mode)". The table listing the parameters for the fence_virt fence agent is now titled "Fence virt (Serial/VMChannel Mode)". Both tables have been updated to reflect the luci display. The troubleshooting procedure described in Section 10.10, "Quorum Disk Does Not Appear as Cluster Member" has been updated. In addition, many small corrections and clarifications have been made throughout the document. 1.1.8. New and Changed Features for Red Hat Enterprise Linux 6.8 Red Hat Enterprise Linux 6.8 includes the following documentation and feature updates and changes. Appendix A, Fence Device Parameters now includes a table listing the parameters for the fence_mpath fence agent, titled "Multipath Persistent Reservation Fencing". The table listing the parameters for the fence_ipmilan , fence_idrac , fence_imm , fence_ilo3 , and fence_ilo4 fence agents has been updated to reflect the luci display. Section F.3, "Creating New Logical Volumes for an Existing Cluster" now provides a procedure for creating new logical volumes in an existing cluster when using HA-LVM. 1.1.9. New and Changed Features for Red Hat Enterprise Linux 6.9 Red Hat Enterprise Linux 6.9 includes the following documentation and feature updates and changes. As of Red Hat Enterprise Linux 6.9, after you have entered a node name on the luci Create New Cluster dialog box or the Add Existing Cluster screen, the fingerprint of the certificate of the ricci host is displayed for confirmation, as described in Section 4.4, "Creating a Cluster" and Section 5.1, "Adding an Existing Cluster to the luci Interface" . Similarly, the fingerprint of the certificate of the ricci host is displayed for confirmation when you add a new node to a running cluster, as described in Section 5.3.3, "Adding a Member to a Running Cluster" . The luci Service Groups display for a selected service group now includes a table showing the actions that have been configured for each resource in that service group. For information on resource actions, see Appendix D, Modifying and Enforcing Cluster Service Resource Actions .
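As a short illustration of the ccs listing options described in these release notes, the following sketch queries a cluster node for its available fence device and service options; the host name and fence agent name are placeholders:

ccs -h node1.example.com --lsfenceopts
ccs -h node1.example.com --lsfenceopts fence_ipmilan
ccs -h node1.example.com --lsserviceopts

The first command prints the available fence devices, the second prints the options accepted by a particular fence type, and the third lists the cluster services and resources available to the cluster.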
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-overview-CA
Chapter 139. Vert.x HTTP Client
Chapter 139. Vert.x HTTP Client Since Camel 3.5 Only producer is supported The Vert.x HTTP component provides the capability to produce messages to HTTP endpoints via the Vert.x Web Client . 139.1. Dependencies When using vertx-http with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-vertx-http-starter</artifactId> </dependency> 139.2. URI format 139.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 139.3.1. Configuring component options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 139.3.2. Configuring endpoint options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 139.4. Component Options The Vert.x HTTP Client component supports 19 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean responsePayloadAsByteArray (producer) Whether the response body should be byte or as io.vertx.core.buffer.Buffer. true boolean allowJavaSerializedObject (advanced) Whether to allow java serialization when a request has the Content-Type application/x-java-serialized-object This is disabled by default. If you enable this, be aware that Java will deserialize the incoming data from the request. This can be a potential security risk. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean vertx (advanced) To use an existing vertx instead of creating a new instance. Vertx vertxHttpBinding (advanced) A custom VertxHttpBinding which can control how to bind between Vert.x and Camel. VertxHttpBinding vertxOptions (advanced) To provide a custom set of vertx options for configuring vertx. VertxOptions webClientOptions (advanced) To provide a custom set of options for configuring vertx web client. WebClientOptions headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy proxyHost (proxy) The proxy server host address. String proxyPassword (proxy) The proxy server password if authentication is required. String proxyPort (proxy) The proxy server port. Integer proxyType (proxy) The proxy server type. Enum values: HTTP SOCKS4 SOCKS5 ProxyType proxyUsername (proxy) The proxy server username if authentication is required. String basicAuthPassword (security) The password to use for basic authentication. String basicAuthUsername (security) The user name to use for basic authentication. String bearerToken (security) The bearer token to use for bearer token authentication. String sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean 139.5. Endpoint Options The Vert.x HTTP Client endpoint is configured using URI syntax: with the following path and query parameters: 139.5.1. Path Parameters (1 parameters) Name Description Default Type httpUri (producer) Required The HTTP URI to connect to. URI 139.5.2. Query Parameters (23 parameters) Name Description Default Type connectTimeout (producer) The amount of time in milliseconds until a connection is established. A timeout value of zero is interpreted as an infinite timeout. 60000 int cookieStore (producer) A custom CookieStore to use when session management is enabled. If this option is not set then an in-memory CookieStore is used. InMemoryCookieStore CookieStore headerFilterStrategy (producer) A custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. VertxHttpHeaderFilterStrategy HeaderFilterStrategy httpMethod (producer) The HTTP method to use. The HttpMethod header cannot override this option if set. HttpMethod okStatusCodeRange (producer) The status codes which are considered a success response. The values are inclusive. Multiple ranges can be defined, separated by comma, e.g. 200-204,209,301-304. Each range must be a single number or from-to with the dash included. 200-299 String responsePayloadAsByteArray (producer) Whether the response body should be byte or as io.vertx.core.buffer.Buffer. true boolean sessionManagement (producer) Enables session management via WebClientSession. By default the client is configured to use an in-memory CookieStore. The cookieStore option can be used to override this. false boolean throwExceptionOnFailure (producer) Disable throwing HttpOperationFailedException in case of failed responses from the remote server. 
true boolean timeout (producer) The amount of time in milliseconds after which a TimeoutException fails the request if the request does not return any data within the timeout period. Setting zero or a negative value disables the timeout. -1 long transferException (producer) If enabled, and an Exchange failed processing on the consumer side, and the caused Exception was sent back serialized in the response as an application/x-java-serialized-object content type, then on the producer side the exception is deserialized and thrown as is, instead of an HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Camel will deserialize the incoming data from the request to a Java object, which can be a potential security risk. false boolean useCompression (producer) Set whether compression is enabled to handle compressed (for example, gzipped) responses. false boolean vertxHttpBinding (producer) A custom VertxHttpBinding which can control how to bind between Vert.x and Camel. VertxHttpBinding webClientOptions (producer) Sets customized options for configuring the Vert.x WebClient. WebClientOptions lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean proxyHost (proxy) The proxy server host address. String proxyPassword (proxy) The proxy server password if authentication is required. String proxyPort (proxy) The proxy server port. Integer proxyType (proxy) The proxy server type. Enum values: HTTP SOCKS4 SOCKS5 ProxyType proxyUsername (proxy) The proxy server username if authentication is required. String basicAuthPassword (security) The password to use for basic authentication. String basicAuthUsername (security) The user name to use for basic authentication. String bearerToken (security) The bearer token to use for bearer token authentication. String sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters 139.6. Message Headers The Vert.x HTTP Client component supports 8 message headers, which are listed below: Name Description Default Type CamelHttpMethod (producer) Constant: HTTP_METHOD The HTTP method. HttpMethod CamelHttpResponseCode (producer) Constant: HTTP_RESPONSE_CODE The HTTP response code from the external server. Integer CamelHttpResponseText (producer) Constant: HTTP_RESPONSE_TEXT The HTTP response text from the external server. String Content-Type (producer) Constant: CONTENT_TYPE The HTTP content type. Is set on both the IN and OUT message to provide a content type, such as text/html. String CamelHttpQuery (producer) Constant: HTTP_QUERY URI parameters. Will override existing URI parameters set directly on the endpoint. String CamelHttpUri (producer) Constant: HTTP_URI URI to call. Will override the existing URI set directly on the endpoint. This URI is the URI of the HTTP server to call. It is not the same as the Camel endpoint URI, where you can configure endpoint options such as security etc.
This header does not support that, it is only the URI of the HTTP server. String CamelHttpPath (producer) Constant: HTTP_PATH Request URI's path; the header will be used to build the request URI with the HTTP_URI. String Content-Encoding (producer) Constant: CONTENT_ENCODING The HTTP content encoding. Is set to provide a content encoding, such as gzip. String 139.7. Usage The following example shows how to send a request to an HTTP endpoint. You can override the URI configured on the vertx-http producer via the headers Exchange.HTTP_URI and Exchange.HTTP_PATH . from("direct:start") .to("vertx-http:https://camel.apache.org"); 139.8. URI Parameters The vertx-http producer supports URI parameters to be sent to the HTTP server. The URI parameters can either be set directly on the endpoint URI, or as a header with the key Exchange.HTTP_QUERY on the message. 139.9. Response code Camel handles the response according to the HTTP response code: Response code is in the range 100..299, Camel regards it as a success response. Response code is in the range 300..399, Camel regards it as a redirection response and will throw an HttpOperationFailedException with the information. Response code is 400+, Camel regards it as an external server failure and will throw an HttpOperationFailedException with the information. 139.10. throwExceptionOnFailure The option, throwExceptionOnFailure , can be set to false to prevent the HttpOperationFailedException from being thrown for failed response codes. This allows you to get any response from the remote server. 139.11. Exceptions The HttpOperationFailedException exception contains the following information: The HTTP status code The HTTP status line (text of the status code) Redirect location, if the server returned a redirect Response body as a java.lang.String , if the server provided a body as response 139.12. HTTP method The following algorithm determines the HTTP method to be used: Use method provided as endpoint configuration ( httpMethod ). Use method provided in header ( Exchange.HTTP_METHOD ). GET if query string is provided in header. GET if endpoint is configured with a query string. POST if there is data to send (body is not null ). GET otherwise. 139.13. HTTP form parameters You can send HTTP form parameters in one of two ways. Set the Exchange.CONTENT_TYPE header to the value application/x-www-form-urlencoded and ensure the message body is a String formatted as form variables. For example, param1=value1&param2=value2 . Set the message body as a MultiMap which allows you to configure form parameter names and values. A short sketch of the first approach is included at the end of this chapter. 139.14. Multipart form data You can upload text or binary files by setting the message body as a MultipartForm . 139.15. Customizing Vert.x Web Client options When finer control of the Vert.x Web Client configuration is required, you can bind a custom WebClientOptions instance to the registry. WebClientOptions options = new WebClientOptions().setMaxRedirects(5) .setIdleTimeout(10) .setConnectTimeout(3); camelContext.getRegistry().bind("clientOptions", options); Then reference the options on the vertx-http producer. from("direct:start") .to("vertx-http:http://localhost:8080?webClientOptions=#clientOptions") 139.15.1. SSL The Vert.x HTTP component supports SSL/TLS configuration through the Camel JSSE Configuration Utility . It is also possible to configure SSL options by providing a custom WebClientOptions . 139.16. Session Management Session management can be enabled via the sessionManagement URI option. When enabled, an in-memory cookie store is used to track cookies.
This can be overridden by providing a custom CookieStore via the cookieStore URI option. 139.17. Spring Boot Auto-Configuration The component supports 20 options, which are listed below. Name Description Default Type camel.component.vertx-http.allow-java-serialized-object Whether to allow java serialization when a request has the Content-Type application/x-java-serialized-object This is disabled by default. If you enable this, be aware that Java will deserialize the incoming data from the request. This can be a potential security risk. false Boolean camel.component.vertx-http.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.vertx-http.basic-auth-password The password to use for basic authentication. String camel.component.vertx-http.basic-auth-username The user name to use for basic authentication. String camel.component.vertx-http.bearer-token The bearer token to use for bearer token authentication. String camel.component.vertx-http.enabled Whether to enable auto configuration of the vertx-http component. This is enabled by default. Boolean camel.component.vertx-http.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.vertx-http.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.vertx-http.proxy-host The proxy server host address. String camel.component.vertx-http.proxy-password The proxy server password if authentication is required. String camel.component.vertx-http.proxy-port The proxy server port. Integer camel.component.vertx-http.proxy-type The proxy server type. ProxyType camel.component.vertx-http.proxy-username The proxy server username if authentication is required. String camel.component.vertx-http.response-payload-as-byte-array Whether the response body should be byte or as io.vertx.core.buffer.Buffer. true Boolean camel.component.vertx-http.ssl-context-parameters To configure security using SSLContextParameters. The option is a org.apache.camel.support.jsse.SSLContextParameters type. SSLContextParameters camel.component.vertx-http.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.vertx-http.vertx To use an existing vertx instead of creating a new instance. The option is a io.vertx.core.Vertx type. Vertx camel.component.vertx-http.vertx-http-binding A custom VertxHttpBinding which can control how to bind between Vert.x and Camel. The option is a org.apache.camel.component.vertx.http.VertxHttpBinding type. 
VertxHttpBinding camel.component.vertx-http.vertx-options To provide a custom set of vertx options for configuring vertx. The option is a io.vertx.core.VertxOptions type. VertxOptions camel.component.vertx-http.web-client-options To provide a custom set of options for configuring vertx web client. The option is a io.vertx.ext.web.client.WebClientOptions type. WebClientOptions
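As a brief illustration of the form parameter support described in the HTTP form parameters section above, the following sketch posts URL-encoded form data with the vertx-http producer. The route, the endpoint URL, and the parameter values are hypothetical and only show the general pattern; adjust them to your own application.
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class FormPostRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Post URL-encoded form parameters to a hypothetical login endpoint.
        from("direct:login")
            .setHeader(Exchange.CONTENT_TYPE, constant("application/x-www-form-urlencoded"))
            .setBody(constant("username=camel&password=secret"))
            .to("vertx-http:http://localhost:8080/login");
    }
}
Alternatively, the message body can be set to a Vert.x MultiMap when the parameter names and values are built programmatically, as noted in the section above.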
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-vertx-http-starter</artifactId> </dependency>", "vertx-http:hostname[:port][/resourceUri][?options]", "vertx-http:httpUri", "from(\"direct:start\") .to(\"vertx-http:https://camel.apache.org\");", "WebClientOptions options = new WebClientOptions().setMaxRedirects(5) .setIdleTimeout(10) .setConnectTimeout(3); camelContext.getRegistry.bind(\"clientOptions\", options);", "from(\"direct:start\") .to(\"vertx-http:http://localhost:8080?webClientOptions=#clientOptions\")" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-vertx-http-component-starter
Chapter 12. Upgrading
Chapter 12. Upgrading For version upgrades, the Red Hat build of OpenTelemetry Operator uses the Operator Lifecycle Manager (OLM), which controls installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs in the OpenShift Container Platform by default. The OLM queries for available Operators as well as upgrades for installed Operators. When the Red Hat build of OpenTelemetry Operator is upgraded to the new version, it scans for running OpenTelemetry Collector instances that it manages and upgrades them to the version corresponding to the Operator's new version. 12.1. Additional resources Operator Lifecycle Manager concepts and resources Updating installed Operators
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/red_hat_build_of_opentelemetry/dist-tracing-otel-updating
Part VI. Reference material
Part VI. Reference material
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/reference_material
21.3. Installation Network Parameters
21.3. Installation Network Parameters The following parameters can be used to set up the preliminary network automatically and can be defined in the CMS configuration file. The parameters in this section are the only parameters that can also be used in a CMS configuration file. All other parameters in other sections must be specified in the parameter file. NETTYPE=" type " Where type must be one of the following: qeth , lcs , or ctc . The default is qeth . Choose lcs for: OSA-2 Ethernet/Token Ring OSA-Express Fast Ethernet in non-QDIO mode OSA-Express High Speed Token Ring in non-QDIO mode Gigabit Ethernet in non-QDIO mode Choose qeth for: OSA-Express Fast Ethernet Gigabit Ethernet (including 1000Base-T) High Speed Token Ring HiperSockets ATM (running Ethernet LAN emulation) SUBCHANNELS=" device_bus_IDs " Where device_bus_IDs is a comma-separated list of two or three device bus IDs. The IDs must be specified in lowercase. Provides required device bus IDs for the various network interfaces: For example (a sample qeth SUBCHANNEL statement): PORTNAME=" osa_portname " , PORTNAME=" lcs_portnumber " This variable supports OSA devices operating in qdio mode or in non-qdio mode. When using qdio mode ( NETTYPE="qeth" ), osa_portname is the portname specified on the OSA device when operating in qeth mode. When using non-qdio mode ( NETTYPE="lcs" ), lcs_portnumber is used to pass the relative port number as a decimal integer in the range of 0 through 15. PORTNO=" portnumber " You can add either PORTNO="0" (to use port 0) or PORTNO="1" (to use port 1 of OSA features with two ports per CHPID) to the CMS configuration file to avoid being prompted for the mode. LAYER2=" value " Where value can be 0 or 1 . Use LAYER2="0" to operate an OSA or HiperSockets device in layer 3 mode ( NETTYPE="qeth" ). Use LAYER2="1" for layer 2 mode. For virtual network devices under z/VM this setting must match the definition of the GuestLAN or VSWITCH to which the device is coupled. To use network services that operate on layer 2 (the Data Link Layer or its MAC sublayer) such as DHCP, layer 2 mode is a good choice. The qeth device driver default for OSA devices is now layer 2 mode. To continue using the default of layer 3 mode, set LAYER2="0" explicitly. VSWITCH=" value " Where value can be 0 or 1 . Specify VSWITCH="1" when connecting to a z/VM VSWITCH or GuestLAN, or VSWITCH="0" (or nothing at all) when using directly attached real OSA or directly attached real HiperSockets. MACADDR=" MAC_address " If you specify LAYER2="1" and VSWITCH="0" , you can optionally use this parameter to specify a MAC address. Linux requires six colon-separated octets as pairs lower case hex digits - for example, MACADDR=62:a3:18:e7:bc:5f . Note that this is different from the notation used by z/VM. If you specify LAYER2="1" and VSWITCH="1" , you must not specify the MACADDR , because z/VM assigns a unique MAC address to virtual network devices in layer 2 mode. CTCPROT=" value " Where value can be 0 , 1 , or 3 . Specifies the CTC protocol for NETTYPE="ctc" . The default is 0 . HOSTNAME=" string " Where string is the host name of the newly-installed Linux instance. IPADDR=" IP " Where IP is the IP address of the new Linux instance. NETMASK=" netmask " Where netmask is the netmask. The netmask supports the syntax of a prefix integer (from 1 to 32) as specified in IPv4 classless interdomain routing (CIDR). For example, you can specify 24 instead of 255.255.255.0 , or 20 instead of 255.255.240.0 . 
GATEWAY=" gw " Where gw is the gateway IP address for this network device. MTU=" mtu " Where mtu is the Maximum Transmission Unit (MTU) for this network device. DNS=" server1 : server2 : additional_server_terms : serverN " Where " server1 : server2 : additional_server_terms : serverN " is a list of DNS servers, separated by colons. For example: SEARCHDNS=" domain1 : domain2 : additional_dns_terms : domainN " Where " domain1 : domain2 : additional_dns_terms : domainN " is a list of the search domains, separated by colons. For example: You only need to specify SEARCHDNS= if you specify the DNS= parameter. DASD= Defines the DASD or range of DASDs to configure for the installation. The installation program supports a comma-separated list of device bus IDs or of ranges of device bus IDs with the optional attributes ro , diag , erplog , and failfast . Optionally, you can abbreviate device bus IDs to device numbers with leading zeros stripped. Any optional attributes should be separated by colons and enclosed in parentheses. Optional attributes follow a device bus ID or a range of device bus IDs. The only supported global option is autodetect . This does not support the specification of non-existent DASDs to reserve kernel device names for later addition of DASDs. Use persistent DASD device names (for example /dev/disk/by-path/... ) to enable transparent addition of disks later. Other global options such as probeonly , nopav , or nofcx are not supported by the installation program. Only specify those DASDs that you really need to install your system. All unformatted DASDs specified here must be formatted after a confirmation later on in the installation program (see Section 18.16.1.1, "DASD Low-level Formatting" ). Add any data DASDs that are not needed for the root file system or the /boot partition after installation as described in Section 20.1.3.2, "DASDs That Are Not Part of the Root File System" . For example: For FCP-only environments, remove the DASD= option from the CMS configuration file to indicate no DASD is present. FCP_ n =" device_bus_ID WWPN FCP_LUN " Where: n is typically an integer value (for example FCP_1 or FCP_2 ) but could be any string with alphabetic or numeric characters or underscores. device_bus_ID specifies the device bus ID of the FCP device representing the host bus adapter (HBA) (for example 0.0.fc00 for device fc00). WWPN is the world wide port name used for routing (often in conjunction with multipathing) and is as a 16-digit hex value (for example 0x50050763050b073d ). FCP_LUN refers to the storage logical unit identifier and is specified as a 16-digit hexadecimal value padded with zeroes to the right (for example 0x4020400100000000 ). These variables can be used on systems with FCP devices to activate FCP LUNs such as SCSI disks. Additional FCP LUNs can be activated during the installation interactively or by means of a Kickstart file. An example value looks similar to the following: Important Each of the values used in the FCP parameters (for example FCP_1 or FCP_2 ) are site-specific and are normally supplied by the FCP storage administrator. The installation program prompts you for any required parameters not specified in the parameter or configuration file except for FCP_n.
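To illustrate how these parameters fit together, the following is a hypothetical CMS configuration file fragment for a qeth device operating in layer 2 mode behind a z/VM VSWITCH. All device bus IDs, addresses, and names are placeholders; replace them with the site-specific values supplied by your network and storage administrators.
NETTYPE="qeth"
SUBCHANNELS="0.0.f5f0,0.0.f5f1,0.0.f5f2"
LAYER2="1"
VSWITCH="1"
HOSTNAME="linux01.example.com"
IPADDR="192.168.17.42"
NETMASK="255.255.255.0"
GATEWAY="192.168.17.1"
DNS="192.168.17.10:192.168.17.11"
SEARCHDNS="example.com"
DASD="0.0.0200-0.0.0203"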
[ "qeth: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id , data_device_bus_id \" lcs or ctc: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id \"", "SUBCHANNELS=\"0.0.f5f0,0.0.f5f1,0.0.f5f2\"", "DNS=\"10.1.2.3:10.3.2.1\"", "SEARCHDNS=\"subdomain.domain:domain\"", "DASD=\"eb1c,0.0.a000-0.0.a003,eb10-eb14(diag),0.0.ab1c(ro:diag)\"", "FCP_1=\"0.0.fc00 0x50050763050b073d 0x4020400100000000\"" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-parameter-configuration-files-network-s390
Chapter 9. Detecting Dead Connections
Chapter 9. Detecting Dead Connections Sometimes clients stop unexpectedly and do not have a chance to clean up their resources. If this occurs, it can leave resources in a faulty state and result in the broker running out of memory or other system resources. The broker detects that a client's connection was not properly shut down at garbage collection time. The connection is then closed and a message similar to the one below is written to the log. The log captures the exact line of code where the client session was instantiated. This enables you to identify the error and correct it. 1 The line in the client code where the connection was instantiated. 9.1. Connection Time-To-Live Because the network connection between the client and the server can fail and then come back online, allowing a client to reconnect, AMQ Broker waits to clean up inactive server-side resources. This wait period is called a time-to-live (TTL). The default TTL for a network-based connection is 60000 milliseconds (1 minute). The default TTL on an in-VM connection is -1 , which means the broker never times out the connection on the broker side. Configuring Time-To-Live on the Broker If you do not want clients to specify their own connection TTL, you can set a global value on the broker side. This can be done by specifying the connection-ttl-override element in the broker configuration. The logic to check connections for TTL violations runs periodically on the broker, as determined by the connection-ttl-check-interval element. Procedure Edit <broker_instance_dir> /etc/broker.xml by adding the connection-ttl-override configuration element and providing a value for the time-to-live, as in the example below. <configuration> <core> ... <connection-ttl-override>30000</connection-ttl-override> 1 <connection-ttl-check-interval>1000</connection-ttl-check-interval> 2 ... </core> </configuration> 1 The global TTL for all connections is set to 30000 milliseconds. The default value is -1 , which allows clients to set their own TTL. 2 The interval between checks for dead connections is set to 1000 milliseconds. By default, the checks are done every 2000 milliseconds. 9.2. Disabling Asynchronous Connection Execution Most packets received on the broker side are executed on the remoting thread. These packets represent short-running operations and are always executed on the remoting thread for performance reasons. However, some packet types are executed using a thread pool instead of the remoting thread, which adds a little network latency. The packet types that use the thread pool are implemented within the Java classes listed below. The classes are all found in the package org.apache.activemq.artemis.core.protocol.core.impl.wireformat . RollbackMessage SessionCloseMessage SessionCommitMessage SessionXACommitMessage SessionXAPrepareMessage SessionXARollbackMessage Procedure To disable asynchronous connection execution, add the async-connection-execution-enabled configuration element to <broker_instance_dir> /etc/broker.xml and set it to false , as in the example below. The default value is true . <configuration> <core> ... <async-connection-execution-enabled>false</async-connection-execution-enabled> ... </core> </configuration> Additional resources To learn how to configure the AMQ Core Protocol JMS client to detect dead connections, see Detecting dead connections in the AMQ Core Protocol JMS documentation.
To learn how to configure a connection time-to-live in the AMQ Core Protocol JMS client, see Configuring time-to-live in the AMQ Core Protocol JMS documentation.
[ "[Finalizer] 20:14:43,244 WARNING [org.apache.activemq.artemis.core.client.impl.DelegatingSession] I'm closing a JMS Conection you left open. Please make sure you close all connections explicitly before let ting them go out of scope! [Finalizer] 20:14:43,244 WARNING [org.apache.activemq.artemis.core.client.impl.DelegatingSession] The session you didn't close was created here: java.lang.Exception at org.apache.activemq.artemis.core.client.impl.DelegatingSession.<init>(DelegatingSession.java:83) at org.acme.yourproject.YourClass (YourClass.java:666) 1", "<configuration> <core> <connection-ttl-override>30000</connection-ttl-override> 1 <connection-ttl-check-interval>1000</connection-ttl-check-interval> 2 </core> </configuration>", "<configuration> <core> <async-connection-execution-enabled>false</async-connection-execution-enabled> </core> </configuration>" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/configuring_amq_broker/dead_connections
Chapter 6. Additional Resources
Chapter 6. Additional Resources This chapter provides references to other relevant sources of information about Red Hat Software Collections 3.8 and Red Hat Enterprise Linux. 6.1. Red Hat Product Documentation The following documents are directly or indirectly relevant to this book: Red Hat Software Collections 3.8 Packaging Guide - The Packaging Guide for Red Hat Software Collections explains the concept of Software Collections, documents the scl utility, and provides a detailed explanation of how to create a custom Software Collection or extend an existing one. Red Hat Developer Toolset 12.1 Release Notes - The Release Notes for Red Hat Developer Toolset document known problems, possible issues, changes, and other important information about this Software Collection. Red Hat Developer Toolset 12 User Guide - The User Guide for Red Hat Developer Toolset contains more information about installing and using this Software Collection. Using Red Hat Software Collections Container Images - This book provides information on how to use container images based on Red Hat Software Collections. The available container images include applications, daemons, databases, as well as the Red Hat Developer Toolset container images. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. Getting Started with Containers - This guide contains a comprehensive overview of information about building and using container images on Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host. Using Red Hat Subscription Management - The Using Red Hat Subscription Management book provides detailed information on how to register Red Hat Enterprise Linux systems, manage subscriptions, and view notifications for the registered systems. Red Hat Enterprise Linux 7 System Administrator's Guide - The System Administrator's Guide for Red Hat Enterprise Linux 7 provides information on deployment, configuration, and administration of this system. 6.2. Red Hat Developers Red Hat Developer Program - The Red Hat Developers community portal. Overview of Red Hat Software Collections on Red Hat Developers - The Red Hat Developers portal provides a number of tutorials to get you started with developing code using different development technologies. This includes the Node.js, Perl, PHP, Python, and Ruby Software Collections. Red Hat Developer Blog - The Red Hat Developer Blog contains up-to-date information, best practices, opinion, product and program announcements as well as pointers to sample code and other resources for those who are designing and developing applications based on Red Hat technologies.
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.8_release_notes/chap-additional_resources
8.130. opencryptoki
8.130. opencryptoki 8.130.1. RHBA-2013:1592 - opencryptoki bug fix and enhancement update Updated opencryptoki packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The opencryptoki packages contain version 2.11 of the PKCS#11 API, implemented for IBM Cryptocards. This package includes support for the IBM 4758 Cryptographic CoProcessor (with the PKCS#11 firmware loaded), the IBM eServer Cryptographic Accelerator (FC 4960 on IBM eServer System p), the IBM Crypto Express2 (FC 0863 or FC 0870 on IBM System z), and the IBM CP Assist for Cryptographic Function (FC 3863 on IBM System z). Note The opencryptoki package has been upgraded to upstream version 2.4.3.1, which, compared to the previous version, provides support for the SHA-2 hash algorithms in the ICA token and adds fixes for the SHA-2-based certificates in the CCA token. (BZ# 948349 ) Users of opencryptoki are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/opencryptoki
7.4. Starting the audit Service
7.4. Starting the audit Service Once auditd is properly configured, start the service to collect Audit information and store it in the log files. Execute the following command as the root user to start auditd : Optionally, you can configure auditd to start at boot time using the following command as the root user: A number of other actions can be performed on auditd using the service auditd action command, where action can be one of the following: stop - stops auditd . restart - restarts auditd . reload or force-reload - reloads the configuration of auditd from the /etc/audit/auditd.conf file. rotate - rotates the log files in the /var/log/audit/ directory. resume - resumes logging of Audit events after it has been previously suspended, for example, when there is not enough free space on the disk partition that holds the Audit log files. condrestart or try-restart - restarts auditd only if it is already running. status - displays the running status of auditd .
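For example, to rotate the Audit log files manually without restarting the daemon, run the following command as the root user (shown only to illustrate the action syntax; the other actions listed above are invoked the same way):
~]# service auditd rotate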
[ "~]# service auditd start", "~]# chkconfig auditd on" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-starting_the_audit_service
probe::tty.init
probe::tty.init Name probe::tty.init - Called when a tty is being initialized Synopsis tty.init Values name - the driver .dev_name name module - the module name driver_name - the driver name
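A minimal SystemTap script using this probe might look as follows; the output format is illustrative only. Save it to a file such as tty_init.stp and run it with stap tty_init.stp .
# Print the driver and module details every time a tty is initialized.
probe tty.init {
    printf("tty init: name=%s module=%s driver=%s\n", name, module, driver_name)
}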
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tty-init
Chapter 5. Profiling
Chapter 5. Profiling Developers profile programs to focus attention on the areas of the program that have the largest impact on performance. The types of data collected include what section of the program consumes the most processor time, and where memory is allocated. Profiling collects data from the actual program execution. Thus, the quality of the data collected is influenced by the actual tasks being performed by the program. The tasks performed during profiling should be representative of actual use; this ensures that problems arising from realistic use of the program are addressed during development. Red Hat Enterprise Linux 6 includes a number of different tools (Valgrind, OProfile, perf , and SystemTap) to collect profiling data. Each tool is suitable for performing specific types of profile runs, as described in the following sections. 5.1. Valgrind Valgrind is an instrumentation framework for building dynamic analysis tools that can be used to profile applications in detail. The default installation already provides five standard tools. Valgrind tools are generally used to investigate memory management and threading problems. The Valgrind suite also includes tools that allow the building of new profiling tools as required. Valgrind provides instrumentation for user-space binaries to check for errors, such as the use of uninitialized memory, improper allocation/freeing of memory, and improper arguments for system calls. Its profiling tools can be used by normal users on most binaries; however, compared to other profilers, Valgrind profile runs are significantly slower. To profile a binary, Valgrind rewrites its executable and instruments the rewritten binary. Valgrind 's tools are most useful for looking for memory-related issues in user-space programs; it is not suitable for debugging time-specific issues or kernel-space instrumentation and debugging. Valgrind reports are most useful and accurate when debuginfo packages are installed for the programs or libraries under investigation. See Section 4.2, "Installing Debuginfo Packages" . 5.1.1. Valgrind Tools The Valgrind suite is composed of the following tools: memcheck This tool detects memory management problems in programs by checking all reads from and writes to memory and intercepting all calls to malloc , new , free , and delete . memcheck is perhaps the most used Valgrind tool, as memory management problems can be difficult to detect using other means. Such problems often remain undetected for long periods, eventually causing crashes that are difficult to diagnose. cachegrind cachegrind is a cache profiler that accurately pinpoints sources of cache misses in code by performing a detailed simulation of the I1, D1 and L2 caches in the CPU. It shows the number of cache misses, memory references, and instructions accruing to each line of source code; cachegrind also provides per-function, per-module, and whole-program summaries, and can even show counts for each individual machine instruction. callgrind Like cachegrind , callgrind can model cache behavior. However, the main purpose of callgrind is to record call graph data for the executed code. massif massif is a heap profiler; it measures how much heap memory a program uses, providing information on heap blocks, heap administration overheads, and stack sizes. Heap profilers are useful in finding ways to reduce heap memory usage.
On systems that use virtual memory, programs with optimized heap memory usage are less likely to run out of memory, and may be faster as they require less paging. helgrind In programs that use the POSIX pthreads threading primitives, helgrind detects synchronization errors. Such errors are: Misuses of the POSIX pthreads API Potential deadlocks arising from lock ordering problems Data races (that is, accessing memory without adequate locking) Valgrind also allows you to develop your own profiling tools. In line with this, Valgrind includes the lackey tool, which is a sample that can be used as a template for generating your own tools. 5.1.2. Using Valgrind The valgrind package and its dependencies install all the necessary tools for performing a Valgrind profile run. To profile a program with Valgrind , use: See Section 5.1.1, "Valgrind Tools" for a list of arguments for toolname . In addition to the suite of Valgrind tools, none is also a valid argument for toolname ; this argument allows you to run a program under Valgrind without performing any profiling. This is useful for debugging or benchmarking Valgrind itself. You can also instruct Valgrind to send all of its information to a specific file. To do so, use the option --log-file= filename . For example, to check the memory usage of the executable file hello and send profile information to output , use: See Section 5.1.3, "Additional information" for more information on Valgrind , along with other available documentation on the Valgrind suite of tools. 5.1.3. Additional information For more extensive information on Valgrind , see man valgrind . Red Hat Enterprise Linux also provides a comprehensive Valgrind Documentation book available as PDF and HTML in: /usr/share/doc/valgrind- version /valgrind_manual.pdf /usr/share/doc/valgrind- version /html/index.html
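As a further illustration of choosing a tool from the suite, the following commands profile the cache behavior of a hypothetical binary named hello with cachegrind and then annotate the results per source line with cg_annotate ; the process ID in the output file name will differ on your system.
~]$ valgrind --tool=cachegrind ./hello
~]$ cg_annotate cachegrind.out.<pid>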
[ "~]USD valgrind --tool= toolname program", "~]USD valgrind --tool=memcheck --log-file=output hello" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/profiling
Chapter 8. Networking
Chapter 8. Networking 8.1. Networking overview OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins. Virtual machines (VMs) are integrated with OpenShift Container Platform networking and its ecosystem. 8.1.1. OpenShift Virtualization networking glossary The following terms are used throughout OpenShift Virtualization documentation: Container Network Interface (CNI) A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality. Multus A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Custom resource definition (CRD) A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource. Network attachment definition (NAD) A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks. Node network configuration policy (NNCP) A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. 8.1.2. Using the default pod network Connecting a virtual machine to the default pod network Each VM is connected by default to the default internal pod network. You can add or remove network interfaces by editing the VM specification. Exposing a virtual machine as a service You can expose a VM within the cluster or outside the cluster by creating a Service object. For on-premise clusters, you can configure a load balancing service by using the MetalLB Operator. You can install the MetalLB Operator by using the OpenShift Container Platform web console or the CLI. 8.1.3. Configuring VM secondary network interfaces Connecting a virtual machine to a Linux bridge network Install the Kubernetes NMState Operator to configure Linux bridges, VLANs, and bondings for your secondary networks. You can create a Linux bridge network and attach a VM to the network by performing the following steps: Configure a Linux bridge network device by creating a NodeNetworkConfigurationPolicy custom resource definition (CRD). Configure a Linux bridge network by creating a NetworkAttachmentDefinition CRD. Connect the VM to the Linux bridge network by including the network details in the VM configuration. Connecting a virtual machine to an SR-IOV network You can use Single Root I/O Virtualization (SR-IOV) network devices with additional networks on your OpenShift Container Platform cluster installed on bare metal or Red Hat OpenStack Platform (RHOSP) infrastructure for applications that require high bandwidth or low latency. You must install the SR-IOV Network Operator on your cluster to manage SR-IOV network devices and network attachments. You can connect a VM to an SR-IOV network by performing the following steps: Configure an SR-IOV network device by creating a SriovNetworkNodePolicy CRD. Configure an SR-IOV network by creating an SriovNetwork object. Connect the VM to the SR-IOV network by including the network details in the VM configuration. Connecting a virtual machine to an OVN-Kubernetes secondary network You can connect a VM to an Open Virtual Network (OVN)-Kubernetes secondary network. 
To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps: Configure an OVN-Kubernetes secondary network by creating a NetworkAttachmentDefinition CRD. Connect the VM to the OVN-Kubernetes secondary network by adding the network details to the VM specification. Hot plugging secondary network interfaces You can add or remove secondary network interfaces without stopping your VM. OpenShift Virtualization supports hot plugging and hot unplugging for Linux bridge interfaces that use the VirtIO device driver. Using DPDK with SR-IOV The Data Plane Development Kit (DPDK) provides a set of libraries and drivers for fast packet processing. You can configure clusters and VMs to run DPDK workloads over SR-IOV networks. Configuring a dedicated network for live migration You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration. Accessing a virtual machine by using the cluster FQDN You can access a VM that is attached to a secondary network interface from outside the cluster by using its fully qualified domain name (FQDN). Configuring and viewing IP addresses You can configure an IP address of a secondary network interface when you create a VM. The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line. The network information is collected by the QEMU guest agent. 8.1.4. Integrating with OpenShift Service Mesh Connecting a virtual machine to a service mesh OpenShift Virtualization is integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods and virtual machines. 8.1.5. Managing MAC address pools Managing MAC address pools for network interfaces The KubeMacPool component allocates MAC addresses for VM network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots. 8.1.6. Configuring SSH access Configuring SSH access to virtual machines You can configure SSH access to VMs by using the following methods: virtctl ssh command You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key. You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source. virtctl port-forward command You add the virtctl port-foward command to your .ssh/config file and connect to the VM by using OpenSSH. Service You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service. Secondary network You configure a secondary network, attach a VM to the secondary network interface, and connect to its allocated IP address. 8.2. Connecting a virtual machine to the default pod network You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode. Note Traffic passing through network interfaces to the default pod network is interrupted during live migration. 8.2.1. Configuring masquerade mode from the command line You can use masquerade mode to hide a virtual machine's outgoing traffic behind the pod IP address. 
Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge. Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file. Prerequisites The virtual machine must be configured to use DHCP to acquire IPv4 addresses. Procedure Edit the interfaces spec of your virtual machine configuration file: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 # ... networks: - name: default pod: {} 1 Connect using masquerade mode. 2 Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 0 and 65536. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80 . Note Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped. Create the virtual machine: USD oc create -f <vm-name>.yaml 8.2.2. Configuring masquerade mode with dual-stack (IPv4 and IPv6) You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init. The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The Network.pod.vmIPv6NetworkCIDR field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120 . You can edit this value based on your network requirements. When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine. Prerequisites The OpenShift Container Platform cluster must use the OVN-Kubernetes Container Network Interface (CNI) network plugin configured for dual-stack. Procedure In a new virtual machine configuration, include an interface with masquerade and configure the IPv6 address and default gateway by using cloud-init. apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 # ... networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4 1 Connect using masquerade mode. 2 Allows incoming traffic on port 80 to the virtual machine. 3 The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120 . 4 The gateway IP address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::1 . 
Create the virtual machine in the namespace: USD oc create -f example-vm-ipv6.yaml Verification To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address: USD oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}" 8.2.3. About jumbo frames support When using the OVN-Kubernetes CNI plugin, you can send unfragmented jumbo frame packets between two virtual machines (VMs) that are connected on the default pod network. Jumbo frames have a maximum transmission unit (MTU) value greater than 1500 bytes. The VM automatically gets the MTU value of the cluster network, set by the cluster administrator, in one of the following ways: libvirt : If the guest OS has the latest version of the VirtIO driver that can interpret incoming data via a Peripheral Component Interconnect (PCI) config register in the emulated device. DHCP: If the guest DHCP client can read the MTU value from the DHCP server response. Note For Windows VMs that do not have a VirtIO driver, you must set the MTU manually by using netsh or a similar tool. This is because the Windows DHCP client does not read the MTU value. 8.2.4. Additional resources Changing the MTU for the cluster network Optimizing the MTU for your network 8.3. Exposing a virtual machine by using a service You can expose a virtual machine within the cluster or outside the cluster by creating a Service object. 8.3.1. About services A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort and LoadBalancer types, exposure to the outside world. ClusterIP Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client's request is load balanced among available backends. ClusterIP is the default service type. NodePort Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client. LoadBalancer Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service. Note For on-premise clusters, you can configure a load-balancing service by deploying the MetalLB Operator. Additional resources Installing the MetalLB Operator Configuring services to use MetalLB 8.3.2. Dual-stack support If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object. The spec.ipFamilyPolicy field can be set to one of the following values: SingleStack The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range. PreferDualStack The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured. RequireDualStack This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack . The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges. 
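As an illustration of these fields, the following Service manifest sketch requests dual-stack addresses with IPv4 listed first; the service name, selector, and ports are hypothetical, and the spec.ipFamilies field is described in the text that follows.
apiVersion: v1
kind: Service
metadata:
  name: example-dual-stack-service
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    special: key
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376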
You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies field to one of the following array values: [IPv4] [IPv6] [IPv4, IPv6] [IPv6, IPv4] 8.3.3. Creating a service by using the command line You can create a service and associate it with a virtual machine (VM) by using the command line. Prerequisites You configured the cluster network to support the service. Procedure Edit the VirtualMachine manifest to add the label for service creation: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key 1 # ... 1 Add special: key to the spec.template.metadata.labels stanza. Note Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest. Save the VirtualMachine manifest file to apply your changes. Create a Service manifest to expose the VM: apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: # ... selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000 1 Specify the label that you added to the spec.template.metadata.labels stanza of the VirtualMachine manifest. 2 Specify ClusterIP , NodePort , or LoadBalancer . 3 Specifies a collection of network ports and protocols that you want to expose from the virtual machine. Save the Service manifest file. Create the service by running the following command: USD oc create -f example-service.yaml Restart the VM to apply the changes. Verification Query the Service object to verify that it is available: USD oc get service -n example-namespace 8.3.4. Additional resources Configuring ingress cluster traffic using a NodePort Configuring ingress cluster traffic using a load balancer 8.4. Connecting a virtual machine to a Linux bridge network By default, OpenShift Virtualization is installed with a single, internal pod network. You can create a Linux bridge network and attach a virtual machine (VM) to the network by performing the following steps: Create a Linux bridge node network configuration policy (NNCP) . Create a Linux bridge network attachment definition (NAD) by using the web console or the command line . Configure the VM to recognize the NAD by using the web console or the command line . Note OpenShift Virtualization does not support Linux bridge bonding modes 0, 5, and 6. For more information, see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to? . 8.4.1. Creating a Linux bridge NNCP You can create a NodeNetworkConfigurationPolicy (NNCP) manifest for a Linux bridge network. Prerequisites You have installed the Kubernetes NMState Operator. Procedure Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8 1 Name of the policy. 2 Name of the interface. 3 Optional: Human-readable description of the interface. 4 The type of interface. This example creates a bridge. 5 The requested state for the interface after creation. 
6 Disables IPv4 in this example. 7 Disables STP in this example. 8 The node NIC to which the bridge is attached. 8.4.2. Creating a Linux bridge NAD You can create a Linux bridge network attachment definition (NAD) by using the OpenShift Container Platform web console or command line. 8.4.2.1. Creating a Linux bridge NAD by using the web console You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the OpenShift Container Platform web console. A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN. Warning Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported. Procedure In the web console, click Networking NetworkAttachmentDefinitions . Click Create Network Attachment Definition . Note The network attachment definition must be in the same namespace as the pod or virtual machine. Enter a unique Name and optional Description . Select CNV Linux bridge from the Network Type list. Enter the name of the bridge in the Bridge Name field. Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field. Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod. Click Create . 8.4.2.2. Creating a Linux bridge NAD by using the command line You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines (VMs) by using the command line. The NAD and the VM must be in the same namespace. Warning Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported. Prerequisites The node must support nftables and the nft binary must be deployed to enable MAC spoof check. Procedure Add the VM to the NetworkAttachmentDefinition configuration, as in the following example: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: bridge-network 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1 2 spec: config: | { "cniVersion": "0.3.1", "name": "bridge-network", 3 "type": "bridge", 4 "bridge": "br1", 5 "macspoofchk": false, 6 "vlan": 100, 7 "preserveDefaultVlan": false 8 } 1 The name for the NetworkAttachmentDefinition object. 2 Optional: Annotation key-value pair for node selection for the bridge configured on some nodes. If you add this annotation to your network attachment definition, your virtual machine instances will only run on the nodes that have the defined bridge connected. 3 The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition. 4 The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI. 5 The name of the Linux bridge configured on the node. The name should match the interface bridge name defined in the NodeNetworkConfigurationPolicy manifest. 6 Optional: A flag to enable the MAC spoof check. When set to true , you cannot change the MAC address of the pod or guest interface. This attribute allows only a single MAC address to exit the pod, which provides security against a MAC spoofing attack. 7 Optional: The VLAN tag. 
No additional VLAN configuration is required on the node network configuration policy. 8 Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is true . Note A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN. Create the network attachment definition: USD oc create -f network-attachment-definition.yaml 1 1 Where network-attachment-definition.yaml is the file name of the network attachment definition manifest. Verification Verify that the network attachment definition was created by running the following command: USD oc get network-attachment-definition bridge-network 8.4.3. Configuring a VM network interface You can configure a virtual machine (VM) network interface by using the OpenShift Container Platform web console or command line. 8.4.3.1. Configuring a VM network interface by using the web console You can configure a network interface for a virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You created a network attachment definition for the network. Procedure Navigate to Virtualization VirtualMachines . Click a VM to view the VirtualMachine details page. On the Configuration tab, click the Network interfaces tab. Click Add network interface . Enter the interface name and select the network attachment definition from the Network list. Click Save . Restart the VM to apply the changes. Networking fields Name Description Name Name for the network interface controller. Model Indicates the model of the network interface controller. Supported values are e1000e and virtio . Network List of available network attachment definitions. Type List of available binding methods. Select the binding method suitable for the network interface: Default pod network: masquerade Linux bridge network: bridge SR-IOV network: SR-IOV MAC Address MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. 8.4.3.2. Configuring a VM network interface by using the command line You can configure a virtual machine (VM) network interface for a bridge network by using the command line. Prerequisites Shut down the virtual machine before editing the configuration. If you edit a running virtual machine, you must restart the virtual machine for the changes to take effect. Procedure Add the bridge interface and the network attachment definition to the VM configuration as in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - masquerade: {} name: default - bridge: {} name: bridge-net 1 # ... networks: - name: default pod: {} - name: bridge-net 2 multus: networkName: a-bridge-network 3 1 The name of the bridge interface. 2 The name of the network. This value must match the name value of the corresponding spec.template.spec.domain.devices.interfaces entry. 3 The name of the network attachment definition. Apply the configuration: USD oc apply -f example-vm.yaml Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 8.5. Connecting a virtual machine to an SR-IOV network You can connect a virtual machine (VM) to a Single Root I/O Virtualization (SR-IOV) network by performing the following steps: Configuring an SR-IOV network device Configuring an SR-IOV network Connecting the VM to the SR-IOV network 8.5.1. 
Configuring SR-IOV network devices The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR). Note When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes. It might take several minutes for a configuration change to apply. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. You have enough available nodes in your cluster to handle the evicted workload from drained nodes. You have not selected any control plane nodes for SR-IOV network device configuration. Procedure Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: "<vendor_code>" 9 deviceID: "<device_id>" 10 pfNames: ["<pf_name>", ...] 11 rootDevices: ["<pci_bus_id>", "..."] 12 deviceType: vfio-pci 13 isRdma: false 14 1 Specify a name for the CR object. 2 Specify the namespace where the SR-IOV Operator is installed. 3 Specify the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name. 4 Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes. 5 Optional: Specify an integer value between 0 and 99 . A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99 . The default value is 99 . 6 Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models. 7 Specify the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127 . 8 The nicSelector mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices , you must also specify a value for vendor , deviceID , or pfNames . If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device. 9 Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are either 8086 or 15b3 . 10 Optional: Specify the device hex code of SR-IOV network device. The only allowed values are 158b , 1015 , 1017 . 11 Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device. 
12 The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1 . 13 The vfio-pci driver type is required for virtual functions in OpenShift Virtualization. 14 Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false . The default value is false . Note If isRDMA flag is set to true , you can continue to use the RDMA enabled VF as a normal network device. A device can be used in either mode. Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the SriovNetworkNodePolicy object: USD oc create -f <name>-sriov-node-network.yaml where <name> specifies the name for this configuration. After applying the configuration update, all the pods in sriov-network-operator namespace transition to the Running status. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured. USD oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}' 8.5.2. Configuring SR-IOV additional network You can configure an additional network that uses SR-IOV hardware by creating an SriovNetwork object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. Note Do not modify or delete an SriovNetwork object if it is attached to pods or virtual machines in a running state. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create the following SriovNetwork object, and then save the YAML in the <name>-sriov-network.yaml file. Replace <name> with a name for this additional network. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: "<spoof_check>" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: "<trust_vf>" 11 capabilities: <capabilities> 12 1 Replace <name> with a name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name. 2 Specify the namespace where the SR-IOV Network Operator is installed. 3 Replace <sriov_resource_name> with the value for the .spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network. 4 Replace <target_namespace> with the target namespace for the SriovNetwork. Only pods or virtual machines in the target namespace can attach to the SriovNetwork. 5 Optional: Replace <vlan> with a Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095 . The default value is 0 . 6 Optional: Replace <spoof_check> with the spoof check mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator. 7 Optional: Replace <link_state> with the link state of virtual function (VF). Allowed value are enable , disable and auto . 
8 Optional: Replace <max_tx_rate> with a maximum transmission rate, in Mbps, for the VF. 9 Optional: Replace <min_tx_rate> with a minimum transmission rate, in Mbps, for the VF. This value should always be less than or equal to Maximum transmission rate. Note Intel NICs do not support the minTxRate parameter. For more information, see BZ#1772847 . 10 Optional: Replace <vlan_qos> with an IEEE 802.1p priority level for the VF. The default value is 0 . 11 Optional: Replace <trust_vf> with the trust mode of the VF. The allowed values are the strings "on" and "off" . Important You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator. 12 Optional: Replace <capabilities> with the capabilities to configure for this network. To create the object, enter the following command. Replace <name> with a name for this additional network. USD oc create -f <name>-sriov-network.yaml Optional: To confirm that the NetworkAttachmentDefinition object associated with the SriovNetwork object that you created in the step exists, enter the following command. Replace <namespace> with the namespace you specified in the SriovNetwork object. USD oc get net-attach-def -n <namespace> 8.5.3. Connecting a virtual machine to an SR-IOV network You can connect the virtual machine (VM) to the SR-IOV network by including the network details in the VM configuration. Procedure Add the SR-IOV network details to the spec.domain.devices.interfaces and spec.networks stanzas of the VM configuration as in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: domain: devices: interfaces: - name: default masquerade: {} - name: nic1 1 sriov: {} networks: - name: default pod: {} - name: nic1 2 multus: networkName: sriov-network 3 # ... 1 Specify a unique name for the SR-IOV interface. 2 Specify the name of the SR-IOV interface. This must be the same as the interfaces.name that you defined earlier. 3 Specify the name of the SR-IOV network attachment definition. Apply the virtual machine configuration: USD oc apply -f <vm_sriov>.yaml 1 1 The name of the virtual machine YAML file. 8.5.4. Additional resources Configuring DPDK workloads for improved performance 8.6. Using DPDK with SR-IOV The Data Plane Development Kit (DPDK) provides a set of libraries and drivers for fast packet processing. You can configure clusters and virtual machines (VMs) to run DPDK workloads over SR-IOV networks. Important Running DPDK workloads is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 8.6.1. Configuring a cluster for DPDK workloads You can configure an OpenShift Container Platform cluster to run Data Plane Development Kit (DPDK) workloads for improved network performance. Prerequisites You have access to the cluster as a user with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed the SR-IOV Network Operator. You have installed the Node Tuning Operator. 
Procedure Map your compute nodes topology to determine which Non-Uniform Memory Access (NUMA) CPUs are isolated for DPDK applications and which ones are reserved for the operating system (OS). Label a subset of the compute nodes with a custom role; for example, worker-dpdk : USD oc label node <node_name> node-role.kubernetes.io/worker-dpdk="" Create a new MachineConfigPool manifest that contains the worker-dpdk label in the spec.machineConfigSelector object: Example MachineConfigPool manifest apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-dpdk labels: machineconfiguration.openshift.io/role: worker-dpdk spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-dpdk nodeSelector: matchLabels: node-role.kubernetes.io/worker-dpdk: "" Create a PerformanceProfile manifest that applies to the labeled nodes and the machine config pool that you created in the steps. The performance profile specifies the CPUs that are isolated for DPDK applications and the CPUs that are reserved for house keeping. Example PerformanceProfile manifest apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: profile-1 spec: cpu: isolated: 4-39,44-79 reserved: 0-3,40-43 globallyDisableIrqLoadBalancing: true hugepages: defaultHugepagesSize: 1G pages: - count: 8 node: 0 size: 1G net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-dpdk: "" numa: topologyPolicy: single-numa-node Note The compute nodes automatically restart after you apply the MachineConfigPool and PerformanceProfile manifests. Retrieve the name of the generated RuntimeClass resource from the status.runtimeClass field of the PerformanceProfile object: USD oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{"\n"}' Set the previously obtained RuntimeClass name as the default container runtime class for the virt-launcher pods by editing the HyperConverged custom resource (CR): USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type='json' -p='[{"op": "add", "path": "/spec/defaultRuntimeClass", "value":"<runtimeclass-name>"}]' Note Editing the HyperConverged CR changes a global setting that affects all VMs that are created after the change is applied. Create an SriovNetworkNodePolicy object with the spec.deviceType field set to vfio-pci : Example SriovNetworkNodePolicy manifest apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-1 namespace: openshift-sriov-network-operator spec: resourceName: intel_nics_dpdk deviceType: vfio-pci mtu: 9000 numVfs: 4 priority: 99 nicSelector: vendor: "8086" deviceID: "1572" pfNames: - eno3 rootDevices: - "0000:19:00.2" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" Additional resources Using CPU Manager and Topology Manager Configuring huge pages Creating a custom machine config pool 8.6.2. Configuring a project for DPDK workloads You can configure the project to run DPDK workloads on SR-IOV hardware. Prerequisites Your cluster is configured to run DPDK workloads. Procedure Create a namespace for your DPDK applications: USD oc create ns dpdk-checkup-ns Create an SriovNetwork object that references the SriovNetworkNodePolicy object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object. 
Example SriovNetwork manifest apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-sriovnetwork namespace: openshift-sriov-network-operator spec: ipam: | { "type": "host-local", "subnet": "10.56.217.0/24", "rangeStart": "10.56.217.171", "rangeEnd": "10.56.217.181", "routes": [{ "dst": "0.0.0.0/0" }], "gateway": "10.56.217.1" } networkNamespace: dpdk-checkup-ns 1 resourceName: intel_nics_dpdk 2 spoofChk: "off" trust: "on" vlan: 1019 1 The namespace where the NetworkAttachmentDefinition object is deployed. 2 The value of the spec.resourceName attribute of the SriovNetworkNodePolicy object that was created when configuring the cluster for DPDK workloads. Optional: Run the virtual machine latency checkup to verify that the network is properly configured. Optional: Run the DPDK checkup to verify that the namespace is ready for DPDK workloads. Additional resources Working with projects Virtual machine latency checkup DPDK checkup 8.6.3. Configuring a virtual machine for DPDK workloads You can run Data Packet Development Kit (DPDK) workloads on virtual machines (VMs) to achieve lower latency and higher throughput for faster packet processing in the user space. DPDK uses the SR-IOV network for hardware-based I/O sharing. Prerequisites Your cluster is configured to run DPDK workloads. You have created and configured the project in which the VM will run. Procedure Edit the VirtualMachine manifest to include information about the SR-IOV network interface, CPU topology, CRI-O annotations, and huge pages: Example VirtualMachine manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-dpdk-vm spec: running: true template: metadata: annotations: cpu-load-balancing.crio.io: disable 1 cpu-quota.crio.io: disable 2 irq-load-balancing.crio.io: disable 3 spec: domain: cpu: sockets: 1 4 cores: 5 5 threads: 2 dedicatedCpuPlacement: true isolateEmulatorThread: true interfaces: - masquerade: {} name: default - model: virtio name: nic-east pciAddress: '0000:07:00.0' sriov: {} networkInterfaceMultiqueue: true rng: {} memory: hugepages: pageSize: 1Gi 6 guest: 8Gi networks: - name: default pod: {} - multus: networkName: dpdk-net 7 name: nic-east # ... 1 This annotation specifies that load balancing is disabled for CPUs that are used by the container. 2 This annotation specifies that the CPU quota is disabled for CPUs that are used by the container. 3 This annotation specifies that Interrupt Request (IRQ) load balancing is disabled for CPUs that are used by the container. 4 The number of sockets inside the VM. This field must be set to 1 for the CPUs to be scheduled from the same Non-Uniform Memory Access (NUMA) node. 5 The number of cores inside the VM. This must be a value greater than or equal to 1 . In this example, the VM is scheduled with 5 hyper-threads or 10 CPUs. 6 The size of the huge pages. The possible values for x86-64 architecture are 1Gi and 2Mi. In this example, the request is for 8 huge pages of size 1Gi. 7 The name of the SR-IOV NetworkAttachmentDefinition object. Save and exit the editor. Apply the VirtualMachine manifest: USD oc apply -f <file_name>.yaml Configure the guest operating system. The following example shows the configuration steps for RHEL 8 OS: Configure huge pages by using the GRUB bootloader command-line interface. In the following example, 8 1G huge pages are specified. 
USD grubby --update-kernel=ALL --args="default_hugepagesz=1GB hugepagesz=1G hugepages=8" To achieve low-latency tuning by using the cpu-partitioning profile in the TuneD application, run the following commands: USD dnf install -y tuned-profiles-cpu-partitioning USD echo isolated_cores=2-9 > /etc/tuned/cpu-partitioning-variables.conf The first two CPUs (0 and 1) are set aside for house keeping tasks and the rest are isolated for the DPDK application. USD tuned-adm profile cpu-partitioning Override the SR-IOV NIC driver by using the driverctl device driver control utility: USD dnf install -y driverctl USD driverctl set-override 0000:07:00.0 vfio-pci Restart the VM to apply the changes. 8.7. Connecting a virtual machine to an OVN-Kubernetes secondary network You can connect a virtual machine (VM) to an Open Virtual Network (OVN)-Kubernetes secondary network. The OVN-Kubernetes Container Network Interface (CNI) plug-in uses the Geneve (Generic Network Virtualization Encapsulation) protocol to create an overlay network between nodes. OpenShift Virtualization currently supports the flat layer 2 topology. This topology connects workloads by a cluster-wide logical switch. You can use this overlay network to connect VMs on different nodes, without having to configure any additional physical networking infrastructure. To configure an OVN-Kubernetes secondary network and attach a VM to that network, perform the following steps: Create a network attachment definition (NAD) by using the web console or the CLI . Add information about the secondary network interface to the VM specification by using the web console or the CLI . 8.7.1. Creating an OVN-Kubernetes NAD You can create an OVN-Kubernetes flat layer 2 network attachment definition (NAD) by using the OpenShift Container Platform web console or the CLI. Note Configuring IP address management (IPAM) by specifying the spec.config.ipam.subnet attribute in a network attachment definition for virtual machines is not supported. 8.7.1.1. Creating a NAD for flat layer 2 topology using the CLI You can create a network attachment definition (NAD) which describes how to attach a pod to the layer 2 overlay network. Prerequisites You have access to the cluster as a user with cluster-admin privileges. You have installed the OpenShift CLI ( oc ). Procedure Create a NetworkAttachmentDefinition object: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: l2-network namespace: my-namespace spec: config: |2 { "cniVersion": "0.3.1", 1 "name": "my-namespace-l2-network", 2 "type": "ovn-k8s-cni-overlay", 3 "topology":"layer2", 4 "mtu": 1300, 5 "netAttachDefName": "my-namespace/l2-network" 6 } 1 The CNI specification version. The required value is 0.3.1 . 2 The name of the network. This attribute is not namespaced. For example, you can have a network named l2-network referenced from two different NetworkAttachmentDefinition objects that exist in two different namespaces. This feature is useful to connect VMs in different namespaces. 3 The name of the CNI plug-in to be configured. The required value is ovn-k8s-cni-overlay . 4 The topological configuration for the network. The required value is layer2 . 5 Optional: The maximum transmission unit (MTU) value. The default value is automatically set by the kernel. 6 The value of the namespace and name fields in the metadata stanza of the NetworkAttachmentDefinition object. Note The above example configures a cluster-wide overlay without a subnet defined. 
This means that the logical switch implementing the network only provides layer 2 communication. You must configure an IP address when you create the virtual machine by either setting a static IP address or by deploying a DHCP server on the network for a dynamic IP address. Apply the manifest: USD oc apply -f <filename>.yaml 8.7.2. Attaching a virtual machine to the OVN-Kubernetes secondary network You can attach a virtual machine (VM) to the OVN-Kubernetes secondary network interface by using the OpenShift Container Platform web console or the CLI. 8.7.2.1. Attaching a virtual machine to an OVN-Kubernetes secondary network using the CLI You can connect a virtual machine (VM) to the OVN-Kubernetes secondary network by including the network details in the VM configuration. Prerequisites You have access to the cluster as a user with cluster-admin privileges. You have installed the OpenShift CLI ( oc ). Procedure Edit the VirtualMachine manifest to add the OVN-Kubernetes secondary network interface details, as in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-server spec: running: true template: spec: domain: devices: interfaces: - name: default masquerade: {} - name: secondary 1 bridge: {} resources: requests: memory: 1024Mi networks: - name: default pod: {} - name: secondary 2 multus: networkName: l2-network 3 # ... 1 The name of the OVN-Kubernetes secondary interface. 2 The name of the network. This must match the value of the spec.template.spec.domain.devices.interfaces.name field. 3 The name of the NetworkAttachmentDefinition object. Apply the VirtualMachine manifest: USD oc apply -f <filename>.yaml Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 8.7.3. Additional resources Configuration for an OVN-Kubernetes additional network 8.8. Hot plugging secondary network interfaces You can add or remove secondary network interfaces without stopping your virtual machine (VM). OpenShift Virtualization supports hot plugging and hot unplugging for Linux bridge interfaces that use the VirtIO device driver. Important Hot plugging and hot unplugging bridge network interfaces is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 8.8.1. VirtIO limitations Each VirtIO interface uses one of the limited Peripheral Connect Interface (PCI) slots in the VM. There are a total of 32 slots available. The PCI slots are also used by other devices and must be reserved in advance, therefore slots might not be available on demand. OpenShift Virtualization reserves up to four slots for hot plugging interfaces. This includes any existing plugged network interfaces. For example, if your VM has two existing plugged interfaces, you can hot plug two more network interfaces. Note The actual number of slots available for hot plugging also depends on the machine type. For example, the default PCI topology for the q35 machine type supports hot plugging one additional PCIe device. For more information on PCI topology and hot plug support, see the libvirt documentation . 
If you restart the VM after hot plugging an interface, that interface becomes part of the standard network interfaces. 8.8.2. Hot plugging a bridge network interface using the CLI Hot plug a bridge network interface to a virtual machine (VM) while the VM is running. Prerequisites A network attachment definition is configured in the same namespace as your VM. You have installed the virtctl tool. Procedure If the VM to which you want to hot plug the network interface is not running, start it by using the following command: USD virtctl start <vm_name> -n <namespace> Use the following command to hot plug a new network interface to the running VM. The virtctl addinterface command adds the new network interface to the VM and virtual machine instance (VMI) specification but does not attach it to the running VM. USD virtctl addinterface <vm_name> --network-attachment-definition-name <net_attach_dev_namespace>/<net_attach_def_name> --name <interface_name> where: <vm_name> The name of the VirtualMachine object. <net_attach_def_name> The name of the NetworkAttachmentDefinition object. <net_attach_dev_namespace> An identifier for the namespace associated with the NetworkAttachmentDefinition object. The supported values are default or the name of the namespace where the VM is located. <interface_name> The name of the new network interface. To attach the network interface to the running VM, live migrate the VM by using the following command: USD virtctl migrate <vm_name> Verification Verify that the VM live migration is successful by using the following command: USD oc get VirtualMachineInstanceMigration -w Example output NAME PHASE VMI kubevirt-migrate-vm-lj62q Scheduling vm-fedora kubevirt-migrate-vm-lj62q Scheduled vm-fedora kubevirt-migrate-vm-lj62q PreparingTarget vm-fedora kubevirt-migrate-vm-lj62q TargetReady vm-fedora kubevirt-migrate-vm-lj62q Running vm-fedora kubevirt-migrate-vm-lj62q Succeeded vm-fedora Verify that the new interface is added to the VM by checking the VMI status: USD oc get vmi vm-fedora -ojsonpath="{ @.status.interfaces }" Example output [ { "infoSource": "domain, guest-agent", "interfaceName": "eth0", "ipAddress": "10.130.0.195", "ipAddresses": [ "10.130.0.195", "fd02:0:0:3::43c" ], "mac": "52:54:00:0e:ab:25", "name": "default", "queueCount": 1 }, { "infoSource": "domain, guest-agent, multus-status", "interfaceName": "eth1", "mac": "02:d8:b8:00:00:2a", "name": "bridge-interface", 1 "queueCount": 1 } ] 1 The hot plugged interface appears in the VMI status. 8.8.3. Hot unplugging a bridge network interface using the CLI You can remove a bridge network interface from a running virtual machine (VM). Prerequisites Your VM must be running. The VM must be created on a cluster running OpenShift Virtualization 4.14 or later. The VM must have a bridge network interface attached. Procedure Hot unplug a bridge network interface by running the following command. The virtctl removeinterface command detaches the network interface from the guest, but the interface still exists in the pod. USD virtctl removeinterface <vm_name> --name <interface_name> Remove the interface from the pod by migrating the VM: USD virtctl migrate <vm_name> 8.8.4. Additional resources Installing virtctl Creating a Linux bridge network attachment definition 8.9. Connecting a virtual machine to a service mesh OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4. 8.9.1. 
Adding a virtual machine to a service mesh To add a virtual machine (VM) workload to a service mesh, enable automatic sidecar injection in the VM configuration file by setting the sidecar.istio.io/inject annotation to true . Then expose your VM as a service to view your application in the mesh. Important To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These include ports 15000, 15001, 15006, 15008, 15020, 15021, and 15090. Prerequisites You installed the Service Mesh Operators. You created the Service Mesh control plane. You added the VM project to the Service Mesh member roll. Procedure Edit the VM configuration file to add the sidecar.istio.io/inject: "true" annotation: Example configuration file apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: "true" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk 1 The key/value pair (label) that must be matched to the service selector attribute. 2 The annotation to enable automatic sidecar injection. 3 The binding method (masquerade mode) for use with the default pod network. Apply the VM configuration: USD oc apply -f <vm_name>.yaml 1 1 The name of the virtual machine YAML file. Create a Service object to expose your VM to the service mesh. apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP 1 The service selector that determines the set of pods targeted by a service. This attribute corresponds to the spec.metadata.labels field in the VM configuration file. In the above example, the Service object named vm-istio targets TCP port 8080 on any pod with the label app=vm-istio . Create the service: USD oc create -f <service_name>.yaml 1 1 The name of the service YAML file. 8.9.2. Additional resources Installing the Service Mesh Operators Creating the Service Mesh control plane Adding projects to the Service Mesh member roll 8.10. Configuring a dedicated network for live migration You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration. 8.10.1. Configuring a dedicated secondary network for live migration To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR). Prerequisites You installed the OpenShift CLI ( oc ). You logged in to the cluster as a user with the cluster-admin role. Each node has at least two Network Interface Cards (NICs). The NICs for live migration are connected to the same VLAN. 
Procedure Create a NetworkAttachmentDefinition manifest according to the following example: Example configuration file apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv 2 spec: config: '{ "cniVersion": "0.3.1", "name": "migration-bridge", "type": "macvlan", "master": "eth1", 3 "mode": "bridge", "ipam": { "type": "whereabouts", 4 "range": "10.200.5.0/24" 5 } }' 1 Specify the name of the NetworkAttachmentDefinition object. 2 3 Specify the name of the NIC to be used for live migration. 4 Specify the name of the CNI plugin that provides the network for the NAD. 5 Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network. Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR: Example HyperConverged manifest apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 # ... 1 Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations. Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network. Verification When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata. USD oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}' 8.10.2. Selecting a dedicated network by using the web console You can select a dedicated network for live migration by using the OpenShift Container Platform web console. Prerequisites You configured a Multus network for live migration. You created a network attachment definition for the network. Procedure Navigate to Virtualization > Overview in the OpenShift Container Platform web console. Click the Settings tab and then click Live migration . Select the network from the Live migration network list. 8.10.3. Additional resources Configuring live migration limits and timeouts 8.11. Configuring and viewing IP addresses You can configure an IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init. You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line. The network information is collected by the QEMU guest agent. 8.11.1. Configuring IP addresses for virtual machines You can configure a static IP address when you create a virtual machine (VM) by using the web console or the command line. You can configure a dynamic IP address when you create a VM by using the command line. The IP address is provisioned with cloud-init. 8.11.1.1. Configuring an IP address when creating a virtual machine by using the command line You can configure a static or dynamic IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init. 
Note If the VM is connected to the pod network, the pod network interface is the default route unless you update it. Prerequisites The virtual machine is connected to a secondary network. You have a DHCP server available on the secondary network to configure a dynamic IP for the virtual machine. Procedure Edit the spec.template.spec.volumes.cloudInitNoCloud.networkData stanza of the virtual machine configuration: To configure a dynamic IP address, specify the interface name and enable DHCP: kind: VirtualMachine spec: # ... template: # ... spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true 1 Specify the interface name. To configure a static IP, specify the interface name and the IP address: kind: VirtualMachine spec: # ... template: # ... spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2 1 Specify the interface name. 2 Specify the static IP address. 8.11.2. Viewing IP addresses of virtual machines You can view the IP address of a VM by using the OpenShift Container Platform web console or the command line. The network information is collected by the QEMU guest agent. 8.11.2.1. Viewing the IP address of a virtual machine by using the web console You can view the IP address of a virtual machine (VM) by using the OpenShift Container Platform web console. Note You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a VM to open the VirtualMachine details page. Click the Details tab to view the IP address. 8.11.2.2. Viewing the IP address of a virtual machine by using the command line You can view the IP address of a virtual machine (VM) by using the command line. Note You must install the QEMU guest agent on a VM to view the IP address of a secondary network interface. A pod network interface does not require the QEMU guest agent. Procedure Obtain the virtual machine instance configuration by running the following command: USD oc describe vmi <vmi_name> Example output # ... Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa 8.11.3. Additional resources Installing the QEMU guest agent 8.12. Accessing a virtual machine by using the cluster FQDN You can access a virtual machine (VM) that is attached to a secondary network interface from outside the cluster by using the fully qualified domain name (FQDN) of the cluster. Important Accessing VMs by using the cluster FQDN is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 8.12.1. Configuring a DNS server for secondary networks The Cluster Network Addons Operator (CNAO) deploys a Domain Name Server (DNS) server and monitoring components when you enable the deployKubeSecondaryDNS feature gate in the HyperConverged custom resource (CR). Prerequisites You installed the OpenShift CLI ( oc ). You configured a load balancer for the cluster. You logged in to the cluster with cluster-admin permissions. Procedure Create a load balancer service to expose the DNS server outside the cluster by running the oc expose command according to the following example: USD oc expose -n openshift-cnv deployment/secondary-dns --name=dns-lb \ --type=LoadBalancer --port=53 --target-port=5353 --protocol='UDP' Retrieve the external IP address by running the following command: USD oc get service -n openshift-cnv Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dns-lb LoadBalancer 172.30.27.5 10.46.41.94 53:31829/TCP 5s Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Enable the DNS server and monitoring components according to the following example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true kubeSecondaryDNSNameServerIP: "10.46.41.94" 1 # ... 1 Specify the external IP address exposed by the load balancer service. Save the file and exit the editor. Retrieve the cluster FQDN by running the following command: USD oc get dnses.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}' Example output openshift.example.com Point to the DNS server by using one of the following methods: Add the kubeSecondaryDNSNameServerIP value to the resolv.conf file on your local machine. Note Editing the resolv.conf file overwrites existing DNS settings. Add the kubeSecondaryDNSNameServerIP value and the cluster FQDN to the enterprise DNS server records. For example: vm.<FQDN>. IN NS ns.vm.<FQDN>. ns.vm.<FQDN>. IN A 10.46.41.94 8.12.2. Connecting to a VM on a secondary network by using the cluster FQDN You can access a running virtual machine (VM) attached to a secondary network interface by using the fully qualified domain name (FQDN) of the cluster. Prerequisites You installed the QEMU guest agent on the VM. The IP address of the VM is public. You configured the DNS server for secondary networks. You retrieved the fully qualified domain name (FQDN) of the cluster. Procedure Retrieve the network interface name from the VM configuration by running the following command: USD oc get vm -n <namespace> <vm_name> -o yaml Example output apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: true template: spec: domain: devices: interfaces: - bridge: {} name: example-nic # ... networks: - multus: networkName: bridge-conf name: example-nic 1 1 Note the name of the network interface. Connect to the VM by using the ssh command: USD ssh <user_name>@<interface_name>.<vm_name>.<namespace>.vm.<cluster_fqdn> 8.12.3. Additional resources Configuring ingress cluster traffic using a load balancer Load balancing with MetalLB Configuring IP addresses for virtual machines 8.13. 
Managing MAC address pools for network interfaces The KubeMacPool component allocates MAC addresses for virtual machine (VM) network interfaces from a shared MAC address pool. This ensures that each network interface is assigned a unique MAC address. A virtual machine instance created from that VM retains the assigned MAC address across reboots. Note KubeMacPool does not handle virtual machine instances created independently from a virtual machine. 8.13.1. Managing KubeMacPool by using the command line You can disable and re-enable KubeMacPool by using the command line. KubeMacPool is enabled by default. Procedure To disable KubeMacPool in two namespaces, run the following command: USD oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore To re-enable KubeMacPool in two namespaces, run the following command: USD oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-
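To confirm whether KubeMacPool is disabled for a namespace, you can inspect the namespace labels. The following command is a minimal example, where <namespace1> is a placeholder:
$ oc get namespace <namespace1> --show-labels
If the output contains mutatevirtualmachines.kubemacpool.io=ignore, KubeMacPool does not allocate MAC addresses for virtual machine network interfaces created in that namespace.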
[ "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {}", "oc create -f <vm-name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4", "oc create -f example-vm-ipv6.yaml", "oc get vmi <vmi-name> -o jsonpath=\"{.status.interfaces[*].ipAddresses}\"", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000", "oc create -f example-service.yaml", "oc get service -n example-namespace", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bridge-network 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1 2 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"bridge-network\", 3 \"type\": \"bridge\", 4 \"bridge\": \"br1\", 5 \"macspoofchk\": false, 6 \"vlan\": 100, 7 \"preserveDefaultVlan\": false 8 }", "oc create -f network-attachment-definition.yaml 1", "oc get network-attachment-definition bridge-network", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - masquerade: {} name: default - bridge: {} name: bridge-net 1 networks: - name: default pod: {} - name: bridge-net 2 multus: networkName: a-bridge-network 3", "oc apply -f example-vm.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 
11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14", "oc create -f <name>-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: \"<trust_vf>\" 11 capabilities: <capabilities> 12", "oc create -f <name>-sriov-network.yaml", "oc get net-attach-def -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: domain: devices: interfaces: - name: default masquerade: {} - name: nic1 1 sriov: {} networks: - name: default pod: {} - name: nic1 2 multus: networkName: sriov-network 3", "oc apply -f <vm_sriov>.yaml 1", "oc label node <node_name> node-role.kubernetes.io/worker-dpdk=\"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-dpdk labels: machineconfiguration.openshift.io/role: worker-dpdk spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-dpdk nodeSelector: matchLabels: node-role.kubernetes.io/worker-dpdk: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: profile-1 spec: cpu: isolated: 4-39,44-79 reserved: 0-3,40-43 globallyDisableIrqLoadBalancing: true hugepages: defaultHugepagesSize: 1G pages: - count: 8 node: 0 size: 1G net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-dpdk: \"\" numa: topologyPolicy: single-numa-node", "oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{\"\\n\"}'", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/defaultRuntimeClass\", \"value\":\"<runtimeclass-name>\"}]'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-1 namespace: openshift-sriov-network-operator spec: resourceName: intel_nics_dpdk deviceType: vfio-pci mtu: 9000 numVfs: 4 priority: 99 nicSelector: vendor: \"8086\" deviceID: \"1572\" pfNames: - eno3 rootDevices: - \"0000:19:00.2\" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\"", "oc create ns dpdk-checkup-ns", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-sriovnetwork namespace: openshift-sriov-network-operator spec: ipam: | { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } networkNamespace: dpdk-checkup-ns 1 resourceName: intel_nics_dpdk 2 spoofChk: \"off\" trust: \"on\" vlan: 1019", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-dpdk-vm spec: running: true template: metadata: annotations: cpu-load-balancing.crio.io: disable 1 cpu-quota.crio.io: disable 2 irq-load-balancing.crio.io: disable 3 spec: domain: cpu: sockets: 1 4 cores: 5 5 threads: 2 dedicatedCpuPlacement: true isolateEmulatorThread: true interfaces: - masquerade: {} name: default - model: virtio name: nic-east pciAddress: '0000:07:00.0' sriov: {} networkInterfaceMultiqueue: true 
rng: {} memory: hugepages: pageSize: 1Gi 6 guest: 8Gi networks: - name: default pod: {} - multus: networkName: dpdk-net 7 name: nic-east", "oc apply -f <file_name>.yaml", "grubby --update-kernel=ALL --args=\"default_hugepagesz=1GB hugepagesz=1G hugepages=8\"", "dnf install -y tuned-profiles-cpu-partitioning", "echo isolated_cores=2-9 > /etc/tuned/cpu-partitioning-variables.conf", "tuned-adm profile cpu-partitioning", "dnf install -y driverctl", "driverctl set-override 0000:07:00.0 vfio-pci", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: l2-network namespace: my-namespace spec: config: |2 { \"cniVersion\": \"0.3.1\", 1 \"name\": \"my-namespace-l2-network\", 2 \"type\": \"ovn-k8s-cni-overlay\", 3 \"topology\":\"layer2\", 4 \"mtu\": 1300, 5 \"netAttachDefName\": \"my-namespace/l2-network\" 6 }", "oc apply -f <filename>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-server spec: running: true template: spec: domain: devices: interfaces: - name: default masquerade: {} - name: secondary 1 bridge: {} resources: requests: memory: 1024Mi networks: - name: default pod: {} - name: secondary 2 multus: networkName: l2-network 3", "oc apply -f <filename>.yaml", "virtctl start <vm_name> -n <namespace>", "virtctl addinterface <vm_name> --network-attachment-definition-name <net_attach_dev_namespace>/<net_attach_def_name> --name <interface_name>", "virtctl migrate <vm_name>", "oc get VirtualMachineInstanceMigration -w", "NAME PHASE VMI kubevirt-migrate-vm-lj62q Scheduling vm-fedora kubevirt-migrate-vm-lj62q Scheduled vm-fedora kubevirt-migrate-vm-lj62q PreparingTarget vm-fedora kubevirt-migrate-vm-lj62q TargetReady vm-fedora kubevirt-migrate-vm-lj62q Running vm-fedora kubevirt-migrate-vm-lj62q Succeeded vm-fedora", "oc get vmi vm-fedora -ojsonpath=\"{ @.status.interfaces }\"", "[ { \"infoSource\": \"domain, guest-agent\", \"interfaceName\": \"eth0\", \"ipAddress\": \"10.130.0.195\", \"ipAddresses\": [ \"10.130.0.195\", \"fd02:0:0:3::43c\" ], \"mac\": \"52:54:00:0e:ab:25\", \"name\": \"default\", \"queueCount\": 1 }, { \"infoSource\": \"domain, guest-agent, multus-status\", \"interfaceName\": \"eth1\", \"mac\": \"02:d8:b8:00:00:2a\", \"name\": \"bridge-interface\", 1 \"queueCount\": 1 } ]", "virtctl removeinterface <vm_name> --name <interface_name>", "virtctl migrate <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: \"true\" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk", "oc apply -f <vm_name>.yaml 1", "apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP", "oc create -f <service_name>.yaml 1", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv 2 spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 3 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 4 \"range\": \"10.200.5.0/24\" 5 } 
}'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'", "kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true", "kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2", "oc describe vmi <vmi_name>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa", "oc expose -n openshift-cnv deployment/secondary-dns --name=dns-lb --type=LoadBalancer --port=53 --target-port=5353 --protocol='UDP'", "oc get service -n openshift-cnv", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dns-lb LoadBalancer 172.30.27.5 10.46.41.94 53:31829/TCP 5s", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true kubeSecondaryDNSNameServerIP: \"10.46.41.94\" 1", "oc get dnses.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}'", "openshift.example.com", "vm.<FQDN>. IN NS ns.vm.<FQDN>.", "ns.vm.<FQDN>. IN A 10.46.41.94", "oc get vm -n <namespace> <vm_name> -o yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: true template: spec: domain: devices: interfaces: - bridge: {} name: example-nic networks: - multus: networkName: bridge-conf name: example-nic 1", "ssh <user_name>@<interface_name>.<vm_name>.<namespace>.vm.<cluster_fqdn>", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/virtualization/networking
Chapter 4. How to use encrypted property placeholders in Spring Boot
Chapter 4. How to use encrypted property placeholders in Spring Boot When securing a container, it is not recommended to use plain text passwords in configuration files. One way to avoid using plain text passwords is to use encrypted property placeholders whenever possible. 4.1. About the master password for encrypting values To use Jasypt to encrypt a value, a master password is required. It is up to you or an administrator to choose the master password. Jasypt provides several ways to set the master password. Jasypt can be integrated into the Spring configuration framework so that property values are decrypted as the configuration file is loaded. One way is to specify the master password in plain text in a Spring Boot configuration. Spring uses the PropertyPlaceholder framework to replace tokens with values from a properties file, and Jasypt's approach replaces the PropertyPlaceholderConfigurer class with one that recognizes encrypted strings and decrypts them. Example <bean id="propertyPlaceholderConfigurer" class="org.jasypt.spring.properties.EncryptablePropertyPlaceholderConfigurer"> <constructor-arg ref="configurationEncryptor" /> <property name="location" value="/WEB-INF/application.properties" /> </bean> <bean id="configurationEncryptor" class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor"> <property name="config" ref="environmentVariablesConfiguration" /> </bean> <bean id="environmentVariablesConfiguration" class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig"> <property name="algorithm" value="PBEWithMD5AndDES" /> <property name="password" value="myPassword" /> </bean> Instead of specifying the master password in plain text, you can use an environment variable to set your master password. In the Spring Boot configuration file, specify this environment variable as the value of the passwordEnvName property. For example, if you set the MASTER_PW environment variable to your master password, then you would have this entry in your Spring Boot configuration file: 4.2. Using Encrypted Property Placeholders in Spring Boot By using Jasypt, you can provide encryption for the property sources and the application can decrypt the encrypted properties and retrieve the original values. The following procedure explains how to encrypt and decrypt the property sources in Spring Boot. Procedure Add the Jasypt dependency to your project's pom.xml file. <dependency> <groupId>com.github.ulisesbocchio</groupId> <artifactId>jasypt-spring-boot-starter</artifactId> <version>3.0.3</version> </dependency> Add the Maven repository to your project's pom.xml. <repository> <id>jasypt-basic</id> <name>Jasypt Repository</name> <url>https://repo1.maven.org/maven2/</url> </repository> Add the Jasypt Maven plugin to your project so that you can use the Maven commands for encryption and decryption. <plugin> <groupId>com.github.ulisesbocchio</groupId> <artifactId>jasypt-maven-plugin</artifactId> <version>3.0.3</version> </plugin> Add the plugin repository to pom.xml . <pluginRepository> <id>jasypt-basic</id> <name>Jasypt Repository</name> <url>https://repo1.maven.org/maven2/</url> </pluginRepository> To encrypt the username and password listed in the application.properties file, wrap these values inside DEC() as shown below. Run the following command to encrypt the username and password. This replaces the DEC() placeholders in the application.properties file with encrypted values, for example: To decrypt the credentials in the Spring application configuration file, run the following command.
This prints out the content of the application.properties file as it was before the encryption. However, this does not update the configuration file.
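As a quick check of the master password, the same plugin also offers single-value goals; the sketch below assumes the jasypt-maven-plugin 3.0.3 configuration shown above and uses placeholder values rather than real credentials.
# Encrypt one value from the command line (the value is illustrative)
mvn jasypt:encrypt-value -Djasypt.encryptor.password=mypassword -Djasypt.plugin.value="Password@1"
# Decrypt a single ENC(...) token to verify the master password (replace the placeholder)
mvn jasypt:decrypt-value -Djasypt.encryptor.password=mypassword -Djasypt.plugin.value="ENC(<encrypted_value>)"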
[ "<bean id=\"propertyPlaceholderConfigurer\" class=\"org.jasypt.spring.properties.EncryptablePropertyPlaceholderConfigurer\"> <constructor-arg ref=\"configurationEncryptor\" /> <property name=\"location\" value=\"/WEB-INF/application.properties\" /> </bean> <bean id=\"configurationEncryptor\" class=\"org.jasypt.encryption.pbe.StandardPBEStringEncryptor\"> <property name=\"config\" ref=\"environmentVariablesConfiguration\" /> </bean> <bean id=\"environmentVariablesConfiguration\" class=\"org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig\"> <property name=\"algorithm\" value=\"PBEWithMD5AndDES\" /> <property name=\"password\" value=\"myPassword\" /> </bean>", "<property name=\"passwordEnvName\" value=\"MASTER_PW\">", "<dependency> <groupId>com.github.ulisesbocchio</groupId> <artifactId>jasypt-spring-boot-starter</artifactId> <version>3.0.3</version> </dependency>", "<repository> <id>jasypt-basic</id> <name>Jasypt Repository</name> <url>https://repo1.maven.org/maven2/</url> </repository>", "<plugin> <groupId>com.github.ulisesbocchio</groupId> <artifactId>jasypt-maven-plugin</artifactId> <version>3.0.3</version> </plugin>", "<pluginRepository> <id>jasypt-basic</id> <name>Jasypt Repository</name> <url>https://repo1.maven.org/maven2/</url> </pluginRepository>", "spring.datasource.username=DEC(root) spring.datasource.password=DEC(Password@1)", "mvn jasypt:encrypt -Djasypt.encryptor.password=mypassword", "spring.datasource.username=ENC(3UtB1NhSZdVXN9xQBwkT0Gn+UxR832XP+tOOfFTlNL57FiMM7BWPRTeychVtLLhB) spring.datasource.password=ENC(4ErqElyCHjjFnqPOCZNAaTdRC7u7yJSy16UsHtVkwPIr+3zLyabNmQwwpFo7F7LU)", "mvn jasypt:decrypt -Djasypt.encryptor.password=mypassword" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_spring_boot/how-to-use-encrypted-property-placeholders-sping-boot
Chapter 9. Running in Cloud Environments
Chapter 9. Running in Cloud Environments 9.1. Run Red Hat JBoss Data Virtualization in an Amazon AWS Cloud Instance Procedure 9.1. Running Red Hat JBoss Data Virtualization in an Amazon Cloud Open ports by updating the security group. (At a minimum, you will need to open the TCP, HTTP and SSH ports.) To start the server, add the following parameters to bind the management and host ports: -Djboss.bind.address.management=0.0.0.0 and -b 0.0.0.0 Note -b is a shortcut for -Djboss.bind.address Here is an example: ./standalone.sh -Djboss.bind.address.management=0.0.0.0 -b 0.0.0.0 To access the AWS instance from Teiid Designer, go to the JBDS preferences and select General -> Network Connections -> SSH2 . Then, under the Key Management tab, click Load Existing Key to add the key generated by Amazon. To create a server connection, on the Server Configuration Overview Panel , under Server Behavior , select Remote System Deployment . Also ensure you check Server is externally managed... Click the New Host button, select the SSH Only option and click Next . Set the Host Name to match the Amazon public IP address and make the connection name the same. Click Finish . Open the Remote Systems tab. Right-click the new connection and click Connect . Fill in the User ID . (You do not need to provide a password if your SSH key is configured.) Go back to the server configuration overview panel and confirm that the Host drop-down has selected the new host that you have created. Start the server. (This switches the state of the server you already started.)
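If you prefer to script the security group change rather than use the AWS console, a sketch along the following lines should work; the security group ID, CIDR ranges, and the extra data port are assumptions to adapt to your own instance.
# Open SSH, HTTP, and an additional data port on the instance's security group (IDs and ports are illustrative)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8080 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 31000 --cidr 0.0.0.0/0
# Then start the server bound to all interfaces, as described above
./standalone.sh -Djboss.bind.address.management=0.0.0.0 -b 0.0.0.0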
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/chap-running_in_cloud_environments
function::task_ns_gid
function::task_ns_gid Name function::task_ns_gid - The group identifier of the task as seen in a namespace Synopsis Arguments task task_struct pointer Description This function returns the group id of the given task as seen in the user namespace of the task.
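As a minimal illustration (not part of the tapset reference itself), the function can be combined with task_current() in a one-line script; the probe point and output format are only examples.
# Print the namespace-local GID of every process that calls open (illustrative one-liner)
stap -e 'probe syscall.open { printf("%s gid=%d\n", execname(), task_ns_gid(task_current())) }'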
[ "task_ns_gid:long(task:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-task-ns-gid
8.5. Using OpenSCAP with Red Hat Satellite
8.5. Using OpenSCAP with Red Hat Satellite When running multiple Red Hat Enterprise Linux systems, it is important to keep all your systems compliant with your security policy and perform security scans and evaluations remotely from one location. This can be achieved by using Red Hat Satellite 5.5 or later with the spacewalk-oscap package installed on your Satellite client. The package is available from the Red Hat Network Tools channel. This solution supports two methods of performing security compliance scans and of viewing and further processing the scan results. You can either use the OpenSCAP Satellite Web Interface or run commands and scripts from the Satellite API . For more information about this security compliance solution, its requirements, and its capabilities, see the Red Hat Satellite documentation .
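As a rough sketch of the client-side setup, and assuming the client is already subscribed to the Red Hat Network Tools child channel, installation looks like this; exact package availability depends on your channel configuration.
# On each Satellite client registered to the RHN Tools channel
yum install spacewalk-oscap
# SCAP content is also needed on the client, for example:
yum install openscap-utils scap-security-guide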
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-using_openscap_with_red_hat_satellite
Chapter 87. Decision engine event listeners and debug logging
Chapter 87. Decision engine event listeners and debug logging The decision engine generates events when performing activities such as fact insertions and rule executions. If you register event listeners, the decision engine calls every listener when an activity is performed. Event listeners have methods that correspond to different types of activities. The decision engine passes an event object to each method; this object contains information about the specific activity. Your code can implement custom event listeners and you can also add and remove registered event listeners. In this way, your code can be notified of decision engine activity, and you can separate logging and auditing work from the core of your application. The decision engine supports the following event listeners with the following methods: Agenda event listener public interface AgendaEventListener extends EventListener { void matchCreated(MatchCreatedEvent event); void matchCancelled(MatchCancelledEvent event); void beforeMatchFired(BeforeMatchFiredEvent event); void afterMatchFired(AfterMatchFiredEvent event); void agendaGroupPopped(AgendaGroupPoppedEvent event); void agendaGroupPushed(AgendaGroupPushedEvent event); void beforeRuleFlowGroupActivated(RuleFlowGroupActivatedEvent event); void afterRuleFlowGroupActivated(RuleFlowGroupActivatedEvent event); void beforeRuleFlowGroupDeactivated(RuleFlowGroupDeactivatedEvent event); void afterRuleFlowGroupDeactivated(RuleFlowGroupDeactivatedEvent event); } Rule runtime event listener public interface RuleRuntimeEventListener extends EventListener { void objectInserted(ObjectInsertedEvent event); void objectUpdated(ObjectUpdatedEvent event); void objectDeleted(ObjectDeletedEvent event); } For the definitions of event classes, see the GitHub repository . Red Hat Decision Manager includes default implementations of these listeners: DefaultAgendaEventListener and DefaultRuleRuntimeEventListener . You can extend each of these implementations to monitor specific events. For example, the following code extends DefaultAgendaEventListener to monitor the AfterMatchFiredEvent event and attaches this listener to a KIE session. The code prints pattern matches when rules are executed (fired): Example code to monitor and print AfterMatchFiredEvent events in the agenda ksession.addEventListener( new DefaultAgendaEventListener() { public void afterMatchFired(AfterMatchFiredEvent event) { super.afterMatchFired( event ); System.out.println( event ); } }); Red Hat Decision Manager also includes the following decision engine agenda and rule runtime event listeners for debug logging: DebugAgendaEventListener DebugRuleRuntimeEventListener These event listeners implement the same supported event-listener methods and include a debug print statement by default. You can add additional monitoring code for a specific supported event. For example, the following code uses the DebugRuleRuntimeEventListener event listener to monitor and print all working memory (rule runtime) events: Example code to monitor and print all working memory events ksession.addEventListener( new DebugRuleRuntimeEventListener() ); 87.1. Practices for development of event listeners The decision engine calls event listeners during rule processing. The calls block the execution of the decision engine. Therefore, the event listener can affect the performance of the decision engine. To ensure minimal disruption, follow the following guidelines: Any action must be as short as possible. A listener class must not have a state. 
The decision engine can destroy and re-create a listener class at any time. Do not use logic that relies on the order of execution of different event listeners. Do not include interactions with different entities outside the decision engine within a listener. For example, do not include REST calls for notification of events. An exception is the output of logging information; however, a logging listener must be as simple as possible. You can use a listener to modify the state of the decision engine, for example, to change the values of variables.
[ "public interface AgendaEventListener extends EventListener { void matchCreated(MatchCreatedEvent event); void matchCancelled(MatchCancelledEvent event); void beforeMatchFired(BeforeMatchFiredEvent event); void afterMatchFired(AfterMatchFiredEvent event); void agendaGroupPopped(AgendaGroupPoppedEvent event); void agendaGroupPushed(AgendaGroupPushedEvent event); void beforeRuleFlowGroupActivated(RuleFlowGroupActivatedEvent event); void afterRuleFlowGroupActivated(RuleFlowGroupActivatedEvent event); void beforeRuleFlowGroupDeactivated(RuleFlowGroupDeactivatedEvent event); void afterRuleFlowGroupDeactivated(RuleFlowGroupDeactivatedEvent event); }", "public interface RuleRuntimeEventListener extends EventListener { void objectInserted(ObjectInsertedEvent event); void objectUpdated(ObjectUpdatedEvent event); void objectDeleted(ObjectDeletedEvent event); }", "ksession.addEventListener( new DefaultAgendaEventListener() { public void afterMatchFired(AfterMatchFiredEvent event) { super.afterMatchFired( event ); System.out.println( event ); } });", "ksession.addEventListener( new DebugRuleRuntimeEventListener() );" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/engine-event-listeners-con_decision-engine
Preface
Preface Red Hat OpenShift Data Foundation 4.9 supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) Red Hat Virtualization platform clusters. Deploying OpenShift Data Foundation on OpenShift Container Platform using shared storage devices provided by Red Hat Virtualization installer-provisioned infrastructure (IPI) enables you to create internal cluster resources. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. Note Only internal OpenShift Data Foundation clusters are supported on Red Hat Virtualization platform. See Planning your deployment for more information about deployment requirements. Based on your requirement, perform one of the following methods of deployment: Deploy using dynamic storage devices for the full deployment of OpenShift Data Foundation using dynamic storage devices. Deploy using local storage devices for the full deployment of OpenShift Data Foundation using local storage devices. Deploy standalone Multicloud Object Gateway component for deploying only the Multicloud Object Gateway component with OpenShift Data Foundation.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_red_hat_virtualization_platform/preface-ocs-rhev
RBAC APIs
RBAC APIs OpenShift Container Platform 4.16 Reference guide for RBAC APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/rbac_apis/index
Chapter 8. Yum
Chapter 8. Yum Yum is the Red Hat package manager that is able to query for information about available packages, fetch packages from repositories, install and uninstall them, and update an entire system to the latest available version. Yum performs automatic dependency resolution on packages you are updating, installing, or removing, and thus is able to automatically determine, fetch, and install all available dependent packages. Yum can be configured with new, additional repositories, or package sources , and also provides many plug-ins which enhance and extend its capabilities. Yum is able to perform many of the same tasks that RPM can; additionally, many of the command-line options are similar. Yum enables easy and simple package management on a single machine or on groups of them. The following sections assume your system was registered with Red Hat Subscription Management during installation as described in the Red Hat Enterprise Linux 6 Installation Guide . If your system is not subscribed, see Chapter 6, Registering the System and Managing Subscriptions . Important Yum provides secure package management by enabling GPG (Gnu Privacy Guard; also known as GnuPG) signature verification on GPG-signed packages to be turned on for all package repositories (i.e. package sources), or for individual repositories. When signature verification is enabled, Yum will refuse to install any packages not GPG-signed with the correct key for that repository. This means that you can trust that the RPM packages you download and install on your system are from a trusted source, such as Red Hat, and were not modified during transfer. See Section 8.4, "Configuring Yum and Yum Repositories" for details on enabling signature-checking with Yum, or Section B.3, "Checking a Package's Signature" for information on working with and verifying GPG-signed RPM packages in general. Yum also enables you to easily set up your own repositories of RPM packages for download and installation on other machines. Learning Yum is a worthwhile investment because it is often the fastest way to perform system administration tasks, and it provides capabilities beyond those provided by the PackageKit graphical package management tools. See Chapter 9, PackageKit for details on using PackageKit . Note You must have superuser privileges in order to use yum to install, update or remove packages on your system. All examples in this chapter assume that you have already obtained superuser privileges by using either the su or sudo command. 8.1. Checking For and Updating Packages 8.1.1. Checking For Updates To see which installed packages on your system have updates available, use the following command: yum check-update For example: The packages in the above output are listed as having updates available. The first package in the list is PackageKit , the graphical package manager. The line in the example output tells us: PackageKit - the name of the package x86_64 - the CPU architecture the package was built for 0.5.8 - the version of the updated package to be installed rhel - the repository in which the updated package is located The output also shows us that we can update the kernel (the kernel package), Yum and RPM themselves (the yum and rpm packages), as well as their dependencies (such as the kernel-firmware , rpm-libs , and rpm-python packages), all using yum . 8.1.2. Updating Packages You can choose to update a single package, multiple packages, or all packages at once. 
If any dependencies of the package (or packages) you update have updates available themselves, then they are updated too. Updating a Single Package To update a single package, run the following command as root : yum update package_name For example, to update the udev package, type: This output contains several items of interest: Loaded plugins: product-id, refresh-packagekit, subscription-manager - yum always informs you which Yum plug-ins are installed and enabled. See Section 8.5, "Yum Plug-ins" for general information on Yum plug-ins, or to Section 8.5.3, "Plug-in Descriptions" for descriptions of specific plug-ins. udev.x86_64 - you can download and install new udev package. yum presents the update information and then prompts you as to whether you want it to perform the update; yum runs interactively by default. If you already know which transactions the yum command plans to perform, you can use the -y option to automatically answer yes to any questions that yum asks (in which case it runs non-interactively). However, you should always examine which changes yum plans to make to the system so that you can easily troubleshoot any problems that might arise. If a transaction does go awry, you can view Yum's transaction history by using the yum history command as described in Section 8.3, "Working with Transaction History" . Important yum always installs a new kernel in the same sense that RPM installs a new kernel when you use the command rpm -i kernel . Therefore, you do not need to worry about the distinction between installing and upgrading a kernel package when you use yum : it will do the right thing, regardless of whether you are using the yum update or yum install command. When using RPM , on the other hand, it is important to use the rpm -i kernel command (which installs a new kernel) instead of rpm -u kernel (which replaces the current kernel). See Section B.2.2, "Installing and Upgrading" for more information on installing/upgrading kernels with RPM . Updating All Packages and Their Dependencies To update all packages and their dependencies, enter yum update (without any arguments): Updating Security-Related Packages Discovering which packages have security updates available and then updating those packages quickly and easily is important. Yum provides the plug-in for this purpose. The security plug-in extends the yum command with a set of highly-useful security-centric commands, subcommands and options. See Section 8.5.3, "Plug-in Descriptions" for specific information. Updating Packages Automatically It is also possible to set up periodical automatic updates for your packages. For this purpose, Red Hat Enterprise Linux 6 uses the yum-cron package. It provides a Yum interface for the cron daemon and downloads metadata from your package repositories. With the yum-cron service enabled, the user can schedule an automated daily Yum update as a cron job. Note The yum-cron package is provided by the Optional subscription channel. See Section 8.4.8, "Adding the Optional and Supplementary Repositories" for more information on Red Hat additional channels. To install yum-cron issue the following command: By default, the yum-cron service is disabled and needs to be activated and started manually: To verify the status of the service, run the following command: The script included in the yum-cron package can be configured to change the extent and frequency of the updates, as well as to send notifications to e-mail. To customize yum-cron , edit the /etc/sysconfig/yum-cron file. 
Additional details and instructions for yum-cron can be found in the comments within /etc/sysconfig/yum-cron and at the yum-cron (8) manual page. 8.1.3. Preserving Configuration File Changes You will inevitably make changes to the configuration files installed by packages as you use your Red Hat Enterprise Linux system. RPM , which Yum uses to perform changes to the system, provides a mechanism for ensuring their integrity. See Section B.2.2, "Installing and Upgrading" for details on how to manage changes to configuration files across package upgrades. 8.1.4. Upgrading the System Off-line with ISO and Yum For systems that are disconnected from the Internet or Red Hat Network, using the yum update command with the Red Hat Enterprise Linux installation ISO image is an easy and quick way to upgrade systems to the latest minor version. The following steps illustrate the upgrading process: Create a target directory to mount your ISO image. This directory is not automatically created when mounting, so create it before proceeding to the step. As root , type: mkdir mount_dir Replace mount_dir with a path to the mount directory. Typically, users create it as a subdirectory in the /media directory. Mount the Red Hat Enterprise Linux 6 installation ISO image to the previously created target directory. As root , type: mount -o loop iso_name mount_dir Replace iso_name with a path to your ISO image and mount_dir with a path to the target directory. Here, the -o loop option is required to mount the file as a block device. Copy the media.repo file from the mount directory to the /etc/yum.repos.d/ directory. Note that configuration files in this directory must have the .repo extension to function properly. cp mount_dir /media.repo /etc/yum.repos.d/ new.repo This creates a configuration file for the yum repository. Replace new.repo with the filename, for example rhel6.repo . Edit the new configuration file so that it points to the Red Hat Enterprise Linux installation ISO. Add the following line into the /etc/yum.repos.d/ new.repo file: baseurl=file:/// mount_dir Replace mount_dir with a path to the mount point. Update all yum repositories including /etc/yum.repos.d/ new.repo created in steps. As root , type: yum update This upgrades your system to the version provided by the mounted ISO image. After successful upgrade, you can unmount the ISO image. As root , type: umount mount_dir where mount_dir is a path to your mount directory. Also, you can remove the mount directory created in the first step. As root , type: rmdir mount_dir If you will not use the previously created configuration file for another installation or update, you can remove it. As root , type: rm /etc/yum.repos.d/ new.repo Example 8.1. Upgrading from Red Hat Enterprise Linux 6.3 to 6.4 Imagine you need to upgrade your system without access to the Internet. To do so, you want to use an ISO image with the newer version of the system, called for instance RHEL6.4-Server-20130130.0-x86_64-DVD1.iso . A target directory created for mounting is /media/rhel6/ . 
As root , change into the directory with your ISO image and type: ~]# mount -o loop RHEL6.4-Server-20130130.0-x86_64-DVD1.iso /media/rhel6/ Then set up a yum repository for your image by copying the media.repo file from the mount directory: ~]# cp /media/rhel6/media.repo /etc/yum.repos.d/rhel6.repo To make yum recognize the mount point as a repository, add the following line into the /etc/yum.repos.d/rhel6.repo copied in the step: baseurl=file:///media/rhel6/ Now, updating the yum repository will upgrade your system to a version provided by RHEL6.4-Server-20130130.0-x86_64-DVD1.iso . As root , execute: ~]# yum update When your system is successfully upgraded, you can unmount the image, remove the target directory and the configuration file: ~]# umount /media/rhel6/ ~]# rmdir /media/rhel6/ ~]# rm /etc/yum.repos.d/rhel6.repo
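The security plug-in mentioned in Section 8.1.2 is not shown in the examples above; as a short sketch, and assuming the yum-plugin-security package is available from your base channel, security-only updates look like this:
# Install the security plug-in, then limit update operations to security errata
yum install yum-plugin-security
yum --security check-update
yum --security update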
[ "~]# yum check-update Loaded plugins: product-id, refresh-packagekit, subscription-manager Updating Red Hat repositories. INFO:rhsm-app.repolib:repos updated: 0 PackageKit.x86_64 0.5.8-2.el6 rhel PackageKit-glib.x86_64 0.5.8-2.el6 rhel PackageKit-yum.x86_64 0.5.8-2.el6 rhel PackageKit-yum-plugin.x86_64 0.5.8-2.el6 rhel glibc.x86_64 2.11.90-20.el6 rhel glibc-common.x86_64 2.10.90-22 rhel kernel.x86_64 2.6.31-14.el6 rhel kernel-firmware.noarch 2.6.31-14.el6 rhel rpm.x86_64 4.7.1-5.el6 rhel rpm-libs.x86_64 4.7.1-5.el6 rhel rpm-python.x86_64 4.7.1-5.el6 rhel udev.x86_64 147-2.15.el6 rhel yum.noarch 3.2.24-4.el6 rhel", "~]# yum update udev Loaded plugins: product-id, refresh-packagekit, subscription-manager Updating Red Hat repositories. INFO:rhsm-app.repolib:repos updated: 0 Setting up Update Process Resolving Dependencies --> Running transaction check ---> Package udev.x86_64 0:147-2.15.el6 set to be updated --> Finished Dependency Resolution Dependencies Resolved =========================================================================== Package Arch Version Repository Size =========================================================================== Updating: udev x86_64 147-2.15.el6 rhel 337 k Transaction Summary =========================================================================== Install 0 Package(s) Upgrade 1 Package(s) Total download size: 337 k Is this ok [y/N]:", "update", "~]# yum install yum-cron", "~]# chkconfig yum-cron on", "~]# service yum-cron start", "~]# service yum-cron status" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-yum
Chapter 3. Distribution of content in RHEL 9
Chapter 3. Distribution of content in RHEL 9 3.1. Installation Red Hat Enterprise Linux 9 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures: Installation ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. On the Product Downloads page, the Installation ISO is referred to as Binary DVD . Note The Installation ISO image is in multiple GB size, and as a result, it might not fit on optical media formats. A USB key or USB hard drive is recommended when using the Installation ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document. Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Installation ISO image. You can also register to Red Hat CDN or Satellite during the installation to use the latest BaseOS and AppStream content from Red Hat CDN or Satellite. See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document. 3.2. Repositories Red Hat Enterprise Linux 9 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of the underlying operating system functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in releases of RHEL. For more information, see the Scope of Coverage Details document. Content in the AppStream repository includes additional user-space applications, runtime languages, and databases in support of the varied workloads and use cases. In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. For more information about RHEL 9 repositories and the packages they provide, see the Package manifest . 3.3. Application Streams Multiple versions of user-space components are delivered as Application Streams and updated more frequently than the core operating system packages. This provides greater flexibility to customize RHEL without impacting the underlying stability of the platform or specific deployments. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules, as Software Collections, or as Flatpaks. Each Application Stream component has a given life cycle, either the same as RHEL 9 or shorter. For RHEL life cycle information, see Red Hat Enterprise Linux Life Cycle . RHEL 9 improves the Application Streams experience by providing initial Application Stream versions that can be installed as RPM packages using the traditional dnf install command. 
Note Certain initial Application Streams in the RPM format have a shorter life cycle than Red Hat Enterprise Linux 9. Some additional Application Stream versions will be distributed as modules with a shorter life cycle in future minor RHEL 9 releases. Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. Always determine what version of an Application Stream you want to install and make sure to review the Red Hat Enterprise Linux Application Stream Lifecycle first. Content that needs rapid updating, such as alternate compilers and container tools, is available in rolling streams that will not provide alternative versions in parallel. Rolling streams may be packaged as RPMs or modules. For information about Application Streams available in RHEL 9 and their application compatibility level, see the Package manifest . Application compatibility levels are explained in the Red Hat Enterprise Linux 9: Application Compatibility Guide document. 3.4. Package management with YUM/DNF In Red Hat Enterprise Linux 9, software installation is ensured by DNF . Red Hat continues to support the usage of the yum term for consistency with major versions of RHEL. If you type dnf instead of yum , the command works as expected because both are aliases for compatibility. Although RHEL 8 and RHEL 9 are based on DNF , they are compatible with YUM used in RHEL 7. For more information, see Managing software with the DNF tool .
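As a brief illustration of the points above (the package name is an example, not a recommendation), dnf and yum can be used interchangeably on RHEL 9:
# yum is an alias for dnf on RHEL 9
dnf install postgresql    # installs the initial Application Stream version as a regular RPM
dnf module list           # lists any module-based Application Streams that are available
dnf repolist              # shows the enabled BaseOS and AppStream repositories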
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.3_release_notes/distribution
Chapter 9. Configuring JVM Settings
Chapter 9. Configuring JVM Settings Configuration of Java Virtual Machine (JVM) settings is different for a standalone JBoss EAP server or a JBoss EAP server in a managed domain. For a standalone JBoss EAP server instance, the server startup processes pass JVM settings to the JBoss EAP server at startup. These can be declared from the command line before launching JBoss EAP, or using the System Properties page under Configuration in the management console. In a managed domain, JVM settings are declared in the host.xml and domain.xml configuration files, and can be configured at host, server group, or server levels. Note System properties must be configured in JAVA_OPTS to be used by JBoss EAP modules (such as the logging manager) during startup. 9.1. Configuring JVM Settings for a Standalone Server JVM settings for standalone JBoss EAP server instances can be declared at runtime by setting the JAVA_OPTS environment variable before starting the server. An example of setting the JAVA_OPTS environment variable on Linux is shown below: The same setting can be used in a Microsoft Windows environment: Alternatively, JVM settings can be added to the standalone.conf file, or standalone.conf.bat for Windows Server, in the EAP_HOME /bin folder, which contains examples of options to pass to the JVM. Besides setting the JAVA_OPTS environment variable, you can set system properties in one of the following alternative ways: Execute the following command: Edit the JBoss profile configuration file, standalone*.xml or domain.xml . Warning If system properties are set in multiple ways, the values in the JBoss profile configuration file, standalone*.xml or domain.xml , override the other values, which may cause JBoss EAP startup issues. For example, if you have defined system settings in the JAVA_OPTS environment variable and in the JBoss profile configuration file, the values in the JBoss profile configuration override the values in JAVA_OPTS . 9.2. Configuring JVM Settings for a Managed Domain In a JBoss EAP managed domain, you can define JVM settings at multiple levels. You can define custom JVM settings on a particular host, and then apply those settings to server groups, or to individual server instances. By default, server groups and individual servers will inherit the JVM settings from their parent, but you can choose to override JVM settings at each level. Note The JVM settings in domain.conf , or domain.conf.bat for Windows Server, are applied to the Java process of the JBoss EAP host controller, and not the individual JBoss EAP server instances controlled by that host controller. 9.2.1. Defining JVM Settings on a Host Controller You can define JVM settings on a host controller, and apply those settings to server groups or individual servers. JBoss EAP comes with a default JVM setting, but the following management CLI command demonstrates creating a new JVM setting named production_jvm with some custom JVM settings and options. See Managed Domain JVM Configuration Attributes for descriptions of all available options. You can also create and edit JVM settings in the JBoss EAP management console by navigating to Runtime Hosts , choosing a host and clicking View , and selecting the JVMs tab. These settings are stored within the <jvm> tag in host.xml . 9.2.2. Applying JVM Settings to a Server Group When creating a server group, you can specify a JVM configuration that all servers in the group will use.
The following management CLI commands demonstrate creating a server group named groupA that uses the production_jvm JVM settings that were shown in the example . All servers in the server group will inherit JVM settings from production_jvm . You can also override specific JVM settings at the server group level. For example, to set a different heap size, you can use the following command: After applying the above command, the server group groupA will inherit the JVM settings from production_jvm , except for the heap size, which has an overridden value of 1024m . See Managed Domain JVM Configuration Attributes for descriptions of all available options. You can also edit server group JVM settings in the JBoss EAP management console by navigating to Runtime Server Groups , choosing a server group and clicking View , and selecting the JVMs tab. These settings for a server group are stored in domain.xml . 9.2.3. Applying JVM Settings to an Individual Server By default, an individual JBoss EAP server instance will inherit the JVM settings of the server group it belongs to. However, you can choose to override the inherited settings with another complete JVM setting definition from the host controller, or choose to override specific JVM settings. For example, the following command overrides the JVM definition of the server group in the example , and sets the JVM settings for server-one to the default JVM definition: Also, similar to server groups, you can override specific JVM settings at the server level. For example, to set a different heap size, you can use the following command: See Managed Domain JVM Configuration Attributes for descriptions of all available options. You can also edit server JVM settings in the JBoss EAP management console by navigating to Runtime Hosts , choosing the host, clicking View on the server, and selecting the JVMs tab. These settings for an individual server are stored in host.xml . 9.3. Displaying the JVM Status You can view the status of JVM resources, such as heap and thread usage, for standalone or managed domain servers from the management console. While statistics are not displayed in real time, you can click Refresh to provide an up-to-date overview of JVM resources. To display the JVM status for a standalone JBoss EAP server: Navigate to the Runtime tab, select the server, and select Status . To display the JVM status for a JBoss EAP server in a managed domain: Navigate to Runtime Hosts , select the host and server, and select Status . This shows the following heap usage information: Max The maximum amount of memory that can be used for memory management. Used The amount of used memory. Committed The amount of memory that is committed for the Java Virtual Machine to use. Other information, such as JVM uptime and thread usage, is also available. 9.4. Tuning the JVM For tips on optimizing JVM performance, see the JVM Tuning section of the Performance Tuning Guide .
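The heap information shown in the management console can also be read from the management CLI; the following is a sketch for a standalone server, and the resource path follows the platform MBean resources (adjust the host and server path for a managed domain).
# Read current heap usage through the management CLI (standalone server)
EAP_HOME/bin/jboss-cli.sh --connect --command="/core-service=platform-mbean/type=memory:read-attribute(name=heap-memory-usage)"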
[ "export JAVA_OPTS=\"-Xmx1024M\"", "set JAVA_OPTS=\"Xmx1024M\"", "EAP_HOME/bin/standalone.sh -Dmyproperty=value", "/host= HOST_NAME /jvm=production_jvm:add(heap-size=2048m, max-heap-size=2048m, max-permgen-size=512m, stack-size=1024k, jvm-options=[\"-XX:-UseParallelGC\"])", "/server-group=groupA:add(profile=default, socket-binding-group=standard-sockets) /server-group=groupA/jvm=production_jvm:add", "/server-group=groupA/jvm=production_jvm:write-attribute(name=heap-size,value=\"1024m\")", "/host= HOST_NAME /server-config=server-one/jvm=default:add", "/host= HOST_NAME /server-config=server-one/jvm=default:write-attribute(name=heap-size,value=\"1024m\")" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/configuring_jvm_settings
Chapter 3. Reviewing automation execution environments with automation content navigator
Chapter 3. Reviewing automation execution environments with automation content navigator As a content developer, you can review your automation execution environment with automation content navigator and display the packages and collections included in the automation execution environments. Automation content navigator runs a playbook to extract and display the results. 3.1. Reviewing automation execution environments from automation content navigator You can review your automation execution environments with the automation content navigator text-based user interface. Prerequisites Automation execution environments Procedure Review the automation execution environments included in your automation content navigator configuration. USD ansible-navigator images Type the number of the automation execution environment you want to delve into for more details. You can review the packages and versions of each installed automation execution environment, the Ansible version, and any included collections. Optional: pass in the automation execution environment that you want to use. This becomes the primary and is the automation execution environment that automation content navigator uses. USD ansible-navigator images --eei registry.example.com/example-enterprise-ee:latest Verification Review the automation execution environment output.
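For scripted checks, the same review can run without the text-based user interface; the options below are assumptions to verify against your installed ansible-navigator version.
# Non-interactive listing, suitable for CI logs
ansible-navigator images --mode stdout
# Combine a specific execution environment image with a pull policy
ansible-navigator images --eei registry.example.com/example-enterprise-ee:latest --pull-policy missing --mode stdout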
[ "ansible-navigator images", "ansible-navigator images --eei registry.example.com/example-enterprise-ee:latest" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_content_navigator_creator_guide/assembly-review-ee-navigator_ansible-navigator
Chapter 5. Installing on Azure
Chapter 5. Installing on Azure 5.1. Preparing to install on Azure 5.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 5.1.2. Requirements for installing OpenShift Container Platform on Azure Before installing OpenShift Container Platform on Microsoft Azure, you must configure an Azure account. See Configuring an Azure account for details about account configuration, account limits, public DNS zone configuration, required roles, creating service principals, and supported Azure regions. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Manually creating IAM for Azure for other options. 5.1.3. Choosing a method to install OpenShift Container Platform on Azure You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 5.1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Azure infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on Azure : You can install OpenShift Container Platform on Azure infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options. Installing a customized cluster on Azure : You can install a customized cluster on Azure infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on Azure with network customizations : You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on Azure into an existing VNet : You can install OpenShift Container Platform on an existing Azure Virtual Network (VNet) on Azure. You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure. Installing a private cluster on Azure : You can install a private cluster into an existing Azure Virtual Network (VNet) on Azure. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 
Installing a cluster on Azure into a government region : OpenShift Container Platform can be deployed into Microsoft Azure Government (MAG) regions that are specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure. 5.1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on Azure infrastructure that you provision, by using the following method: Installing a cluster on Azure using ARM templates : You can install OpenShift Container Platform on Azure by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation. 5.1.4. steps Configuring an Azure account 5.2. Configuring an Azure account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 5.2.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 40 20 per region A default cluster requires 40 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap machine uses Standard_D4s_v3 machines, which use 4 vCPUs, the control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the worker machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 40 vCPUs. The bootstrap node VM, which uses 4 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. By default, the installation program distributes control plane and compute machines across all availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. OS Disk 7 VM OS disk must be able to sustain a tested and recommended minimum throughput of 5000 IOPS / 200MBps for control plane machines. This throughput can be provided by having a minimum of 1 TiB Premium SSD (P30). 
In Azure, disk performance is directly dependent on SSD disk sizes, so to achieve the throughput supported by Standard_D8s_v3 , or other similar machine types available, and the target of 5000 IOPS, at least a P30 disk is required. Host caching must be set to ReadOnly for low read latency and high read IOPS and throughput. The reads performed from the cache, which is present either in the VM memory or in the local SSD disk, are much faster than the reads from the data disk, which is in the blob storage. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. 5.2.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. 
Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. 5.2.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click : Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click : Review + create and then click Create . 5.2.4. Required Azure roles OpenShift Container Platform needs a service principal so it can manage Microsoft Azure resources. Before you can create a service principal, your Azure account subscription must have the following roles: User Access Administrator Contributor To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 5.2.5. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Log in to the Azure CLI: USD az login If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. 
Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": <service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Assign the User Access Administrator role by running the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 1 Specify the appId parameter value for your service principal. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 5.2.6. Supported Azure Marketplace regions Installing a cluster by using the Azure Marketplace image is available to customers who purchase the offer in North America and EMEA. While the offer must be purchased in North America or EMEA, you can deploy the cluster to any of the Azure public partitions that OpenShift Container Platform supports. Note Deploying a cluster by using the Azure Marketplace image is not supported for the Azure Government regions. 5.2.7. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. 
Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 5.2.8. steps Install an OpenShift Container Platform cluster on Azure. You can install a customized cluster or quickly install a cluster with default options. 5.3. Manually creating IAM for Azure In environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace, you can put the Cloud Credential Operator (CCO) into manual mode before you install the cluster. 5.3.1. Alternatives to storing administrator-level secrets in the kube-system project The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file. If you prefer not to store an administrator-level credential secret in the cluster kube-system project, you can set the credentialsMode parameter for the CCO to Manual when installing OpenShift Container Platform and manage your cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them. Additional resources For a detailed description of all available CCO credential modes and their supported platforms, see About the Cloud Credential Operator . 5.3.2. Manually create IAM The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. 
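Note The secrets that you create later in this procedure hold values such as your subscription ID, client ID, client secret, and tenant ID, and Kubernetes Secret data fields must be base64 encoded. The following sketch shows one way to encode a value on a Linux host; it is a convenience only and is not part of the official steps:
USD echo -n '<azure_subscription_id>' | base64 -w0
USD echo -n '<azure_client_secret>' | base64 -w0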
Procedure Change to the directory that contains the installation program and create the install-config.yaml file by running the following command: USD openshift-install create install-config --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use by running the following command: USD openshift-install version Example output release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on by running the following command: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 \ --credentials-requests \ --cloud=azure This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component-secret> namespace: <component-namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. For details, see the "Upgrading clusters with manually maintained credentials" section of the installation content for your cloud provider. 5.3.3. 
Upgrading clusters with manually maintained credentials The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. For minor releases, for example, from 4.8 to 4.9, this status prevents you from upgrading until you have addressed any updated permissions and annotated the CloudCredential resource to indicate that the permissions are updated as needed for the version. This annotation changes the Upgradable status to True . For z-stream releases, for example, from 4.9.0 to 4.9.1, no permissions are added or changed, so the upgrade is not blocked. Before upgrading a cluster with manually maintained credentials, you must create any new credentials for the release image that you are upgrading to. Additionally, you must review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components. Procedure Extract and examine the CredentialsRequest custom resource for the new release. The "Manually creating IAM" section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud. Update the manually maintained credentials on your cluster: Create new secrets for any CredentialsRequest custom resources that are added by the new release image. If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed their permissions requirements, update the permissions as required. When all of the secrets are correct for the new release, indicate that the cluster is ready to upgrade: Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata field: USD oc edit cloudcredential cluster Text to add ... metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number> ... Where <version_number> is the version you are upgrading to, in the format x.y.z . For example, 4.8.2 for OpenShift Container Platform 4.8.2. It may take several minutes after adding the annotation for the upgradeable status to change. Verify that the CCO is upgradeable: In the Administrator perspective of the web console, navigate to Administration Cluster Settings . To view the CCO status details, click cloud-credential in the Cluster Operators list. If the Upgradeable status in the Conditions section is False , verify that the upgradeable-to annotation is free of typographical errors. When the Upgradeable status in the Conditions section is True , you can begin the OpenShift Container Platform upgrade. 5.3.4. steps Install an OpenShift Container Platform cluster: Installing a cluster quickly on Azure with default options on installer-provisioned infrastructure Install a cluster with cloud customizations on installer-provisioned infrastructure Install a cluster with network customizations on installer-provisioned infrastructure 5.4. Installing a cluster quickly on Azure In OpenShift Container Platform version 4.9, you can install a cluster on Microsoft Azure that uses the default configuration options. 5.4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. 
If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 5.4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
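For example, a key that uses the ecdsa algorithm can be created with a command of the following form, which is an illustrative variant of the command shown above:
USD ssh-keygen -t ecdsa -b 521 -N '' -f <path>/<file_name>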
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.4.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. 
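Note While the deployment runs, you can optionally follow detailed progress in a second terminal by tailing the installer log file, for example:
USD tail -f <installation_directory>/.openshift_install.log
The same cluster access details are also printed to your terminal when the installation completes.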
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 5.4.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.4.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.4.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.4.9. steps Customize your cluster . If necessary, you can opt out of remote health reporting . 5.5. Installing a cluster on Azure with customizations In OpenShift Container Platform version 4.9, you can install a customized cluster on infrastructure that the installation program provisions on Microsoft Azure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 5.5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 5.5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. 
Before you update the cluster, you update the content of the mirror registry. 5.5.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.5.4. Selecting an Azure Marketplace image If you are deploying an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker nodes. 
When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you are going to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 Note Regardless of the version of OpenShift Container Platform you are installing, the correct version of the Azure Marketplace image to use is 4.8.x. If required, as part of the installation process, your VMs are automatically upgraded. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer. You must update the 99_openshift-cluster-api_worker-machineset-[0-2].yaml files in the section titled "Updating Manifests for Marketplace Installation" before completing the installation. 5.5.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. 
If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.5.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. 
The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 5.5.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 5.5.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 5.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 5.5.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. 
For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 5.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 5.5.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 5.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. 
String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 5.5.6.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table: Table 5.4. Additional Azure parameters Parameter Description Values compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . 
platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 5.5.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 7.9, or RHEL 8.4 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and planned for removal in a future release of OpenShift Container Platform 4. Important You are required to use Azure virtual machines with premiumIO set to true . The machines must also have the hyperVGeneration property contain V1 . 5.5.6.3. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 14 fips: false 15 sshKey: ssh-ed25519 AAAA... 16 1 10 12 14 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 
3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 Specify the name of the resource group that contains the DNS zone for your base domain. 13 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.5.6.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.5.7. Updating manifests for Marketplace installation If you selected a Marketplace image for installation, you must create and modify the manifests to use the Marketplace image. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests by running the following command: USD openshift-install create manifests --dir <installation_dir> Edit the .spec.template.spec.providerSpec.value.image property of the compute machine set definitions, replacing the offer , publisher , sku , and version values with the details gathered in the section titled "Selecting an Azure Marketplace image". These are the three files that must be updated: <installation_dir>/openshift/99_openshift-cluster-api_worker-machineset-0.yaml <installation_dir>/openshift/99_openshift-cluster-api_worker-machineset-1.yaml <installation_dir>/openshift/99_openshift-cluster-api_worker-machineset-2.yaml In each file, replace the value of the .spec.template.spec.providerSpec.value.image.resourceID property with an empty value ( "" ). In each file, set the type property to MarketplaceWithPlan . Using the first machine set file as an example, the .spec.template.spec.providerSpec.value.image section must look like the following example: image: offer: rh-ocp-worker publisher: redhat resourceID: "" sku: rh-ocp-worker version: 4.8.2021122100 type: MarketplaceWithPlan 5.5.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 5.5.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Linux Client entry and save the file. 
Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.5.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.5.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.5.12. steps Customize your cluster . If necessary, you can opt out of remote health reporting . 5.6. 
Installing a cluster on Azure with network customizations In OpenShift Container Platform version 4.9, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Microsoft Azure. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 5.6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . Manual mode can also be used in environments where the cloud IAM APIs are not reachable. 5.6.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.6.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. 
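For example, after the cluster is installed, you can use the private key to log in to a node as the core user, where <node_address> is a placeholder for the IP address or host name of the node: USD ssh -i <path>/<file_name> core@<node_address>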
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.6.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager .
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.6.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 5.6.5.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. 
When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 5.6.5.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 5.6. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"you@example.com" }, "quay.io":{ "auth":"b3Blb=", "email":"you@example.com" } } } 5.6.5.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 5.7. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block.
An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 5.6.5.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 5.8. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines.
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough , or Manual . Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 5.6.5.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table: Table 5.9. Additional Azure parameters Parameter Description Values compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS .
controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 5.6.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.10. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 7.9, or RHEL 8.4 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. 
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and planned for removal in a future release of OpenShift Container Platform 4. Important You are required to use Azure virtual machines with premiumIO set to true . The machines must also have the hyperVGeneration property contain V1 . 5.6.5.3. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 15 fips: false 16 sshKey: ssh-ed25519 AAAA... 17 1 10 13 15 Required. The installation program prompts you for this value. 2 6 11 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 
12 Specify the name of the resource group that contains the DNS zone for your base domain. 14 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 16 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 17 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.6.5.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. 
The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.6.6. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to Installation configuration parameters . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2. 5.6.7. Specifying advanced network configuration You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 5.6.8. 
Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 5.6.8.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 5.11. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 5.12. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 5.13. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. 
This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 5.14. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . This value cannot be changed after cluster installation. genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. Table 5.15. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server.
Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Example OVN-Kubernetes configuration defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 5.16. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 5.6.9. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster. Important You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. 
This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Note For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . 5.6.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 5.6.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.6.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.6.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.6.14. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 5.7. Installing a cluster on Azure into an existing VNet In OpenShift Container Platform version 4.9, you can install a cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 5.7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 5.7.2. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.9, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 5.7.2.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
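For orientation while reading these requirements, note that the existing network is ultimately identified to the installation program through the platform.azure fields of install-config.yaml, which are described in detail later in this chapter. The following fragment is a minimal sketch only; the resource group, VNet, and subnet names are placeholders that you must replace with the names used in your environment:

platform:
  azure:
    networkResourceGroupName: vnet_resource_group   # resource group that contains the existing VNet
    virtualNetwork: vnet                             # name of the existing VNet
    controlPlaneSubnet: control_plane_subnet         # existing subnet for the control plane machines
    computeSubnet: compute_subnet                    # existing subnet for the compute machines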
If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 5.7.2.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 5.17. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. 
To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Additional resources About the OpenShift SDN network plugin 5.7.2.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 5.7.2.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 5.7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.7.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. 
The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.7.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program.
For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.7.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 5.7.6.1. 
Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 5.7.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 5.18. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 5.7.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 5.19. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . 
If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 5.7.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 5.20. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool.
Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 5.7.6.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table: Table 5.21.
Additional Azure parameters Parameter Description Values compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 5.7.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.22. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 7.9, or RHEL 8.4 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and planned for removal in a future release of OpenShift Container Platform 4. Important You are required to use Azure virtual machines with premiumIO set to true . The machines must also have the hyperVGeneration property contain V1 . 5.7.6.3. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 networkResourceGroupName: vnet_resource_group 14 virtualNetwork: vnet 15 controlPlaneSubnet: control_plane_subnet 16 computeSubnet: compute_subnet 17 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 18 fips: false 19 sshKey: ssh-ed25519 AAAA... 20 1 10 12 18 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . 
If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 Specify the name of the resource group that contains the DNS zone for your base domain. 13 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 14 If you use an existing VNet, specify the name of the resource group that contains it. 15 If you use an existing VNet, specify its name. 16 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 17 If you use an existing VNet, specify the name of the subnet to host the compute machines. 19 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 20 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.7.6.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.7.7. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 5.7.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.7.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.7.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.7.11. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 5.8. Installing a private cluster on Azure In OpenShift Container Platform version 4.9, you can install a private cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 5.8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 5.8.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements.
Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 5.8.2.1. Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to the internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 5.8.2.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 5.8.2.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an internal registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. Private cluster with network address translation You can use Azure VNet network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions. When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with Azure Firewall You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster.
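Whichever egress mechanism you choose, user-defined routing itself is requested through two install-config.yaml settings that are documented in the parameter tables earlier in this chapter: publish and platform.azure.outboundType. The following fragment is a hypothetical sketch for a private cluster that relies on pre-existing outbound routing; the resource group, VNet, and subnet names are placeholders:

publish: Internal                        # do not expose public endpoints for the API server or Ingress
platform:
  azure:
    outboundType: UserDefinedRouting     # the installation program creates no outbound rules or public IPs
    networkResourceGroupName: vnet_resource_group
    virtualNetwork: vnet
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet

The NAT gateway, firewall, or proxy itself is configured in Azure before installation; only the routing intent shown here appears in install-config.yaml.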
You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation. When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with a proxy configuration You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy. When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure's internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints. Private cluster with no internet access You can install a private network that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following: An internal registry mirror that allows for pulling container images Access to Azure APIs With these requirements available, you can use user-defined routing to create private clusters with no public endpoints. 5.8.3. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.9, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 5.8.3.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. 
The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 5.8.3.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 5.23. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Additional resources About the OpenShift SDN network plugin 5.8.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. 
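In that split, the required ports listed in Table 5.23 are typically opened in advance by whoever holds the networking permissions. The following Azure CLI sketch is illustrative only; the resource group, network security group name, rule names, and priorities are hypothetical, and you should tighten the source ranges to match your machine network.
$ az network nsg rule create --resource-group <network_rg> --nsg-name <cluster_nsg> \
    --name allow-api --priority 200 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 6443
$ az network nsg rule create --resource-group <network_rg> --nsg-name <cluster_nsg> \
    --name allow-mcs --priority 210 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 22623 --source-address-prefixes 10.0.0.0/16
$ az network nsg rule create --resource-group <network_rg> --nsg-name <cluster_nsg> \
    --name allow-ingress --priority 220 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 80 443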
The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 5.8.3.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 5.8.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.8.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. 
Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.8.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.8.7. Manually creating the installation configuration file For installations of a private OpenShift Container Platform cluster that are only accessible from an internal network and are not visible to the internet, you must manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. 
Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 5.8.7.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 5.8.7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 5.24. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. 
Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 5.8.7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 5.25. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 5.8.7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 5.26. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. 
compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 
Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 5.8.7.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table: Table 5.27. Additional Azure parameters Parameter Description Values compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . 
platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 5.8.7.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.28. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 7.9, or RHEL 8.4 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and planned for removal in a future release of OpenShift Container Platform 4. Important You are required to use Azure virtual machines with premiumIO set to true . The machines must also have the hyperVGeneration property contain V1 . 5.8.7.3. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 networkResourceGroupName: vnet_resource_group 14 virtualNetwork: vnet 15 controlPlaneSubnet: control_plane_subnet 16 computeSubnet: compute_subnet 17 outboundType: UserDefinedRouting 18 cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 1 10 12 19 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 Specify the name of the resource group that contains the DNS zone for your base domain. 13 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 14 If you use an existing VNet, specify the name of the resource group that contains it. 15 If you use an existing VNet, specify its name. 16 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 17 If you use an existing VNet, specify the name of the subnet to host the compute machines. 18 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 5.8.7.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. 
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.8.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 5.8.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . 
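After you install oc by using one of the following procedures, you can confirm that the client on your PATH is the expected release rather than an older copy; a minimal check:
$ oc version --client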
Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.8.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.8.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 
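After you log in with the exported kubeconfig as described in the preceding procedure, a quick health check before you move on is to list the nodes and cluster Operators; a minimal sketch:
$ oc get nodes
$ oc get clusteroperators
Every node should eventually report Ready, and each cluster Operator should report AVAILABLE as True, before you customize the cluster.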
Additional resources See About remote health monitoring for more information about the Telemetry service 5.8.12. steps Customize your cluster . If necessary, you can opt out of remote health reporting . 5.9. Installing a cluster on Azure into a government region In OpenShift Container Platform version 4.9, you can install a cluster on Microsoft Azure into a government region. To configure the government region, you modify parameters in the install-config.yaml file before you install the cluster. 5.9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated government region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 5.9.2. Azure government regions OpenShift Container Platform supports deploying a cluster to Microsoft Azure Government (MAG) regions. MAG is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure. MAG is composed of government-only data center regions, all granted an Impact Level 5 Provisional Authorization . Installing to a MAG region requires manually configuring the Azure Government dedicated cloud instance and region in the install-config.yaml file. You must also update your service principal to reference the appropriate government environment. Note The Azure government region cannot be selected using the guided terminal prompts from the installation program. You must define the region manually in the install-config.yaml file. Remember to also set the dedicated cloud instance, like AzureUSGovernmentCloud , based on the region specified. 5.9.3. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. 
For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 5.9.3.1. Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending how your network connects to the private VNET, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 5.9.3.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 5.9.3.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an internal registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. Private cluster with network address translation You can use Azure VNET network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions. When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with Azure Firewall You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation. When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with a proxy configuration You can use a proxy with user-defined routing to allow egress to the internet. 
You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy. When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure's internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints. Private cluster with no internet access You can install a private network that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following: An internal registry mirror that allows for pulling container images Access to Azure APIs With these requirements available, you can use user-defined routing to create private clusters with no public endpoints. 5.9.4. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.9, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 5.9.4.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. 
Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 5.9.4.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 5.29. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Additional resources About the OpenShift SDN network plugin 5.9.4.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. 
You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 5.9.4.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 5.9.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.9.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
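For example, on a FIPS-enabled installation host you might generate an RSA key instead; this is a minimal sketch, and the file name is only a placeholder:
$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name>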
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.9.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.9.8. Manually creating the installation configuration file When installing OpenShift Container Platform on Microsoft Azure into a government region, you must manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. 
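Tip Before you embed the pull secret in the configuration file, you can optionally confirm that the downloaded file is intact. One way to do this, assuming the jq utility is available and using a placeholder path, is to parse it as JSON:
USD jq . <path>/pull-secret.txt > /dev/null && echo "pull secret parses as JSON"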
Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 5.9.8.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 5.9.8.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 5.30. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. 
Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 5.9.8.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 5.31. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 5.9.8.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 5.32. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. 
compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 
Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 5.9.8.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table: Table 5.33. Additional Azure parameters Parameter Description Values compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . 
platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 5.9.8.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.34. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 7.9, or RHEL 8.4 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and planned for removal in a future release of OpenShift Container Platform 4. Important You are required to use Azure virtual machines with premiumIO set to true . The machines must also have the hyperVGeneration property contain V1 . 5.9.8.3. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 11 region: usgovvirginia resourceGroupName: existing_resource_group 12 networkResourceGroupName: vnet_resource_group 13 virtualNetwork: vnet 14 controlPlaneSubnet: control_plane_subnet 15 computeSubnet: compute_subnet 16 outboundType: UserDefinedRouting 17 cloudName: AzureUSGovernmentCloud 18 pullSecret: '{"auths": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 1 10 19 Required. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 Specify the name of the resource group that contains the DNS zone for your base domain. 12 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 13 If you use an existing VNet, specify the name of the resource group that contains it. 14 If you use an existing VNet, specify its name. 15 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 16 If you use an existing VNet, specify the name of the subnet to host the compute machines. 17 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 18 Specify the name of the Azure cloud environment to deploy your cluster to. Set AzureUSGovernmentCloud to deploy to a Microsoft Azure Government (MAG) region. The default value is AzurePublicCloud . 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 5.9.8.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
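If you provide an additional trust bundle, you can optionally confirm after the cluster is installed that the config map was created. This check is informational only and assumes that the oc CLI is installed and that you are logged in to the cluster:
USD oc get configmap user-ca-bundle -n openshift-config -o yaml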
Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.9.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 5.9.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. 
Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.9.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.9.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
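One way to cross-check the inventory entry against your running cluster, assuming that the oc CLI is installed and that you are logged in, is to display the cluster ID that Telemetry reports:
USD oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}'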
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.9.13. steps Customize your cluster . If necessary, you can opt out of remote health reporting . 5.10. Installing a cluster on Azure using ARM templates In OpenShift Container Platform version 4.9, you can install a cluster on Microsoft Azure by using infrastructure that you provide. Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 5.10.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster. You downloaded the Azure CLI and installed it on your computer. See Install the Azure CLI in the Azure documentation. The documentation below was last tested using version 2.38.0 of the Azure CLI. Azure CLI commands might perform differently based on the version you use. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . Note Be sure to also review this site list if you are configuring a proxy. 5.10.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.9, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.10.3. Configuring your Azure project Before you can install OpenShift Container Platform, you must configure an Azure project to host it. 
Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 5.10.3.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 40 20 per region A default cluster requires 40 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap machine uses Standard_D4s_v3 machines, which use 4 vCPUs, the control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the worker machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 40 vCPUs. The bootstrap node VM, which uses 4 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. By default, the installation program distributes control plane and compute machines across all availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. OS Disk 7 VM OS disk must be able to sustain a tested and recommended minimum throughput of 5000 IOPS / 200MBps for control plane machines. This throughput can be provided by having a minimum of 1 TiB Premium SSD (P30). In Azure, disk performance is directly dependent on SSD disk sizes, so to achieve the throughput supported by Standard_D8s_v3 , or other similar machine types available, and the target of 5000 IOPS, at least a P30 disk is required. Host caching must be set to ReadOnly for low read latency and high read IOPS and throughput. The reads performed from the cache, which is present either in the VM memory or in the local SSD disk, are much faster than the reads from the data disk, which is in the blob storage. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. 
The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. 5.10.3.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. You can view Azure's DNS solution by visiting this example for creating DNS zones . 5.10.3.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. 
For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click : Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click : Review + create and then click Create . 5.10.3.4. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 5.10.3.5. Required Azure roles OpenShift Container Platform needs a service principal so it can manage Microsoft Azure resources. Before you can create a service principal, your Azure account subscription must have the following roles: User Access Administrator Contributor To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 5.10.3.6. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Log in to the Azure CLI: USD az login If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. 
You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": <service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Assign the User Access Administrator role by running the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 1 Specify the appId parameter value for your service principal. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 5.10.3.7. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 5.10.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 5.10.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 5.35. 
Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 7.9, or RHEL 8.4. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 5.10.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.36. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 7.9, or RHEL 8.4 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and planned for removal in a future release of OpenShift Container Platform 4. Important You are required to use Azure virtual machines with premiumIO set to true . The machines must also have the hyperVGeneration property contain V1 . 5.10.5. Selecting an Azure Marketplace image If you are deploying an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker nodes. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. 
If you are going to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 Note Regardless of the version of OpenShift Container Platform you are installing, the correct version of the Azure Marketplace image to use is 4.8.x. If required, as part of the installation process, your VMs are automatically upgraded. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer and use them to update the 06_workers.json Azure Resource Manager (ARM) template. Update the storageProfile.imageReference field by deleting the id parameter and adding the offer , publisher , sku , and version parameters by using the values from your offer. You can find a sample template in the "Creating additional worker machines in Azure" section. 5.10.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. 
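For example, on a computer that uses a Linux operating system, you might keep the installation program and the files that it generates together in a dedicated directory; the directory name used here is only an illustration:
USD mkdir -p ~/azure-upi-install
USD mv openshift-install-linux.tar.gz ~/azure-upi-install/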
Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.10.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
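If your cluster will use the FIPS Validated / Modules in Process cryptographic libraries mentioned in the earlier note, generate an rsa or ecdsa key instead of ed25519. A minimal sketch with rsa; the file name is only an example:
# Sketch: create an RSA key pair for FIPS-enabled clusters (the file name is illustrative).
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/ocp_azure_rsa
# The matching public key, ~/.ssh/ocp_azure_rsa.pub, is the value you later export as SSH_KEY.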
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 5.10.8. Creating the installation files for Azure To install OpenShift Container Platform on Microsoft Azure using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 5.10.8.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... 
INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 5.10.8.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. 
Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Optional: If you do not want the cluster to provision compute machines, empty the compute pool by editing the resulting install-config.yaml file to set replicas to 0 for the compute pool: compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1 1 Set to 0 . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 5.10.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.10.8.4. Exporting common variables for ARM templates You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates used to assist in completing a user-provided infrastructure install on Microsoft Azure. Note Specific ARM templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Export common variables found in the install-config.yaml to be used by the provided ARM templates: USD export CLUSTER_NAME=<cluster_name> 1 USD export AZURE_REGION=<azure_region> 2 USD export SSH_KEY=<ssh_key> 3 USD export BASE_DOMAIN=<base_domain> 4 USD export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5 1 The value of the .metadata.name attribute from the install-config.yaml file. 2 The region to deploy the cluster into, for example centralus . This is the value of the .platform.azure.region attribute from the install-config.yaml file. 3 The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file. 4 The base domain to deploy the cluster to. 
The base domain corresponds to the public DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file. 5 The resource group where the public DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file. For example: USD export CLUSTER_NAME=test-cluster USD export AZURE_REGION=centralus USD export SSH_KEY="ssh-rsa xxx/xxx/xxx= [email protected]" USD export BASE_DOMAIN=example.com USD export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 5.10.8.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . 
This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates: Export the infrastructure ID by using the following command: USD export INFRA_ID=<infra_id> 1 1 The OpenShift Container Platform cluster has been assigned an identifier ( INFRA_ID ) in the form of <cluster_name>-<random_string> . This will be used as the base name for most resources created using the provided ARM templates. This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file. Export the resource group by using the following command: USD export RESOURCE_GROUP=<resource_group> 1 1 All resources created in this Azure deployment exist as part of a resource group . The resource group name is also based on the INFRA_ID , in the form of <cluster_name>-<random_string>-rg . This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory. 5.10.9. Creating the Azure resource group You must create a Microsoft Azure resource group and an identity for that resource group. These are both used during the installation of your OpenShift Container Platform cluster on Azure. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the resource group in a supported Azure region: USD az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION} Create an Azure identity for the resource group: USD az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity This is used to grant the required access to Operators in your cluster. For example, this allows the Ingress Operator to create a public IP and its load balancer. You must assign the Azure identity to a role.
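Before assigning the role in the next step, you can optionally confirm that the user-assigned identity exists and note its principal ID. This is only a sanity-check sketch built from standard az CLI arguments:
# Sketch: confirm the identity created above and display its name and principal ID.
az identity show \
  -g "${RESOURCE_GROUP}" \
  -n "${INFRA_ID}-identity" \
  --query "{name:name, principalId:principalId}" \
  --output table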
Grant the Contributor role to the Azure identity: Export the following variables required by the Azure role assignment: USD export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv` USD export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv` Assign the Contributor role to the identity: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role 'Contributor' --scope "USD{RESOURCE_GROUP_ID}" 5.10.10. Uploading the RHCOS cluster image and bootstrap Ignition config file The Azure client does not support deployments based on files existing locally; therefore, you must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create an Azure storage account to store the VHD cluster image: USD az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS Warning The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation. Export the storage account key as an environment variable: USD export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query "[0].value" -o tsv` Choose the RHCOS version to use and export the URL of its VHD to an environment variable: USD export VHD_URL=`curl -s https://raw.githubusercontent.com/openshift/installer/release-4.9/data/data/rhcos.json | jq -r .azure.url` Important The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. Create the storage container for the VHD: USD az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} Copy the chosen VHD to a blob: USD az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob "rhcos.vhd" --destination-container vhd --source-uri "USD{VHD_URL}" Create a blob storage container and upload the generated bootstrap.ign file: USD az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --public-access blob USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign" 5.10.11. Example for creating DNS zones DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario. For this example, Azure's DNS solution is used, so you will create a new public DNS zone for external (internet) visibility and a private DNS zone for internal cluster resolution. 
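One point worth noting about the VHD copy started in the previous section: az storage blob copy start returns immediately and the copy continues asynchronously in Azure. A minimal sketch that polls the copy status until the blob is ready, using the same account variables as above:
# Sketch: wait until the asynchronous RHCOS VHD copy reports success.
status="unknown"
while [ "$status" != "success" ]; do
  status=$(az storage blob show \
    --container-name vhd --name "rhcos.vhd" \
    --account-name "${CLUSTER_NAME}sa" --account-key "${ACCOUNT_KEY}" \
    -o tsv --query properties.copy.status)
  echo "Copy status: ${status}"
  sleep 10
done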
Note The public DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the public DNS zone; be sure the installation config you generated earlier reflects that scenario. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the new public DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable: USD az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can skip this step if you are using a public DNS zone that already exists. Create the private DNS zone in the same resource group as the rest of this deployment: USD az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can learn more about configuring a public DNS zone in Azure by visiting that section. 5.10.12. Creating a VNet in Azure You must create a virtual network (VNet) in Microsoft Azure for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster's installation directory. This template describes the VNet that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/01_vnet.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Link the VNet template to the private DNS zone: USD az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v "USD{INFRA_ID}-vnet" -e false 5.10.12.1. ARM template for the VNet You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster: Example 5.1. 
01_vnet.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "addressPrefix" : "10.0.0.0/16", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetPrefix" : "10.0.0.0/24", "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]", "nodeSubnetPrefix" : "10.0.1.0/24", "clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/virtualNetworks", "name" : "[variables('virtualNetworkName')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]" ], "properties" : { "addressSpace" : { "addressPrefixes" : [ "[variables('addressPrefix')]" ] }, "subnets" : [ { "name" : "[variables('masterSubnetName')]", "properties" : { "addressPrefix" : "[variables('masterSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } }, { "name" : "[variables('nodeSubnetName')]", "properties" : { "addressPrefix" : "[variables('nodeSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } } ] } }, { "type" : "Microsoft.Network/networkSecurityGroups", "name" : "[variables('clusterNsgName')]", "apiVersion" : "2018-10-01", "location" : "[variables('location')]", "properties" : { "securityRules" : [ { "name" : "apiserver_in", "properties" : { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "6443", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 101, "direction" : "Inbound" } } ] } } ] } 5.10.13. Deploying the RHCOS cluster image for the Azure infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure for your OpenShift Container Platform nodes. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container. Store the bootstrap Ignition config file in an Azure storage container. Procedure Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster's installation directory. This template describes the image storage that your cluster requires. Export the RHCOS VHD blob URL as a variable: USD export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv` Deploy the cluster image: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/02_storage.json" \ --parameters vhdBlobURL="USD{VHD_BLOB_URL}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The blob URL of the RHCOS VHD to be used to create master and worker machines. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 5.10.13.1. 
ARM template for image storage You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster: Example 5.2. 02_storage.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vhdBlobURL" : { "type" : "string", "metadata" : { "description" : "URL pointing to the blob where the VHD to be used to create master and worker machines is located" } } }, "variables" : { "location" : "[resourceGroup().location]", "imageName" : "[concat(parameters('baseName'), '-image')]" }, "resources" : [ { "apiVersion" : "2018-06-01", "type": "Microsoft.Compute/images", "name": "[variables('imageName')]", "location" : "[variables('location')]", "properties": { "storageProfile": { "osDisk": { "osType": "Linux", "osState": "Generalized", "blobUri": "[parameters('vhdBlobURL')]", "storageAccountType": "Standard_LRS" } } } } ] } 5.10.14. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 5.10.14.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 5.37. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 5.38. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 5.39. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 5.10.15. Creating networking and load balancing components in Azure You must configure networking and load balancing in Microsoft Azure for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. 
Create and configure a VNet and associated subnets in Azure. Procedure Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster's installation directory. This template describes the networking and load balancing objects that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/03_infra.json" \ --parameters privateDNSZoneName="USD{CLUSTER_NAME}.USD{BASE_DOMAIN}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The name of the private DNS zone. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Create an api DNS record in the public zone for the API public load balancer. The USD{BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the public DNS zone exists. Export the following variable: USD export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query "[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv` Create the api DNS record in a new public zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60 If you are adding the cluster to an existing public zone, you can create the api DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60 5.10.15.1. ARM template for the network and load balancers You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster: Example 5.3. 
03_infra.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "privateDNSZoneName" : { "type" : "string", "metadata" : { "description" : "Name of the private DNS zone" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterPublicIpAddressName" : "[concat(parameters('baseName'), '-master-pip')]", "masterPublicIpAddressID" : "[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]", "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", "masterLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "internalLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]", "skuName": "Standard" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('masterPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('masterPublicIpAddressName')]" } } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('masterLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "dependsOn" : [ "[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]" ], "properties" : { "frontendIPConfigurations" : [ { "name" : "public-lb-ip", "properties" : { "publicIPAddress" : { "id" : "[variables('masterPublicIpAddressID')]" } } } ], "backendAddressPools" : [ { "name" : "public-lb-backend" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" :"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip')]" }, "backendAddressPool" : { "id" : "[concat(variables('masterLoadBalancerID'), '/backendAddressPools/public-lb-backend')]" }, "protocol" : "Tcp", "loadDistribution" : "Default", "idleTimeoutInMinutes" : 30, "frontendPort" : 6443, "backendPort" : 6443, "probe" : { "id" : "[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('internalLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "frontendIPConfigurations" : [ { "name" : "internal-lb-ip", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, 
"privateIPAddressVersion" : "IPv4" } } ], "backendAddressPools" : [ { "name" : "internal-lb-backend" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 6443, "backendPort" : 6443, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]" } } }, { "name" : "sint", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 22623, "backendPort" : 22623, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } }, { "name" : "sint-probe", "properties" : { "protocol" : "Https", "port" : 22623, "requestPath": "/healthz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api-int')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } } ] } 5.10.16. Creating the bootstrap machine in Azure You must create the bootstrap machine in Microsoft Azure to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Procedure Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster's installation directory. 
This template describes the bootstrap machine that your cluster requires. Export the bootstrap URL variable: USD export BOOTSTRAP_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -n "bootstrap.ign" -o tsv` Export the bootstrap ignition variable: USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/04_bootstrap.json" \ --parameters bootstrapIgnition="USD{BOOTSTRAP_IGNITION}" \ 1 --parameters sshKeyData="USD{SSH_KEY}" \ 2 --parameters baseName="USD{INFRA_ID}" 3 1 The bootstrap Ignition content for the bootstrap cluster. 2 The SSH RSA public key file as a string. 3 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 5.10.16.1. ARM template for the bootstrap machine You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 5.4. 04_bootstrap.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "bootstrapIgnition" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Bootstrap ignition content for the bootstrap cluster" } }, "sshKeyData" : { "type" : "securestring", "metadata" : { "description" : "SSH RSA public key file as a string." 
} }, "bootstrapVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "allowedValues" : [ "Standard_A2", "Standard_A3", "Standard_A4", "Standard_A5", "Standard_A6", "Standard_A7", "Standard_A8", "Standard_A9", "Standard_A10", "Standard_A11", "Standard_D2", "Standard_D3", "Standard_D4", "Standard_D11", "Standard_D12", "Standard_D13", "Standard_D14", "Standard_D2_v2", "Standard_D3_v2", "Standard_D4_v2", "Standard_D5_v2", "Standard_D8_v3", "Standard_D11_v2", "Standard_D12_v2", "Standard_D13_v2", "Standard_D14_v2", "Standard_E2_v3", "Standard_E4_v3", "Standard_E8_v3", "Standard_E16_v3", "Standard_E32_v3", "Standard_E64_v3", "Standard_E2s_v3", "Standard_E4s_v3", "Standard_E8s_v3", "Standard_E16s_v3", "Standard_E32s_v3", "Standard_E64s_v3", "Standard_G1", "Standard_G2", "Standard_G3", "Standard_G4", "Standard_G5", "Standard_DS2", "Standard_DS3", "Standard_DS4", "Standard_DS11", "Standard_DS12", "Standard_DS13", "Standard_DS14", "Standard_DS2_v2", "Standard_DS3_v2", "Standard_DS4_v2", "Standard_DS5_v2", "Standard_DS11_v2", "Standard_DS12_v2", "Standard_DS13_v2", "Standard_DS14_v2", "Standard_GS1", "Standard_GS2", "Standard_GS3", "Standard_GS4", "Standard_GS5", "Standard_D2s_v3", "Standard_D4s_v3", "Standard_D8s_v3" ], "metadata" : { "description" : "The size of the Bootstrap Virtual Machine" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "vmName" : "[concat(parameters('baseName'), '-bootstrap')]", "nicName" : "[concat(variables('vmName'), '-nic')]", "imageName" : "[concat(parameters('baseName'), '-image')]", "clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]", "sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('sshPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "Standard" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('sshPublicIpAddressName')]" } } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[variables('nicName')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" ], "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" }, "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]" }, { "id" : 
"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmName')]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('bootstrapVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmName')]", "adminUsername" : "core", "customData" : "[parameters('bootstrapIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : true, "ssh" : { "publicKeys" : [ { "path" : "[variables('sshKeyPath')]", "keyData" : "[parameters('sshKeyData')]" } ] } } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmName'),'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : 100 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" } ] } } }, { "apiVersion" : "2018-06-01", "type": "Microsoft.Network/networkSecurityGroups/securityRules", "name" : "[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]" ], "properties": { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "22", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 100, "direction" : "Inbound" } } ] } 5.10.17. Creating the control plane machines in Azure You must create the control plane machines in Microsoft Azure for your cluster to use. One way to create these machines is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster's installation directory. This template describes the control plane machines that your cluster requires. 
Export the following variable needed by the control plane machine deployment: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/05_masters.json" \ --parameters masterIgnition="USD{MASTER_IGNITION}" \ 1 --parameters sshKeyData="USD{SSH_KEY}" \ 2 --parameters privateDNSZoneName="USD{CLUSTER_NAME}.USD{BASE_DOMAIN}" \ 3 --parameters baseName="USD{INFRA_ID}" 4 1 The Ignition content for the control plane nodes. 2 The SSH RSA public key file as a string. 3 The name of the private DNS zone to which the control plane nodes are attached. 4 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 5.10.17.1. ARM template for control plane machines You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 5.5. 05_masters.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "masterIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the master nodes" } }, "numberOfMasters" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift masters to deploy" } }, "sshKeyData" : { "type" : "securestring", "metadata" : { "description" : "SSH RSA public key file as a string" } }, "privateDNSZoneName" : { "type" : "string", "metadata" : { "description" : "Name of the private DNS zone the master nodes are going to be attached to" } }, "masterVMSize" : { "type" : "string", "defaultValue" : "Standard_D8s_v3", "allowedValues" : [ "Standard_A2", "Standard_A3", "Standard_A4", "Standard_A5", "Standard_A6", "Standard_A7", "Standard_A8", "Standard_A9", "Standard_A10", "Standard_A11", "Standard_D2", "Standard_D3", "Standard_D4", "Standard_D11", "Standard_D12", "Standard_D13", "Standard_D14", "Standard_D2_v2", "Standard_D3_v2", "Standard_D4_v2", "Standard_D5_v2", "Standard_D8_v3", "Standard_D11_v2", "Standard_D12_v2", "Standard_D13_v2", "Standard_D14_v2", "Standard_E2_v3", "Standard_E4_v3", "Standard_E8_v3", "Standard_E16_v3", "Standard_E32_v3", "Standard_E64_v3", "Standard_E2s_v3", "Standard_E4s_v3", "Standard_E8s_v3", "Standard_E16s_v3", "Standard_E32s_v3", "Standard_E64s_v3", "Standard_G1", "Standard_G2", "Standard_G3", "Standard_G4", "Standard_G5", "Standard_DS2", "Standard_DS3", "Standard_DS4", "Standard_DS11", "Standard_DS12", "Standard_DS13", "Standard_DS14", "Standard_DS2_v2", "Standard_DS3_v2", "Standard_DS4_v2", "Standard_DS5_v2", "Standard_DS11_v2", "Standard_DS12_v2", "Standard_DS13_v2", "Standard_DS14_v2", "Standard_GS1", "Standard_GS2", "Standard_GS3", "Standard_GS4", "Standard_GS5", "Standard_D2s_v3", "Standard_D4s_v3", "Standard_D8s_v3" ], "metadata" : { "description" : "The size of the Master Virtual Machines" } }, "diskSizeGB" : { "type" : "int", "defaultValue" : 1024, "metadata" : { "description" : "Size of the Master VM OS disk, in GB" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "virtualNetworkID" : 
"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "imageName" : "[concat(parameters('baseName'), '-image')]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfMasters')]", "input" : "[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]" } ] }, "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "copy" : { "name" : "nicCopy", "count" : "[length(variables('vmNames'))]" }, "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/SRV", "name": "[concat(parameters('privateDNSZoneName'), '/_etcd-server-ssl._tcp')]", "location" : "[variables('location')]", "properties": { "ttl": 60, "copy": [{ "name": "srvRecords", "count": "[length(variables('vmNames'))]", "input": { "priority": 0, "weight" : 10, "port" : 2380, "target" : "[concat('etcd-', copyIndex('srvRecords'), '.', parameters('privateDNSZoneName'))]" } }] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "copy" : { "name" : "dnsCopy", "count" : "[length(variables('vmNames'))]" }, "name": "[concat(parameters('privateDNSZoneName'), '/etcd-', copyIndex())]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(concat(variables('vmNames')[copyIndex()], '-nic')).ipConfigurations[0].properties.privateIPAddress]" } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "copy" : { "name" : "vmCopy", "count" : "[length(variables('vmNames'))]" }, "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]", "[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/A/etcd-', copyIndex())]", "[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/SRV/_etcd-server-ssl._tcp')]" ], 
"properties" : { "hardwareProfile" : { "vmSize" : "[parameters('masterVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "core", "customData" : "[parameters('masterIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : true, "ssh" : { "publicKeys" : [ { "path" : "[variables('sshKeyPath')]", "keyData" : "[parameters('sshKeyData')]" } ] } } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()], '_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "caching": "ReadOnly", "writeAcceleratorEnabled": false, "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : "[parameters('diskSizeGB')]" } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": false } } ] } } } ] } 5.10.18. Wait for bootstrap completion and remove bootstrap resources in Azure After you create all of the required infrastructure in Microsoft Azure, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in USD az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes USD az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes USD az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait USD az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign USD az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip Note If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. 5.10.19. Creating additional worker machines in Azure You can create worker machines in Microsoft Azure for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. 
In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. You can launch additional instances by increasing the value of the numberOfNodes template parameter or by including additional copies of the resources that are defined in 06_workers.json . Note If you do not use the provided ARM template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster's installation directory. This template describes the worker machines that your cluster requires. Export the following variable needed by the worker machine deployment: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/06_workers.json" \ --parameters workerIgnition="USD{WORKER_IGNITION}" \ 1 --parameters sshKeyData="USD{SSH_KEY}" \ 2 --parameters baseName="USD{INFRA_ID}" 3 1 The Ignition content for the worker nodes. 2 The SSH RSA public key file as a string. 3 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 5.10.19.1. ARM template for worker machines You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 5.6.
06_workers.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "workerIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the worker nodes" } }, "numberOfNodes" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift compute nodes to deploy" } }, "sshKeyData" : { "type" : "securestring", "metadata" : { "description" : "SSH RSA public key file as a string" } }, "nodeVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "allowedValues" : [ "Standard_A2", "Standard_A3", "Standard_A4", "Standard_A5", "Standard_A6", "Standard_A7", "Standard_A8", "Standard_A9", "Standard_A10", "Standard_A11", "Standard_D2", "Standard_D3", "Standard_D4", "Standard_D11", "Standard_D12", "Standard_D13", "Standard_D14", "Standard_D2_v2", "Standard_D3_v2", "Standard_D4_v2", "Standard_D5_v2", "Standard_D8_v3", "Standard_D11_v2", "Standard_D12_v2", "Standard_D13_v2", "Standard_D14_v2", "Standard_E2_v3", "Standard_E4_v3", "Standard_E8_v3", "Standard_E16_v3", "Standard_E32_v3", "Standard_E64_v3", "Standard_E2s_v3", "Standard_E4s_v3", "Standard_E8s_v3", "Standard_E16s_v3", "Standard_E32s_v3", "Standard_E64s_v3", "Standard_G1", "Standard_G2", "Standard_G3", "Standard_G4", "Standard_G5", "Standard_DS2", "Standard_DS3", "Standard_DS4", "Standard_DS11", "Standard_DS12", "Standard_DS13", "Standard_DS14", "Standard_DS2_v2", "Standard_DS3_v2", "Standard_DS4_v2", "Standard_DS5_v2", "Standard_DS11_v2", "Standard_DS12_v2", "Standard_DS13_v2", "Standard_DS14_v2", "Standard_GS1", "Standard_GS2", "Standard_GS3", "Standard_GS4", "Standard_GS5", "Standard_D2s_v3", "Standard_D4s_v3", "Standard_D8s_v3" ], "metadata" : { "description" : "The size of the each Node Virtual Machine" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]", "nodeSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]", "infraLoadBalancerName" : "[parameters('baseName')]", "sshKeyPath" : "/home/capi/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "imageName" : "[concat(parameters('baseName'), '-image')]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfNodes')]", "input" : "[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]" } ] }, "resources" : [ { "apiVersion" : "2019-05-01", "name" : "[concat('node', copyIndex())]", "type" : "Microsoft.Resources/deployments", "copy" : { "name" : "nodeCopy", "count" : "[length(variables('vmNames'))]" }, "properties" : { "mode" : "Incremental", "template" : { "USDschema" : "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", 
"properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('nodeSubnetRef')]" } } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "tags" : { "kubernetes.io-cluster-ffranzupi": "owned" }, "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('nodeVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "capi", "customData" : "[parameters('workerIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : true, "ssh" : { "publicKeys" : [ { "path" : "[variables('sshKeyPath')]", "keyData" : "[parameters('sshKeyData')]" } ] } } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()],'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB": 128 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": true } } ] } } } ] } } } ] } 5.10.20. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. 
Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.10.21. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 5.10.22. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 5.10.23. Adding the Ingress DNS records If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites You deployed an OpenShift Container Platform cluster on Microsoft Azure by using infrastructure that you provisioned. Install the OpenShift CLI ( oc ). Install or update the Azure CLI . Procedure Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20 Export the Ingress router IP as a variable: USD export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add a *.apps record to the public DNS zone. 
If you are adding this cluster to a new public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you are adding this cluster to an already existing public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300 Add a *.apps record to the private DNS zone: Create a *.apps record by using the following command: USD az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300 Add the *.apps record to the private DNS zone by using the following command: USD az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com grafana-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com 5.10.24. Completing an Azure installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Microsoft Azure user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.10.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.9, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.11. Uninstalling a cluster on Azure You can remove a cluster that you deployed to Microsoft Azure. 5.11.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites Have a copy of the installation program that you used to deploy the cluster. Have the files that the installation program generated when you created your cluster. While you can uninstall the cluster using the copy of the installation program that was used to deploy it, using OpenShift Container Platform version 4.13 or later is recommended. The removal of service principals is dependent on the Microsoft Azure AD Graph API. Using version 4.13 or later of the installation program ensures that service principals are removed without the need for manual intervention, if and when Microsoft decides to retire the Azure AD Graph API. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
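Optionally, confirm that the Azure resources were removed. The following commands are a minimal sketch that assumes the RESOURCE_GROUP variable from the provisioning steps in this chapter; az group exists returns false when the resource group is gone, and, if the group still exists, az resource list shows any resources that remain:
USD az group exists --name USD{RESOURCE_GROUP}
USD az resource list -g USD{RESOURCE_GROUP} -o table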
[ "az login", "az account list --refresh", "[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }", "az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1", "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests --dir <installation_directory>", "openshift-install version", "release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64", "oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component-secret> namespace: <component-namespace>", "apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "oc edit cloudcredential cluster", "metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>", 
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: 
name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 14 fips: false 15 sshKey: ssh-ed25519 AAAA... 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "openshift-install create manifests --dir <installation_dir>", "image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker version: 4.8.2021122100 type: MarketplaceWithPlan", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 15 fips: false 16 sshKey: ssh-ed25519 AAAA... 
17", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 
10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 networkResourceGroupName: vnet_resource_group 14 virtualNetwork: vnet 15 controlPlaneSubnet: control_plane_subnet 16 computeSubnet: compute_subnet 17 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 18 fips: false 19 sshKey: ssh-ed25519 AAAA... 20", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 11 region: centralus 12 resourceGroupName: existing_resource_group 13 networkResourceGroupName: vnet_resource_group 14 virtualNetwork: vnet 15 controlPlaneSubnet: control_plane_subnet 16 computeSubnet: compute_subnet 17 outboundType: UserDefinedRouting 18 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 
21 publish: Internal 22", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: resource_group 11 region: usgovvirginia resourceGroupName: existing_resource_group 12 networkResourceGroupName: vnet_resource_group 13 virtualNetwork: vnet 14 controlPlaneSubnet: control_plane_subnet 15 computeSubnet: compute_subnet 16 outboundType: UserDefinedRouting 17 cloudName: AzureUSGovernmentCloud 18 pullSecret: '{\"auths\": ...}' 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "az login", "az account list --refresh", "[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. 
For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }", "az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? 
SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5", "export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "export INFRA_ID=<infra_id> 1", "export RESOURCE_GROUP=<resource_group> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}", "az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity", "export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv`", "export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv`", "az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role 'Contributor' --scope \"USD{RESOURCE_GROUP_ID}\"", "az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS", "export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`", "export VHD_URL=`curl -s https://raw.githubusercontent.com/openshift/installer/release-4.9/data/data/rhcos.json | jq -r .azure.url`", "az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}", "az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob \"rhcos.vhd\" --destination-container vhd --source-uri \"USD{VHD_URL}\"", "az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --public-access blob", "az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"", "az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}", "az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1", "az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v \"USD{INFRA_ID}-vnet\" -e false", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"addressPrefix\" : \"10.0.0.0/16\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetPrefix\" : \"10.0.0.0/24\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetPrefix\" : \"10.0.1.0/24\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/virtualNetworks\", \"name\" : \"[variables('virtualNetworkName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]\" ], \"properties\" : { \"addressSpace\" : { \"addressPrefixes\" : [ \"[variables('addressPrefix')]\" ] }, \"subnets\" : [ { \"name\" : \"[variables('masterSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('masterSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { 
\"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } }, { \"name\" : \"[variables('nodeSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('nodeSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } } ] } }, { \"type\" : \"Microsoft.Network/networkSecurityGroups\", \"name\" : \"[variables('clusterNsgName')]\", \"apiVersion\" : \"2018-10-01\", \"location\" : \"[variables('location')]\", \"properties\" : { \"securityRules\" : [ { \"name\" : \"apiserver_in\", \"properties\" : { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"6443\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 101, \"direction\" : \"Inbound\" } } ] } } ] }", "export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vhdBlobURL\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"URL pointing to the blob where the VHD to be used to create master and worker machines is located\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Compute/images\", \"name\": \"[variables('imageName')]\", \"location\" : \"[variables('location')]\", \"properties\": { \"storageProfile\": { \"osDisk\": { \"osType\": \"Linux\", \"osState\": \"Generalized\", \"blobUri\": \"[parameters('vhdBlobURL')]\", \"storageAccountType\": \"Standard_LRS\" } } } } ] }", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2", "export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), 
'-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterPublicIpAddressName\" : \"[concat(parameters('baseName'), '-master-pip')]\", \"masterPublicIpAddressID\" : \"[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"masterLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"internalLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]\", \"skuName\": \"Standard\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('masterPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('masterPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('masterLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]\" ], \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"public-lb-ip\", \"properties\" : { \"publicIPAddress\" : { \"id\" : \"[variables('masterPublicIpAddressID')]\" } } } ], \"backendAddressPools\" : [ { \"name\" : \"public-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" :\"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip')]\" }, \"backendAddressPool\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/backendAddressPools/public-lb-backend')]\" }, \"protocol\" : \"Tcp\", \"loadDistribution\" : \"Default\", \"idleTimeoutInMinutes\" : 30, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"probe\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('internalLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"internal-lb-ip\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"privateIPAddressVersion\" : \"IPv4\" } } ], \"backendAddressPools\" : [ { \"name\" : \"internal-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, 
\"frontendPort\" : 6443, \"backendPort\" : 6443, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]\" } } }, { \"name\" : \"sint\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 22623, \"backendPort\" : 22623, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } }, { \"name\" : \"sint-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 22623, \"requestPath\": \"/healthz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api-int')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } } ] }", "export BOOTSTRAP_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -n \"bootstrap.ign\" -o tsv`", "export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters sshKeyData=\"USD{SSH_KEY}\" \\ 2 --parameters baseName=\"USD{INFRA_ID}\" 3", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"bootstrapIgnition\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Bootstrap ignition content for the bootstrap cluster\" } }, \"sshKeyData\" : { \"type\" : 
\"securestring\", \"metadata\" : { \"description\" : \"SSH RSA public key file as a string.\" } }, \"bootstrapVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"allowedValues\" : [ \"Standard_A2\", \"Standard_A3\", \"Standard_A4\", \"Standard_A5\", \"Standard_A6\", \"Standard_A7\", \"Standard_A8\", \"Standard_A9\", \"Standard_A10\", \"Standard_A11\", \"Standard_D2\", \"Standard_D3\", \"Standard_D4\", \"Standard_D11\", \"Standard_D12\", \"Standard_D13\", \"Standard_D14\", \"Standard_D2_v2\", \"Standard_D3_v2\", \"Standard_D4_v2\", \"Standard_D5_v2\", \"Standard_D8_v3\", \"Standard_D11_v2\", \"Standard_D12_v2\", \"Standard_D13_v2\", \"Standard_D14_v2\", \"Standard_E2_v3\", \"Standard_E4_v3\", \"Standard_E8_v3\", \"Standard_E16_v3\", \"Standard_E32_v3\", \"Standard_E64_v3\", \"Standard_E2s_v3\", \"Standard_E4s_v3\", \"Standard_E8s_v3\", \"Standard_E16s_v3\", \"Standard_E32s_v3\", \"Standard_E64s_v3\", \"Standard_G1\", \"Standard_G2\", \"Standard_G3\", \"Standard_G4\", \"Standard_G5\", \"Standard_DS2\", \"Standard_DS3\", \"Standard_DS4\", \"Standard_DS11\", \"Standard_DS12\", \"Standard_DS13\", \"Standard_DS14\", \"Standard_DS2_v2\", \"Standard_DS3_v2\", \"Standard_DS4_v2\", \"Standard_DS5_v2\", \"Standard_DS11_v2\", \"Standard_DS12_v2\", \"Standard_DS13_v2\", \"Standard_DS14_v2\", \"Standard_GS1\", \"Standard_GS2\", \"Standard_GS3\", \"Standard_GS4\", \"Standard_GS5\", \"Standard_D2s_v3\", \"Standard_D4s_v3\", \"Standard_D8s_v3\" ], \"metadata\" : { \"description\" : \"The size of the Bootstrap Virtual Machine\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"vmName\" : \"[concat(parameters('baseName'), '-bootstrap')]\", \"nicName\" : \"[concat(variables('vmName'), '-nic')]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\", \"sshPublicIpAddressName\" : \"[concat(variables('vmName'), '-ssh-pip')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('sshPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"Standard\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('sshPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[variables('nicName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" ], \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"publicIPAddress\": { \"id\": \"[resourceId('Microsoft.Network/publicIPAddresses', 
variables('sshPublicIpAddressName'))]\" }, \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmName')]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('bootstrapVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmName')]\", \"adminUsername\" : \"core\", \"customData\" : \"[parameters('bootstrapIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : true, \"ssh\" : { \"publicKeys\" : [ { \"path\" : \"[variables('sshKeyPath')]\", \"keyData\" : \"[parameters('sshKeyData')]\" } ] } } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/images', variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmName'),'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : 100 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]\" } ] } } }, { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Network/networkSecurityGroups/securityRules\", \"name\" : \"[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]\" ], \"properties\": { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"22\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 100, \"direction\" : \"Inbound\" } } ] }", "export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters sshKeyData=\"USD{SSH_KEY}\" \\ 2 --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 3 --parameters baseName=\"USD{INFRA_ID}\" 4", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"masterIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the master nodes\" } }, \"numberOfMasters\" : { \"type\" : \"int\", 
\"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift masters to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"metadata\" : { \"description\" : \"SSH RSA public key file as a string\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone the master nodes are going to be attached to\" } }, \"masterVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D8s_v3\", \"allowedValues\" : [ \"Standard_A2\", \"Standard_A3\", \"Standard_A4\", \"Standard_A5\", \"Standard_A6\", \"Standard_A7\", \"Standard_A8\", \"Standard_A9\", \"Standard_A10\", \"Standard_A11\", \"Standard_D2\", \"Standard_D3\", \"Standard_D4\", \"Standard_D11\", \"Standard_D12\", \"Standard_D13\", \"Standard_D14\", \"Standard_D2_v2\", \"Standard_D3_v2\", \"Standard_D4_v2\", \"Standard_D5_v2\", \"Standard_D8_v3\", \"Standard_D11_v2\", \"Standard_D12_v2\", \"Standard_D13_v2\", \"Standard_D14_v2\", \"Standard_E2_v3\", \"Standard_E4_v3\", \"Standard_E8_v3\", \"Standard_E16_v3\", \"Standard_E32_v3\", \"Standard_E64_v3\", \"Standard_E2s_v3\", \"Standard_E4s_v3\", \"Standard_E8s_v3\", \"Standard_E16s_v3\", \"Standard_E32s_v3\", \"Standard_E64s_v3\", \"Standard_G1\", \"Standard_G2\", \"Standard_G3\", \"Standard_G4\", \"Standard_G5\", \"Standard_DS2\", \"Standard_DS3\", \"Standard_DS4\", \"Standard_DS11\", \"Standard_DS12\", \"Standard_DS13\", \"Standard_DS14\", \"Standard_DS2_v2\", \"Standard_DS3_v2\", \"Standard_DS4_v2\", \"Standard_DS5_v2\", \"Standard_DS11_v2\", \"Standard_DS12_v2\", \"Standard_DS13_v2\", \"Standard_DS14_v2\", \"Standard_GS1\", \"Standard_GS2\", \"Standard_GS3\", \"Standard_GS4\", \"Standard_GS5\", \"Standard_D2s_v3\", \"Standard_D4s_v3\", \"Standard_D8s_v3\" ], \"metadata\" : { \"description\" : \"The size of the Master Virtual Machines\" } }, \"diskSizeGB\" : { \"type\" : \"int\", \"defaultValue\" : 1024, \"metadata\" : { \"description\" : \"Size of the Master VM OS disk, in GB\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfMasters')]\", \"input\" : \"[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"copy\" : { \"name\" : \"nicCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" 
: [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/SRV\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/_etcd-server-ssl._tcp')]\", \"location\" : \"[variables('location')]\", \"properties\": { \"ttl\": 60, \"copy\": [{ \"name\": \"srvRecords\", \"count\": \"[length(variables('vmNames'))]\", \"input\": { \"priority\": 0, \"weight\" : 10, \"port\" : 2380, \"target\" : \"[concat('etcd-', copyIndex('srvRecords'), '.', parameters('privateDNSZoneName'))]\" } }] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"copy\" : { \"name\" : \"dnsCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\": \"[concat(parameters('privateDNSZoneName'), '/etcd-', copyIndex())]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(concat(variables('vmNames')[copyIndex()], '-nic')).ipConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"copy\" : { \"name\" : \"vmCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/A/etcd-', copyIndex())]\", \"[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/SRV/_etcd-server-ssl._tcp')]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('masterVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"core\", \"customData\" : \"[parameters('masterIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : true, \"ssh\" : { \"publicKeys\" : [ { \"path\" : \"[variables('sshKeyPath')]\", \"keyData\" : \"[parameters('sshKeyData')]\" } ] } } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/images', variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()], '_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"caching\": \"ReadOnly\", \"writeAcceleratorEnabled\": false, \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : \"[parameters('diskSizeGB')]\" } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": false } } ] } } } ] }", 
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters sshKeyData=\"USD{SSH_KEY}\" \\ 2 --parameters baseName=\"USD{INFRA_ID}\" 3", "{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"workerIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the worker nodes\" } }, \"numberOfNodes\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift compute nodes to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"metadata\" : { \"description\" : \"SSH RSA public key file as a string\" } }, \"nodeVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"allowedValues\" : [ \"Standard_A2\", \"Standard_A3\", \"Standard_A4\", \"Standard_A5\", \"Standard_A6\", \"Standard_A7\", \"Standard_A8\", \"Standard_A9\", \"Standard_A10\", \"Standard_A11\", \"Standard_D2\", \"Standard_D3\", \"Standard_D4\", \"Standard_D11\", \"Standard_D12\", \"Standard_D13\", \"Standard_D14\", \"Standard_D2_v2\", \"Standard_D3_v2\", \"Standard_D4_v2\", \"Standard_D5_v2\", \"Standard_D8_v3\", \"Standard_D11_v2\", \"Standard_D12_v2\", \"Standard_D13_v2\", \"Standard_D14_v2\", \"Standard_E2_v3\", \"Standard_E4_v3\", \"Standard_E8_v3\", \"Standard_E16_v3\", \"Standard_E32_v3\", \"Standard_E64_v3\", \"Standard_E2s_v3\", \"Standard_E4s_v3\", \"Standard_E8s_v3\", \"Standard_E16s_v3\", \"Standard_E32s_v3\", \"Standard_E64s_v3\", \"Standard_G1\", \"Standard_G2\", \"Standard_G3\", \"Standard_G4\", \"Standard_G5\", \"Standard_DS2\", \"Standard_DS3\", \"Standard_DS4\", \"Standard_DS11\", \"Standard_DS12\", \"Standard_DS13\", \"Standard_DS14\", \"Standard_DS2_v2\", \"Standard_DS3_v2\", \"Standard_DS4_v2\", \"Standard_DS5_v2\", \"Standard_DS11_v2\", \"Standard_DS12_v2\", \"Standard_DS13_v2\", \"Standard_DS14_v2\", \"Standard_GS1\", \"Standard_GS2\", \"Standard_GS3\", \"Standard_GS4\", \"Standard_GS5\", \"Standard_D2s_v3\", \"Standard_D4s_v3\", \"Standard_D8s_v3\" ], \"metadata\" : { \"description\" : \"The size of the each Node Virtual Machine\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), 
'-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]\", \"infraLoadBalancerName\" : \"[parameters('baseName')]\", \"sshKeyPath\" : \"/home/capi/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfNodes')]\", \"input\" : \"[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2019-05-01\", \"name\" : \"[concat('node', copyIndex())]\", \"type\" : \"Microsoft.Resources/deployments\", \"copy\" : { \"name\" : \"nodeCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"properties\" : { \"mode\" : \"Incremental\", \"template\" : { \"USDschema\" : \"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('nodeSubnetRef')]\" } } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"tags\" : { \"kubernetes.io-cluster-ffranzupi\": \"owned\" }, \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('nodeVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"capi\", \"customData\" : \"[parameters('workerIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : true, \"ssh\" : { \"publicKeys\" : [ { \"path\" : \"[variables('sshKeyPath')]\", \"keyData\" : \"[parameters('sshKeyData')]\" } ] } } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/images', variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()],'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\": 128 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": true } } ] } } } ] } } } ] }", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 
64m v1.22.1", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20", "export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300", "az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300", "az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com grafana-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/installing/installing-on-azure
Chapter 9. Intrusion Detection
Chapter 9. Intrusion Detection Valuable property needs to be protected from the prospect of theft and destruction. Some homes are equipped with alarm systems that can deter burglars, notify authorities when a break-in has occurred, and even warn owners when their home is on fire. Such measures are necessary to ensure the integrity of homes and the safety of homeowners. The same assurance of integrity and safety should also be applied to computer systems and data. The Internet has facilitated the flow of information, from personal to financial. At the same time, it has fostered just as many dangers. Malicious users and crackers seek vulnerable targets such as unpatched systems, systems infected with trojans, and networks running insecure services. Alarms are needed to notify administrators and security team members that a breach has taken place so that they can respond in real time to the threat. Intrusion detection systems have been designed as such a warning system. 9.1. Defining Intrusion Detection Systems An intrusion detection system (IDS) is an active process or device that analyzes system and network activity for unauthorized entry and/or malicious activity. The way that an IDS detects anomalies can vary widely; however, the ultimate aim of any IDS is to catch perpetrators in the act before they do real damage to resources. An IDS protects a system from attack, misuse, and compromise. It can also monitor network activity, audit network and system configurations for vulnerabilities, analyze data integrity, and more. Depending on the detection methods you choose to deploy, there are several direct and incidental benefits to using an IDS. 9.1.1. IDS Types Understanding what an IDS is, and the functions it provides, is key in determining what type is appropriate to include in a computer security policy. This section discusses the concepts behind IDSes, the functionality of each type of IDS, and the emergence of hybrid IDSes that employ several detection techniques and tools in one package. Some IDSes are knowledge-based, which use a database of common attacks to alert security administrators preemptively, before an intrusion occurs. Alternatively, there are behavioral-based IDSes that track all resource usage for anomalies, which are usually a sign of malicious activity. Some IDSes are standalone services that work in the background and passively listen for activity, logging any suspicious packets from the outside. Others combine standard system tools, modified configurations, and verbose logging with administrator intuition and experience to create a powerful intrusion detection kit. Evaluating the many intrusion detection techniques can assist in finding one that is right for your organization. The most common types of IDSes referred to in the security field are known as host-based and network-based IDSes. A host-based IDS is the more comprehensive of the two and involves implementing a detection system on each individual host; regardless of which network environment the host resides in, it is still protected. A network-based IDS funnels packets through a single device before they are sent on to specific hosts. Network-based IDSes are often regarded as less comprehensive because, in an environment with many mobile hosts, reliable network packet screening and protection cannot be guaranteed for every host.
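To make the host-based approach concrete, the simplest form of host-based intrusion detection is file-integrity checking. The commands below are only a minimal sketch using standard shell utilities, and the paths and baseline file location are examples; a production deployment would use a dedicated integrity-checking tool such as Tripwire or AIDE rather than this ad-hoc approach.
# Record a baseline of checksums for critical system binaries (example paths only).
sha256sum /bin/* /sbin/* > /var/lib/ids-baseline.sha256
# Later, report any file whose contents no longer match the recorded baseline.
sha256sum -c /var/lib/ids-baseline.sha256 2>/dev/null | grep -v ': OK$'
Scheduling the second command from cron and mailing its output to an administrator turns this into a crude host-based alarm of the kind described above.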
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/ch-detection
Chapter 7. AWS Redshift Sink
Chapter 7. AWS Redshift Sink Send data to an AWS Redshift Database. This Kamelet expects a JSON object as the message body. The mapping between the JSON fields and parameters is done by key, so if you have the following query: 'INSERT INTO accounts (username,city) VALUES (:#username,:#city)' the Kamelet needs to receive as input something like: '{ "username":"oscerd", "city":"Rome"}' 7.1. Configuration Options The following table summarizes the configuration options available for the aws-redshift-sink Kamelet: Property Name Description Type Default Example databaseName * Database Name The database name we are pointing to string password * Password The password to use for accessing a secured AWS Redshift Database string query * Query The Query to execute against the AWS Redshift Database string "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName * Server Name Server Name for the data source string "localhost" username * Username The username to use for accessing a secured AWS Redshift Database string serverPort Server Port Server Port for the data source string 5439 Note Fields marked with an asterisk (*) are mandatory. 7.2. Dependencies At runtime, the aws-redshift-sink Kamelet relies upon the presence of the following dependencies: camel:jackson camel:kamelet camel:sql mvn:com.amazon.redshift:redshift-jdbc42:2.1.0.5 mvn:org.apache.commons:commons-dbcp2:2.7.0 7.3. Usage This section describes how you can use the aws-redshift-sink. 7.3.1. Knative Sink You can use the aws-redshift-sink Kamelet as a Knative sink by binding it to a Knative object. aws-redshift-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-redshift-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-redshift-sink properties: databaseName: "The Database Name" password: "The Password" query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName: "localhost" username: "The Username" 7.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 7.3.1.2. Procedure for using the cluster CLI Save the aws-redshift-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-redshift-sink-binding.yaml 7.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel aws-redshift-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 7.3.2. Kafka Sink You can use the aws-redshift-sink Kamelet as a Kafka sink by binding it to a Kafka topic. aws-redshift-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-redshift-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-redshift-sink properties: databaseName: "The Database Name" password: "The Password" query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName: "localhost" username: "The Username" 7.3.2.1.
Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 7.3.2.2. Procedure for using the cluster CLI Save the aws-redshift-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-redshift-sink-binding.yaml 7.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-redshift-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 7.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-redshift-sink.kamelet.yaml
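As a quick way to exercise the Knative binding, you can post a CloudEvent whose JSON body matches the named query parameters directly to the source channel. The following sketch is illustrative only: it assumes an InMemoryChannel named mychannel whose in-cluster address follows the usual <channel>-kn-channel service naming, so it must be run from a pod inside the cluster; check the channel's status.address.url for the actual endpoint, and treat the Ce-Type and Ce-Source values as arbitrary placeholders.
# Hypothetical smoke test: send the expected JSON body as a CloudEvent to the channel.
curl -v "http://mychannel-kn-channel.<namespace>.svc.cluster.local" \
  -X POST \
  -H "Ce-Id: test-0001" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: example.accounts.insert" \
  -H "Ce-Source: manual-test" \
  -H "Content-Type: application/json" \
  -d '{ "username":"oscerd", "city":"Rome"}'
If the binding is healthy, a corresponding row should appear in the accounts table of the target Redshift database.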
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-redshift-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-redshift-sink properties: databaseName: \"The Database Name\" password: \"The Password\" query: \"INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" serverName: \"localhost\" username: \"The Username\"", "apply -f aws-redshift-sink-binding.yaml", "kamel bind channel:mychannel aws-redshift-sink -p \"sink.databaseName=The Database Name\" -p \"sink.password=The Password\" -p \"sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" -p \"sink.serverName=localhost\" -p \"sink.username=The Username\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-redshift-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-redshift-sink properties: databaseName: \"The Database Name\" password: \"The Password\" query: \"INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" serverName: \"localhost\" username: \"The Username\"", "apply -f aws-redshift-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-redshift-sink -p \"sink.databaseName=The Database Name\" -p \"sink.password=The Password\" -p \"sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" -p \"sink.serverName=localhost\" -p \"sink.username=The Username\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/aws-redshift-sink
Chapter 55. Using Ansible to delegate authentication for IdM users to external identity providers
Chapter 55. Using Ansible to delegate authentication for IdM users to external identity providers You can use the idp ansible-freeipa module to associate users with external identity providers (IdPs) that support the OAuth 2 device authorization flow. If an IdP reference and an associated IdP user ID exist, you can use them to enable IdP authentication for an IdM user with the user ansible-freeipa module. Afterward, if these users authenticate with the SSSD version available in RHEL 9.1 or later, they receive RHEL Identity Management (IdM) single sign-on capabilities with Kerberos tickets after performing authentication and authorization at the external IdP. 55.1. The benefits of connecting IdM to an external IdP As an administrator, you might want to allow users stored in an external identity source, such as a cloud services provider, to access RHEL systems joined to your Identity Management (IdM) environment. To achieve this, you can delegate the authentication and authorization process of issuing Kerberos tickets for these users to that external entity. You can use this feature to expand IdM's capabilities and allow users stored in external identity providers (IdPs) to access Linux systems managed by IdM. 55.2. How IdM incorporates logins via external IdPs SSSD 2.7.0 contains the sssd-idp package, which implements the idp Kerberos pre-authentication method. This authentication method follows the OAuth 2.0 Device Authorization Grant flow to delegate authorization decisions to external IdPs: An IdM client user initiates the OAuth 2.0 Device Authorization Grant flow, for example, by attempting to retrieve a Kerberos TGT with the kinit utility at the command line. A special code and website link are sent from the Authorization Server to the IdM KDC backend. The IdM client displays the link and the code to the user. In this example, the IdM client outputs the link and code on the command line. The user opens the website link in a browser, which can be on another host, a mobile phone, and so on. The user enters the special code. If necessary, the user logs in to the OAuth 2.0-based IdP. The user is prompted to authorize the client to access information. The user confirms access at the original device prompt. In this example, the user presses the Enter key at the command line. The IdM KDC backend polls the OAuth 2.0 Authorization Server for access to user information. What is supported: Logging in remotely via SSH with the keyboard-interactive authentication method enabled, which allows calling Pluggable Authentication Module (PAM) libraries. Logging in locally with the console via the logind service. Retrieving a Kerberos ticket-granting ticket (TGT) with the kinit utility. What is currently not supported: Logging in to the IdM WebUI directly. To log in to the IdM WebUI, you must first acquire a Kerberos ticket. Logging in to the Cockpit WebUI directly. To log in to the Cockpit WebUI, you must first acquire a Kerberos ticket. Additional resources Authentication against external Identity Providers RFC 8628: OAuth 2.0 Device Authorization Grant 55.3. Using Ansible to create a reference to an external identity provider To connect external identity providers (IdPs) to your Identity Management (IdM) environment, create IdP references in IdM. Complete this procedure to use the idp ansible-freeipa module to configure a reference to the github external IdP.
Prerequisites You have registered IdM as an OAuth application to your external IdP, and generated a client ID and client secret on the device that an IdM user will be using to authenticate to IdM. The example assumes that: my_github_account_name is the github user whose account the IdM user will be using to authenticate to IdM. The client ID is 2efe1acffe9e8ab869f4. The client secret is 656a5228abc5f9545c85fa626aecbf69312d398c. Your IdM servers are using RHEL 9.1 or later. Your IdM servers are using SSSD 2.7.0 or later. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package on the Ansible controller. You are using RHEL 9.4 or later. The example assumes that in the ~/MyPlaybooks/ directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password. Procedure On your Ansible control node, create a configure-external-idp-reference.yml playbook: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification On an IdM client, verify that the output of the ipa idp-show command shows the IdP reference you have created. Next steps Using Ansible to enable an IdM user to authenticate via an external IdP Additional resources The idp ansible-freeipa upstream documentation 55.4. Using Ansible to enable an IdM user to authenticate via an external IdP You can use the user ansible-freeipa module to enable an Identity Management (IdM) user to authenticate via an external identity provider (IdP). To do that, associate the external IdP reference you have previously created with the IdM user account. Complete this procedure to use Ansible to associate an external IdP reference named github_idp with the IdM user named idm-user-with-external-idp. As a result of the procedure, the user is able to use the my_github_account_name github identity to authenticate as idm-user-with-external-idp to IdM. Prerequisites Your IdM client and IdM servers are using RHEL 9.1 or later. Your IdM client and IdM servers are using SSSD 2.7.0 or later. You have created a reference to an external IdP in IdM. See Using Ansible to create a reference to an external identity provider. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package on the Ansible controller. You are using RHEL 9.4 or later. The example assumes that in the ~/MyPlaybooks/ directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password. Procedure On your Ansible control node, create an enable-user-to-authenticate-via-external-idp.yml playbook: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification Log in to an IdM client and verify that the output of the ipa user-show command for the idm-user-with-external-idp user displays references to the IdP: Additional resources The idp ansible-freeipa upstream documentation 55.5.
Retrieving an IdM ticket-granting ticket as an external IdP user If you have delegated authentication for an Identity Management (IdM) user to an external identity provider (IdP), the IdM user can request a Kerberos ticket-granting ticket (TGT) by authenticating to the external IdP. Complete this procedure to: Retrieve and store an anonymous Kerberos ticket locally. Request the TGT for the idm-user-with-external-idp user by using kinit with the -T option to enable a Flexible Authentication via Secure Tunneling (FAST) channel, which provides a secure connection between the Kerberos client and the Key Distribution Center (KDC). Prerequisites Your IdM client and IdM servers use RHEL 9.1 or later. Your IdM client and IdM servers use SSSD 2.7.0 or later. You have created a reference to an external IdP in IdM. See Using Ansible to create a reference to an external identity provider. You have associated an external IdP reference with the user account. See Using Ansible to enable an IdM user to authenticate via an external IdP. The user that you are initially logged in as has write permissions on a directory in the local filesystem. Procedure Use Anonymous PKINIT to obtain a Kerberos ticket and store it in a file named ./fast.ccache. Optional: View the retrieved ticket: Begin authenticating as the IdM user, using the -T option to enable the FAST communication channel. In a browser, authenticate as the user at the website provided in the command output. At the command line, press the Enter key to finish the authentication process. Verification Display your Kerberos ticket information and confirm that the line config: pa_type shows 152 for pre-authentication with an external IdP. The pa_type = 152 indicates external IdP authentication. 55.6. Logging in to an IdM client via SSH as an external IdP user To log in to an IdM client via SSH as an external identity provider (IdP) user, begin the login process on the command line. When prompted, perform the authentication process at the website associated with the IdP, and finish the process at the Identity Management (IdM) client. Prerequisites Your IdM client and IdM servers are using RHEL 9.1 or later. Your IdM client and IdM servers are using SSSD 2.7.0 or later. You have created a reference to an external IdP in IdM. See Using Ansible to create a reference to an external identity provider. You have associated an external IdP reference with the user account. See Using Ansible to enable an IdM user to authenticate via an external IdP. Procedure Attempt to log in to the IdM client via SSH. In a browser, authenticate as the user at the website provided in the command output. At the command line, press the Enter key to finish the authentication process. Verification Display your Kerberos ticket information and confirm that the line config: pa_type shows 152 for pre-authentication with an external IdP. 55.7. The provider option in the ipaidp Ansible module The following identity providers (IdPs) support the OAuth 2.0 device authorization grant flow: Microsoft Identity Platform, including Azure AD Google GitHub Keycloak, including Red Hat Single Sign-On (SSO) Okta When using the idp ansible-freeipa module to create a reference to one of these external IdPs, you can specify the IdP type with the provider option in your ipaidp ansible-freeipa playbook task, which expands into additional options as described below: provider: microsoft Microsoft Azure IdPs allow parametrization based on the Azure tenant ID, which you can specify with the organization option.
If you need support for the live.com IdP, specify the option organization common. Choosing provider: microsoft expands to use the following options. The value of the organization option replaces the string USD{ipaidporg} in the table. Option Value auth_uri: URI https://login.microsoftonline.com/USD{ipaidporg}/oauth2/v2.0/authorize dev_auth_uri: URI https://login.microsoftonline.com/USD{ipaidporg}/oauth2/v2.0/devicecode token_uri: URI https://login.microsoftonline.com/USD{ipaidporg}/oauth2/v2.0/token userinfo_uri: URI https://graph.microsoft.com/oidc/userinfo keys_uri: URI https://login.microsoftonline.com/common/discovery/v2.0/keys scope: STR openid email idp_user_id: STR email provider: google Choosing provider: google expands to use the following options: Option Value auth_uri: URI https://accounts.google.com/o/oauth2/auth dev_auth_uri: URI https://oauth2.googleapis.com/device/code token_uri: URI https://oauth2.googleapis.com/token userinfo_uri: URI https://openidconnect.googleapis.com/v1/userinfo keys_uri: URI https://www.googleapis.com/oauth2/v3/certs scope: STR openid email idp_user_id: STR email provider: github Choosing provider: github expands to use the following options: Option Value auth_uri: URI https://github.com/login/oauth/authorize dev_auth_uri: URI https://github.com/login/device/code token_uri: URI https://github.com/login/oauth/access_token userinfo_uri: URI https://openidconnect.googleapis.com/v1/userinfo keys_uri: URI https://api.github.com/user scope: STR user idp_user_id: STR login provider: keycloak With Keycloak, you can define multiple realms or organizations. Since it is often a part of a custom deployment, both the base URL and the realm ID are required, and you can specify them with the base_url and organization options in your ipaidp playbook task: Choosing provider: keycloak expands to use the following options. The value you specify in the base_url option replaces the string USD{ipaidpbaseurl} in the table, and the value you specify for the organization option replaces the string USD{ipaidporg}. Option Value auth_uri: URI https://USD{ipaidpbaseurl}/realms/USD{ipaidporg}/protocol/openid-connect/auth dev_auth_uri: URI https://USD{ipaidpbaseurl}/realms/USD{ipaidporg}/protocol/openid-connect/auth/device token_uri: URI https://USD{ipaidpbaseurl}/realms/USD{ipaidporg}/protocol/openid-connect/token userinfo_uri: URI https://USD{ipaidpbaseurl}/realms/USD{ipaidporg}/protocol/openid-connect/userinfo scope: STR openid email idp_user_id: STR email provider: okta After you register a new organization in Okta, a new base URL is associated with it. You can specify this base URL with the base_url option in the ipaidp playbook task: Choosing provider: okta expands to use the following options. The value you specify for the base_url option replaces the string USD{ipaidpbaseurl} in the table. Option Value auth_uri: URI https://USD{ipaidpbaseurl}/oauth2/v1/authorize dev_auth_uri: URI https://USD{ipaidpbaseurl}/oauth2/v1/device/authorize token_uri: URI https://USD{ipaidpbaseurl}/oauth2/v1/token userinfo_uri: URI https://USD{ipaidpbaseurl}/oauth2/v1/userinfo scope: STR openid email idp_user_id: STR email Additional resources Pre-populated IdP templates
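The dev_auth_uri and token_uri values in these tables are the endpoints that the IdM KDC contacts during the device authorization grant described earlier in this chapter. Purely as an illustration of that protocol, and not as a step in any IdM procedure, the same exchange can be reproduced manually with curl against the GitHub endpoints listed above; <client_id> is the OAuth application client ID registered with the IdP, <device_code> comes from the first response, and the second request is normally repeated at the polling interval that the first response returns.
# Ask the IdP for a device code and a user code (illustrative only).
curl -s -X POST -H "Accept: application/json" \
  -d "client_id=<client_id>" -d "scope=user" \
  https://github.com/login/device/code
# After the user enters the user_code at the verification URI, poll for the access token.
curl -s -X POST -H "Accept: application/json" \
  -d "client_id=<client_id>" -d "device_code=<device_code>" \
  -d "grant_type=urn:ietf:params:oauth:grant-type:device_code" \
  https://github.com/login/oauth/access_token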
[ "--- - name: Configure external IdP hosts: ipaserver become: false gather_facts: false tasks: - name: Ensure a reference to github external provider is available ipaidp: ipaadmin_password: \"{{ ipaadmin_password }}\" name: github_idp provider: github client_ID: 2efe1acffe9e8ab869f4 secret: 656a5228abc5f9545c85fa626aecbf69312d398c idp_user_id: my_github_account_name", "ansible-playbook --vault-password-file=password_file -v -i inventory configure-external-idp-reference.yml", "[idmuser@idmclient ~]USD ipa idp-show github_idp", "--- - name: Ensure an IdM user uses an external IdP to authenticate to IdM hosts: ipaserver become: false gather_facts: false tasks: - name: Retrieve Github user ID ansible.builtin.uri: url: \"https://api.github.com/users/my_github_account_name\" method: GET headers: Accept: \"application/vnd.github.v3+json\" register: user_data - name: Ensure IdM user exists with an external IdP authentication ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idm-user-with-external-idp first: Example last: User userauthtype: idp idp: github_idp idp_user_id: my_github_account_name", "ansible-playbook --vault-password-file=password_file -v -i inventory enable-user-to-authenticate-via-external-idp.yml", "ipa user-show idm-user-with-external-idp User login: idm-user-with-external-idp First name: Example Last name: User Home directory: /home/idm-user-with-external-idp Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] ID: 35000003 GID: 35000003 User authentication types: idp External IdP configuration: github External IdP user identifier: [email protected] Account disabled: False Password: False Member of groups: ipausers Kerberos keys available: False", "kinit -n -c ./fast.ccache", "klist -c fast.ccache Ticket cache: FILE:fast.ccache Default principal: WELLKNOWN/ANONYMOUS@WELLKNOWN:ANONYMOUS Valid starting Expires Service principal 03/03/2024 13:36:37 03/04/2024 13:14:28 krbtgt/[email protected]", "kinit -T ./fast.ccache idm-user-with-external-idp Authenticate at https://oauth2.idp.com:8443/auth/realms/master/device?user_code=YHMQ-XKTL and press ENTER.:", "klist -C Ticket cache: KCM:0:58420 Default principal: [email protected] Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 152", "[user@client ~]USD ssh [email protected] ([email protected]) Authenticate at https://oauth2.idp.com:8443/auth/realms/main/device?user_code=XYFL-ROYR and press ENTER.", "[idm-user-with-external-idp@client ~]USD klist -C Ticket cache: KCM:0:58420 Default principal: [email protected] Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 152", "--- - name: Playbook to manage IPA idp hosts: ipaserver become: false tasks: - name: Ensure keycloak idp my-keycloak-idp is present using provider ipaidp: ipaadmin_password: \"{{ ipaadmin_password }}\" name: my-keycloak-idp provider: keycloak organization: main base_url: keycloak.domain.com:8443/auth client_id: my-keycloak-client-id", "--- - name: Playbook to manage IPA idp hosts: ipaserver become: false tasks: - name: Ensure okta idp my-okta-idp is present using provider 
ipaidp: ipaadmin_password: \"{{ ipaadmin_password }}\" name: my-okta-idp provider: okta base_url: dev-12345.okta.com client_id: my-okta-client-id" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/using-ansible-to-delegate-authentication-for-idm-users-to-external-identity-providers_managing-users-groups-hosts