Deploying OpenShift Data Foundation using Microsoft Azure
Deploying OpenShift Data Foundation using Microsoft Azure Red Hat OpenShift Data Foundation 4.15 Instructions on deploying OpenShift Data Foundation using Microsoft Azure Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install and manage Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Microsoft Azure.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_microsoft_azure/index
Chapter 9. Preinstallation validations
Chapter 9. Preinstallation validations 9.1. Definition of preinstallation validations The Assisted Installer aims to make cluster installation as simple, efficient, and error-free as possible. Before starting an installation, the Assisted Installer performs validation checks on the configuration and the gathered telemetry. It uses the information provided before installation, such as control plane topology, network configuration, and hostnames, together with real-time telemetry from the hosts you are attempting to install. When a host boots the discovery ISO, an agent starts on the host and sends information about the state of the host to the Assisted Installer. The Assisted Installer uses all of this information to compute real-time preinstallation validations. All validations are either blocking or non-blocking to the installation. 9.2. Blocking and non-blocking validations A blocking validation prevents the installation from progressing: you must resolve the issue and pass the validation before you can proceed. A non-blocking validation is a warning about conditions that might cause problems later. 9.3. Validation types The Assisted Installer performs two types of validation: Host Host validations ensure that the configuration of a given host is valid for installation. Cluster Cluster validations ensure that the configuration of the whole cluster is valid for installation. 9.4. Host validations 9.4.1. Getting host validations by using the REST API Note If you use the web console, many of these validations will not show up by name. To get a list of validations consistent with the labels, use the following procedure. Prerequisites You have installed the jq utility. You have created an Infrastructure Environment by using the API or have created a cluster by using the web console. You have hosts booted with the discovery ISO. You have your Cluster ID exported in your shell as CLUSTER_ID. You have credentials to use when accessing the API and have exported a token as API_TOKEN in your shell. Procedure Refresh the API token:

$ source refresh-token

Get all validations for all hosts:

$ curl \
  --silent \
  --header "Authorization: Bearer $API_TOKEN" \
  https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/hosts \
  | jq -r .[].validations_info \
  | jq 'map(.[])'

Get non-passing validations for all hosts:

$ curl \
  --silent \
  --header "Authorization: Bearer $API_TOKEN" \
  https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/hosts \
  | jq -r .[].validations_info \
  | jq 'map(.[]) | map(select(.status=="failure" or .status=="pending")) | select(length>0)'

9.4.2. Host validations in detail

Parameter | Validation type | Description
connected | non-blocking | Checks that the host has recently communicated with the Assisted Installer.
has-inventory | non-blocking | Checks that the Assisted Installer received the inventory from the host.
has-min-cpu-cores | non-blocking | Checks that the number of CPU cores meets the minimum requirements.
has-min-memory | non-blocking | Checks that the amount of memory meets the minimum requirements.
has-min-valid-disks | non-blocking | Checks that at least one available disk meets the eligibility criteria.
has-cpu-cores-for-role | blocking | Checks that the number of cores meets the minimum requirements for the host role.
has-memory-for-role | blocking | Checks that the amount of memory meets the minimum requirements for the host role.
ignition-downloadable | blocking | For Day 2 hosts, checks that the host can download ignition configuration from the Day 1 cluster.
belongs-to-majority-group | blocking | The majority group is the largest full-mesh connectivity group on the cluster, where all members can communicate with all other members. This validation checks that hosts in a multi-node, Day 1 cluster are in the majority group.
valid-platform-network-settings | blocking | Checks that the platform is valid for the network settings.
ntp-synced | non-blocking | Checks if an NTP server has been successfully used to synchronize time on the host.
container-images-available | non-blocking | Checks if container images have been successfully pulled from the image registry.
sufficient-installation-disk-speed | blocking | Checks that disk speed metrics from an earlier installation meet requirements, if they exist.
sufficient-network-latency-requirement-for-role | blocking | Checks that the average network latency between hosts in the cluster meets the requirements.
sufficient-packet-loss-requirement-for-role | blocking | Checks that the network packet loss between hosts in the cluster meets the requirements.
has-default-route | blocking | Checks that the host has a default route configured.
api-domain-name-resolved-correctly | blocking | For a multi-node cluster with user-managed networking, checks that the host is able to resolve the API domain name for the cluster.
api-int-domain-name-resolved-correctly | blocking | For a multi-node cluster with user-managed networking, checks that the host is able to resolve the internal API domain name for the cluster.
apps-domain-name-resolved-correctly | blocking | For a multi-node cluster with user-managed networking, checks that the host is able to resolve the internal apps domain name for the cluster.
compatible-with-cluster-platform | non-blocking | Checks that the host is compatible with the cluster platform.
dns-wildcard-not-configured | blocking | Checks that the wildcard DNS *.<cluster_name>.<base_domain> is not configured, because this causes known problems for OpenShift.
disk-encryption-requirements-satisfied | non-blocking | Checks that the type of host and disk encryption configured meet the requirements.
non-overlapping-subnets | blocking | Checks that this host does not have any overlapping subnets.
hostname-unique | blocking | Checks that the hostname is unique in the cluster.
hostname-valid | blocking | Checks the validity of the hostname, meaning that it matches the general form of hostnames and is not forbidden. The hostname must have 63 characters or less, must start and end with a lowercase alphanumeric character, and must contain only lowercase alphanumeric characters, dashes, and periods.
belongs-to-machine-cidr | blocking | Checks that the host IP is in the address range of the machine CIDR.
lso-requirements-satisfied | blocking | Validates that the host meets the requirements of the Local Storage Operator.
odf-requirements-satisfied | blocking | Validates that the host meets the requirements of the OpenShift Data Foundation Operator. Each host running ODF workloads (control plane nodes in compact mode, compute nodes in standard mode) requires an eligible disk, which is a disk with at least 25GB that is not the installation disk and is of type SSD or HDD. All hosts must have manually assigned roles.
cnv-requirements-satisfied | blocking | Validates that the host meets the requirements of Container Native Virtualization. The BIOS of the host must have CPU virtualization enabled, and the host must have enough CPU cores and RAM available for Container Native Virtualization. Validates the Host Path Provisioner if necessary.
lvm-requirements-satisfied | blocking | Validates that the host meets the requirements of the Logical Volume Manager Storage Operator. The host must have at least one additional empty disk, not partitioned and not formatted.
vsphere-disk-uuid-enabled | non-blocking | Verifies that each valid disk sets disk.EnableUUID to TRUE. In vSphere this results in each disk having a UUID.
compatible-agent | blocking | Checks that the discovery agent version is compatible with the agent docker image version.
no-skip-installation-disk | blocking | Checks that the installation disk is not skipping disk formatting.
no-skip-missing-disk | blocking | Checks that all disks marked to skip formatting are in the inventory. A disk ID can change on reboot, and this validation prevents issues caused by that.
media-connected | blocking | Checks the connection of the installation media to the host.
machine-cidr-defined | non-blocking | Checks that the machine network definition exists for the cluster.
id-platform-network-settings | blocking | Checks that the platform is compatible with the network settings. Some platforms are only permitted when installing Single Node OpenShift or when using user-managed networking.
mtu-valid | non-blocking | Checks the maximum transmission unit (MTU) of hosts and networking devices in the cluster environment to identify compatibility issues. For more information, see Additional resources.

Additional resources Changing the MTU for the cluster network 9.5. Cluster validations 9.5.1. Getting cluster validations by using the REST API Note If you use the web console, many of these validations will not show up by name. To obtain a list of validations consistent with the labels, use the following procedure. Prerequisites You have installed the jq utility. You have created an Infrastructure Environment by using the API or have created a cluster by using the web console. You have your Cluster ID exported in your shell as CLUSTER_ID. You have credentials to use when accessing the API and have exported a token as API_TOKEN in your shell. Procedure Refresh the API token:

$ source refresh-token

Get all cluster validations:

$ curl \
  --silent \
  --header "Authorization: Bearer $API_TOKEN" \
  https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID \
  | jq -r .validations_info \
  | jq 'map(.[])'

Get non-passing cluster validations:

$ curl \
  --silent \
  --header "Authorization: Bearer $API_TOKEN" \
  https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID \
  | jq -r .validations_info \
  | jq '. | map(.[] | select(.status=="failure" or .status=="pending")) | select(length>0)'

9.5.2. Cluster validations in detail

Parameter | Validation type | Description
machine-cidr-defined | non-blocking | Checks that the machine network definition exists for the cluster.
cluster-cidr-defined | non-blocking | Checks that the cluster network definition exists for the cluster.
service-cidr-defined | non-blocking | Checks that the service network definition exists for the cluster.
no-cidrs-overlapping | blocking | Checks that the defined networks do not overlap.
networks-same-address-families | blocking | Checks that the defined networks share the same address families (valid address families are IPv4 and IPv6).
network-prefix-valid | blocking | Checks the cluster network prefix to ensure that it is valid and allows enough address space for all hosts.
machine-cidr-equals-to-calculated-cidr | blocking | For a non user-managed networking cluster, checks that apiVIPs or ingressVIPs are members of the machine CIDR if they exist.
api-vips-defined | non-blocking | For a non user-managed networking cluster, checks that apiVIPs exist.
api-vips-valid | blocking | For a non user-managed networking cluster, checks if the apiVIPs belong to the machine CIDR and are not in use.
ingress-vips-defined | blocking | For a non user-managed networking cluster, checks that ingressVIPs exist.
ingress-vips-valid | non-blocking | For a non user-managed networking cluster, checks if the ingressVIPs belong to the machine CIDR and are not in use.
all-hosts-are-ready-to-install | blocking | Checks that all hosts in the cluster are in the "ready to install" status.
sufficient-masters-count | blocking | For a multi-node OpenShift Container Platform installation, checks that the number of hosts in the cluster designated, either manually or automatically, to be control plane (master) nodes equals the control_plane_count value that the user defined for the cluster. For a single-node OpenShift installation, checks that there is exactly one control plane (master) node and no compute (worker) nodes.
dns-domain-defined | non-blocking | Checks that the base DNS domain exists for the cluster.
pull-secret-set | non-blocking | Checks that the pull secret exists. Does not check that the pull secret is valid or authorized.
ntp-server-configured | blocking | Checks that each of the host clocks is no more than 4 minutes out of sync with the others.
lso-requirements-satisfied | blocking | Validates that the cluster meets the requirements of the Local Storage Operator.
odf-requirements-satisfied | blocking | Validates that the cluster meets the requirements of the OpenShift Data Foundation Operator. The cluster has either at least three control plane (master) nodes and no compute (worker) nodes at all (compact mode), or at least three control plane (master) nodes and at least three compute (worker) nodes (standard mode). Each host running ODF workloads (control plane nodes in compact mode, compute nodes in standard mode) requires a non-installation disk of type SSD or HDD with at least 25GB of storage. All hosts must have manually assigned roles.
cnv-requirements-satisfied | blocking | Validates that the cluster meets the requirements of Container Native Virtualization. The CPU architecture for the cluster must be x86.
lvm-requirements-satisfied | blocking | Validates that the cluster meets the requirements of the Logical Volume Manager Storage Operator. The cluster must be single node and must be running OpenShift 4.11.0 or later.
network-type-valid | blocking | Checks the validity of the network type if it exists. The network type must be OpenShiftSDN (OpenShift Container Platform 4.14 or earlier) or OVNKubernetes. OpenShiftSDN does not support IPv6 or Single Node OpenShift and is not supported for OpenShift Container Platform 4.15 and later releases. OVNKubernetes does not support VIP DHCP allocation.
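The validation output shown above can also be scripted. The following is a minimal sketch, assuming the same API_TOKEN and CLUSTER_ID variables from the prerequisites and the refresh-token helper used in the procedures, that polls the cluster until no validation reports failure or pending, for example as a gate before triggering the installation. The 30-second interval is an arbitrary choice.

#!/bin/bash
# Sketch: wait until every cluster validation passes.
# Assumes API_TOKEN and CLUSTER_ID are exported as described in the
# prerequisites; validations_info is a JSON string, hence the double jq pass.
while true; do
  source refresh-token   # keep the API token fresh between polls
  failing=$(curl --silent \
    --header "Authorization: Bearer $API_TOKEN" \
    "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    | jq -r .validations_info \
    | jq '[.[] | .[] | select(.status=="failure" or .status=="pending")] | length')
  if [ "$failing" -eq 0 ]; then
    echo "All cluster validations passed."
    break
  fi
  echo "$failing validation(s) failing or pending; retrying in 30 seconds..."
  sleep 30
done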
[ "source refresh-token", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts | jq -r .[].validations_info | jq 'map(.[])'", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts | jq -r .[].validations_info | jq 'map(.[]) | map(select(.status==\"failure\" or .status==\"pending\")) | select(length>0)'", "source refresh-token", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID | jq -r .validations_info | jq 'map(.[])'", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID | jq -r .validations_info | jq '. | map(.[] | select(.status==\"failure\" or .status==\"pending\")) | select(length>0)'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_preinstallation-validations
Providing feedback on Red Hat JBoss Core Services documentation
Providing feedback on Red Hat JBoss Core Services documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you are prompted to create one. Procedure Click the following link to create a ticket. Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates the issue and routes it to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_4_release_notes/providing-direct-documentation-feedback_2.4.57-release-notes
Chapter 17. Configuring the IO subsystem
Chapter 17. Configuring the IO subsystem You can configure the io subsystem in JBoss EAP to enhance system performance and support application operations. By managing XNIO workers and buffer pools, you can ensure optimal functionality for subsystems like Undertow and Remoting. Properly adjusting these components enables efficient data handling and responsiveness in your applications. 17.1. IO Subsystem configuration The io subsystem defines the XNIO workers and buffer pools used by other subsystems, such as Undertow and Remoting. These workers and buffer pools are defined in the io subsystem under the following resource addresses:

/subsystem=io/worker=*
/subsystem=io/buffer-pool=*

This configuration represents the default setup for the io subsystem. To view the default configuration of the io subsystem, run the following CLI command:

/subsystem=io/:read-resource(recursive=true)

17.2. Configuring a worker You can configure a worker in JBoss EAP to efficiently manage IO tasks and worker threads. Workers are XNIO worker instances, which provide an abstraction layer for the Java NIO APIs and support SSL. Workers are responsible for managing IO operations, coordinating tasks, and ensuring that requests for sending and receiving data are processed efficiently. These tasks are handled by a set of threads maintained in an IO thread pool. By default, JBoss EAP includes a single worker named default. You can define additional workers if required. When creating more than one worker, be aware that each additional worker results in a separate IO thread pool, which can impact resource utilization. When no thread sizes are specified for a worker, JBoss EAP calculates the default values based on the number of available CPU cores. The configuration options are as follows: io-threads: Specify the number of IO threads to create for the worker. If not specified, the default is calculated as cpuCount * 2. task-max-threads: Specify the maximum number of threads for the worker task thread pool. If not specified, the default value is calculated as cpuCount * 16. You can manage workers through management CLI commands to update, create, or delete configurations. Prerequisites JBoss EAP is running. Procedure Update an existing worker by using the following command:

/subsystem=io/worker=default:write-attribute(name=io-threads,value=10)

Reload the server to apply the changes by using the following command:

reload

Create a new worker by using the following command:

/subsystem=io/worker=newWorker:add

If needed, you can delete a worker by using the following command:

/subsystem=io/worker=newWorker:remove

Reload the server to apply the changes by using the following command:

reload

Additional resources IO subsystem attributes 17.3. Configuring a buffer pool You can configure a buffer pool in JBoss EAP to manage pooled NIO buffer instances, which play a crucial role in application performance. You can update existing buffer pools, create new ones, and delete those no longer needed to optimize system efficiency. Note IO buffer pools are deprecated but remain the default configuration in the current release. Buffer pools are pooled NIO buffer instances. Changing the buffer size significantly impacts application performance. For most servers, the ideal buffer size is typically 16k. For more information, see the Configuring byte buffer pools section of the Configuration Guide for JBoss EAP. For more information about the deprecation of the earlier buffer pool API and its replacement with the Undertow subsystem buffer pool, see Add lightweight global buffer pool API; deprecate heavier API. Prerequisites JBoss EAP is running.
Procedure Update an existing buffer pool by using the following command:

/subsystem=io/buffer-pool=default:write-attribute(name=direct-buffers,value=true)

Reload the server to apply the changes by using the following command:

reload

Create a new buffer pool by using the following command:

/subsystem=io/buffer-pool=newBuffer:add

If needed, you can delete a buffer pool by using the following command:

/subsystem=io/buffer-pool=newBuffer:remove

Reload the server to apply the changes by using the following command:

reload

Additional resources IO subsystem attributes 17.4. IO subsystem performance tuning You can optimize and monitor the performance of the io subsystem to ensure efficient resource management and responsiveness. Regular monitoring helps identify potential issues before they affect system performance. Additional resources IO Subsystem Tuning Tuning the Undertow Thread Pool in JBoss EAP 7.x and 8.x
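The procedures above can also be batched. The following is a minimal sketch of a non-interactive invocation, assuming a standalone server reachable on the default management endpoint and EAP_HOME pointing at the installation directory; the attribute values are illustrative only and should be sized for your workload.

#!/bin/bash
# Apply the worker and buffer pool changes from this chapter in one
# non-interactive management CLI session, then reload the server.
$EAP_HOME/bin/jboss-cli.sh --connect <<'EOF'
/subsystem=io/worker=default:write-attribute(name=io-threads,value=10)
/subsystem=io/worker=default:write-attribute(name=task-max-threads,value=80)
/subsystem=io/buffer-pool=default:write-attribute(name=direct-buffers,value=true)
reload
EOF

The task-max-threads value of 80 mirrors the cpuCount * 16 default for a hypothetical 5-core host; setting both thread attributes explicitly avoids surprises when the container or VM reports a different CPU count than expected.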
[ "/subsystem=io/worker=* /subsystem=io/buffer-pool=*", "/subsystem=io/:read-resource(recursive=true)", "/subsystem=io/worker=default:write-attribute(name=io-threads,value=10)", "reload", "/subsystem=io/worker=newWorker:add", "/subsystem=io/worker=newWorker:remove", "reload", "/subsystem=io/buffer-pool=default:write-attribute(name=direct-buffers,value=true)", "reload", "/subsystem=io/buffer-pool=newBuffer:add", "/subsystem=io/buffer-pool=newBuffer:remove", "reload", "/subsystem=io/worker=default:write-attribute(name=io-threads,value=10)", "reload", "/subsystem=io/worker=newWorker:add", "/subsystem=io/worker=newWorker:remove", "reload", "/subsystem=io/buffer-pool=default:write-attribute(name=direct-buffers,value=true)", "reload", "/subsystem=io/buffer-pool=newBuffer:add", "/subsystem=io/buffer-pool=newBuffer:remove", "reload" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/configuration_guide/configuring-the-io-subsystem_jakarta-connectors-management
23.2. Red Hat Data Grid Server CLI
23.2. Red Hat Data Grid Server CLI Red Hat JBoss Data Grid includes a new Remote Client-Server mode CLI. This CLI can only be used for specific use cases, such as manipulating the server subsystem for the following: configuration management obtaining metrics 23.2.1. Start the Server Mode CLI Use the following commands to run the JBoss Data Grid Server CLI from the command line: For Linux:

JDG_HOME/bin/cli.sh

For Windows:

C:\>JDG_HOME\bin\cli.bat
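As an illustration, a session might look like the following. This is a sketch that assumes a standalone JBoss Data Grid server on the local machine listening on the default native management port (9999); the exact prompt and available resource paths vary by version and configuration.

$ JDG_HOME/bin/cli.sh
[disconnected /] connect 127.0.0.1:9999
[standalone@127.0.0.1:9999 /] version
[standalone@127.0.0.1:9999 /] quit

Once connected, the same :read-resource and :write-attribute style operations used elsewhere in this guide can be issued against the server subsystem for configuration management and metrics.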
[ "JDG_HOME/bin/cli.sh", "C:\\>JDG_HOME\\bin\\cli.bat" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-Red_Hat_Data_Grid_Server_CLI
Chapter 3. Migrating Operator deployments on OpenShift
Chapter 3. Migrating Operator deployments on OpenShift To adapt to the revamped server configuration, the Red Hat build of Keycloak Operator was completely recreated. The Operator provides full integration with Red Hat build of Keycloak, but it is not backward compatible with the Red Hat Single Sign-On 7.6 Operator. Using the new Operator requires creating a new Red Hat build of Keycloak deployment. For full details, see the Operator Guide. 3.1. Prerequisites The instance of Red Hat Single Sign-On 7.6 was shut down so that it does not use the same database instance that will be used by Red Hat build of Keycloak. In case the unsupported embedded database (that is managed by the Red Hat Single Sign-On 7.6 Operator) was used, it has been converted to an external database that is provisioned by the user. A database backup was created. You reviewed the Release Notes. 3.2. Migration process Install the Red Hat build of Keycloak Operator to the namespace. Create new CRs and related Secrets. Manually migrate your Red Hat Single Sign-On 7.6 configuration to your new Keycloak CR. If custom providers were used, migrate them and create a custom Red Hat build of Keycloak container image to include them. If custom themes were used, migrate them and create a custom Red Hat build of Keycloak container image to include them. 3.3. Migrating Keycloak CR The Keycloak CR now supports all server configuration options. All relevant options are available as first-class citizen fields directly under the spec of the CR. All options in the CR follow the same naming conventions as the server options, making the experience between bare metal and Operator deployments seamless. Additionally, you can define any options that are missing from the CR in the additionalOptions field, such as SPI providers configuration. Another option is to use podTemplate, a Technology Preview field, to modify the raw Kubernetes deployment pod template in case a supported alternative does not exist as a first-class citizen field in the CR. The following shows an example Keycloak CR to deploy Red Hat build of Keycloak through the Operator:

apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: example-kc
spec:
  instances: 1
  db:
    vendor: postgres
    host: postgres-db
    usernameSecret:
      name: keycloak-db-secret
      key: username
    passwordSecret:
      name: keycloak-db-secret
      key: password
  http:
    tlsSecret: example-tls-secret
  hostname:
    hostname: test.keycloak.org
  additionalOptions:
    - name: spi-connections-http-client-default-connection-pool-size
      value: 20

Notice the resemblance to the CLI configuration:

./kc.sh start --db=postgres --db-url-host=postgres-db --db-username=user --db-password=pass --https-certificate-file=mycertfile --https-certificate-key-file=myprivatekey --hostname=test.keycloak.org --spi-connections-http-client-default-connection-pool-size=20

Additional resources Basic Keycloak deployment Advanced configuration 3.3.1. Migrating database configuration Red Hat build of Keycloak can use the same database instance as was previously used by Red Hat Single Sign-On 7.6. The database schema is migrated automatically the first time Red Hat build of Keycloak connects to it. Warning Migrating the embedded database managed by the Red Hat Single Sign-On 7.6 Operator is not supported.
In the Red Hat Single Sign-On 7.6 Operator, the external database connection was configured by using a Secret, for example:

apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
  namespace: keycloak
  labels:
    app: sso
stringData:
  POSTGRES_DATABASE: kc-db-name
  POSTGRES_EXTERNAL_ADDRESS: my-postgres-hostname
  POSTGRES_EXTERNAL_PORT: 5432
  POSTGRES_USERNAME: user
  POSTGRES_PASSWORD: pass
type: Opaque

In Red Hat build of Keycloak, the database is configured directly in the Keycloak CR with credentials referenced as Secrets, for example:

apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: example-kc
spec:
  db:
    vendor: postgres
    host: my-postgres-hostname
    port: 5432
    usernameSecret:
      name: keycloak-db-secret
      key: username
    passwordSecret:
      name: keycloak-db-secret
      key: password
...
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
stringData:
  username: "user"
  password: "pass"
type: Opaque

3.3.1.1. Supported database vendors The Red Hat Single Sign-On 7.6 Operator supported only PostgreSQL databases, but the Red Hat build of Keycloak Operator supports all database vendors that are supported by the server. 3.3.2. Migrating TLS configuration The Red Hat Single Sign-On 7.6 Operator by default configured the server to use the TLS Secret generated by the OpenShift CA. The Red Hat build of Keycloak Operator does not make any assumptions around TLS to meet production best practices, and it requires users to provide their own TLS certificate and key pair, for example:

apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: example-kc
spec:
  http:
    tlsSecret: example-tls-secret
...

The Secret referred to in tlsSecret must use the standard Kubernetes TLS Secret (kubernetes.io/tls) type. The Red Hat Single Sign-On 7.6 Operator used the reencrypt TLS termination strategy by default on the Route; the Red Hat build of Keycloak Operator uses the passthrough strategy by default. Additionally, the Red Hat Single Sign-On 7.6 Operator supported configuring TLS termination, which the Red Hat build of Keycloak Operator does not support in the current release. If the default Operator-managed Route does not meet the desired TLS configuration, create a custom Route and disable the default one as follows:

apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: example-kc
spec:
  ingress:
    enabled: false
...

3.3.3. Using a custom image for extensions To reflect best practices and support immutable containers, the Red Hat build of Keycloak Operator no longer supports specifying extensions in the Keycloak CR. In order to deploy an extension, an optimized custom image must be built. The Keycloak CR now includes a dedicated field, image, for specifying Red Hat build of Keycloak images, for example:

apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: example-kc
spec:
  image: quay.io/my-company/my-keycloak:latest
...

Note When specifying a custom image, the Operator assumes it is already optimized and does not perform the costly optimization at each server start. Additional resources Using custom Keycloak images in the Operator Creating a customized and optimized container image 3.3.4. Upgrade strategy option removed The Red Hat Single Sign-On 7.6 Operator supported recreate and rolling strategies when performing a server upgrade. This approach was not practical: it was up to the user to choose whether the Red Hat Single Sign-On 7.6 Operator should scale down the deployment before performing an upgrade and database migration.
It was not clear to users when the rolling strategy could be safely used. Therefore, this option was removed in the Red Hat build of Keycloak Operator, which always implicitly performs the recreate strategy. This strategy scales down the whole deployment before creating Pods with the new server container image, ensuring that only a single server version accesses the database. 3.3.5. Health endpoint enabled by default Red Hat build of Keycloak configures the server to expose health endpoints by default that are used by OpenShift probes. The endpoints are exposed on the management interface, which uses a dedicated port (9000 by default) and is accessible from within OpenShift, but it is not exposed by the Operator as a Route. 3.3.6. Migrating advanced deployment options using Pod templates The Red Hat Single Sign-On 7.6 Operator exposed multiple low-level fields for deployment configuration, such as volumes. The Red Hat build of Keycloak Operator is more opinionated and does not expose most of these fields. However, it is still possible to configure any desired deployment fields in the podTemplate, for example:

apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: example-kc
spec:
  unsupported:
    podTemplate:
      metadata:
        labels:
          foo: "bar"
      spec:
        containers:
          - volumeMounts:
              - name: test-volume
                mountPath: /mnt/test
        volumes:
          - name: test-volume
            secret:
              secretName: test-secret
...

Note The spec.unsupported.podTemplate field is a Technology Preview feature. It offers only limited support as it exposes low-level configuration where full functionality has not been tested under all conditions. Whenever possible, use the fully supported first-class citizen fields in the top level of the CR spec. For example, instead of spec.unsupported.podTemplate.spec.imagePullSecrets, use spec.imagePullSecrets directly. 3.3.7. Connecting to an external instance is no longer supported The Red Hat Single Sign-On 7.6 Operator supported connecting to an external instance of Red Hat Single Sign-On 7.6. For example, creating clients within an existing realm through Client CRs is no longer supported in the Red Hat build of Keycloak Operator. 3.3.8. Migrating Horizontal Pod Autoscaler enabled deployments To use a Horizontal Pod Autoscaler (HPA) with Red Hat Single Sign-On 7.6, it was necessary to set the disableReplicasSyncing: true field in the Keycloak CR and scale the server StatefulSet. This is no longer necessary, because the Keycloak CR in the Red Hat build of Keycloak Operator can be scaled directly by an HPA. 3.4. Migrating the Keycloak realm CR The Realm CR was replaced by the Realm Import CR, which offers similar functionality and has a similar schema. The Realm Import CR offers only Realm bootstrapping and as such no longer supports Realm deletion. It also does not support updates, similarly to the Realm CR. In contrast to the Realm CR, which offered only a few selected fields, the Realm Import CR includes the full Realm representation. Example of a Red Hat Single Sign-On 7.6 Realm CR:

apiVersion: keycloak.org/v1alpha1
kind: KeycloakRealm
metadata:
  name: example-keycloakrealm
spec:
  instanceSelector:
    matchLabels:
      app: sso
  realm:
    id: "basic"
    realm: "basic"
    enabled: True
    displayName: "Basic Realm"

Example of the corresponding Red Hat build of Keycloak Realm Import CR:

apiVersion: k8s.keycloak.org/v2alpha1
kind: KeycloakRealmImport
metadata:
  name: example-keycloakrealm
spec:
  keycloakCRName: example-kc
  realm:
    id: "basic"
    realm: "basic"
    enabled: True
    displayName: "Basic Realm"
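As a sketch of the bootstrapping workflow, the Realm Import CR above could be saved to a file and applied with the OpenShift CLI. The file name realm-import.yaml is an assumption for illustration, and the referenced Keycloak CR (example-kc) is expected to already exist in the current namespace:

$ oc apply -f realm-import.yaml
$ oc get keycloakrealmimports example-keycloakrealm -o yaml

The import runs once when the CR is created; because the CR does not support updates, as noted above, later changes to the realm must be managed through the server itself rather than by reapplying the CR.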
Additional resources Realm Import 3.5. Removed CRs The Client and User CRs were removed from the Red Hat build of Keycloak Operator. The lack of these CRs can be partially mitigated by the new Realm Import CR, which allows initial creation (but not updates or deletion) of virtually any realm resource. Adding support for Client CRs is on the roadmap for a future Red Hat build of Keycloak release, while User CRs are not currently a planned feature.
[ "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: instances: 1 db: vendor: postgres host: postgres-db usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password http: tlsSecret: example-tls-secret hostname: hostname: test.keycloak.org additionalOptions: - name: spi-connections-http-client-default-connection-pool-size value: 20", "./kc.sh start --db=postgres --db-url-host=postgres-db --db-username=user --db-password=pass --https-certificate-file=mycertfile --https-certificate-key-file=myprivatekey --hostname=test.keycloak.org --spi-connections-http-client-default-connection-pool-size=20", "apiVersion: v1 kind: Secret metadata: name: keycloak-db-secret namespace: keycloak labels: app: sso stringData: POSTGRES_DATABASE: kc-db-name POSTGRES_EXTERNAL_ADDRESS: my-postgres-hostname POSTGRES_EXTERNAL_PORT: 5432 POSTGRES_USERNAME: user POSTGRES_PASSWORD: pass type: Opaque", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: db: vendor: postgres host: my-postgres-hostname port: 5432 usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password apiVersion: v1 kind: Secret metadata: name: keycloak-db-secret stringData: username: \"user\" password: \"pass\" type: Opaque", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: http: tlsSecret: example-tls-secret", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ingress: enabled: false", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: image: quay.io/my-company/my-keycloak:latest", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: unsupported: podTemplate: metadata: labels: foo: \"bar\" spec: containers: - volumeMounts: - name: test-volume mountPath: /mnt/test volumes: - name: test-volume secret: secretName: test-secret", "apiVersion: keycloak.org/v1alpha1 kind: KeycloakRealm metadata: name: example-keycloakrealm spec: instanceSelector: matchLabels: app: sso realm: id: \"basic\" realm: \"basic\" enabled: True displayName: \"Basic Realm\"", "apiVersion: k8s.keycloak.org/v2alpha1 kind: KeycloakRealmImport metadata: name: example-keycloakrealm spec: keycloakCRName: example-kc realm: id: \"basic\" realm: \"basic\" enabled: True displayName: \"Basic Realm\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/migration_guide/migrating-operator
Building, running, and managing containers
Building, running, and managing containers Red Hat Enterprise Linux 8 Using Podman, Buildah, and Skopeo on Red Hat Enterprise Linux 8 Red Hat Customer Content Services
[ "subscription-manager register Registering to: subscription.rhsm.redhat.com:443/subscription Username: <username> Password: <password>", "subscription-manager attach --auto", "subscription-manager attach --pool <PoolID>", "yum module install -y container-tools", "yum install podman-docker", "yum install podman -y", "useradd -c \"Joe Jones\" joe passwd joe", "ssh [email protected]", "podman pull registry.access.redhat.com/ubi8/ubi", "podman run --rm --name=myubi registry.access.redhat.com/ubi8/ubi cat /etc/os-release NAME=\"Red Hat Enterprise Linux\" VERSION=\"8 (Plow)\"", "usermod --add-subuids 200000-201000 --add-subgids 200000-201000 <username>", "grep <username> /etc/subuid /etc/subgid /etc/subuid: <username> :200000:1001 /etc/subgid: <username> :200000:1001", "podman run -d --cap-add SYS_TIME ntpd", "podman run -d httpd", "podman run -d -p 80:80 httpd", "echo 80 > /proc/sys/net/ipv4/ip_unprivileged_port_start", "microdnf module enable nodejs:14 Downloading metadata Enabling module streams: nodejs:14 Running transaction test", "podman pull <registry>[:<port>]/[<namespace>/]<name>:<tag>", "podman info -f json | jq '.registries[\"search\"]' [ \"registry.access.redhat.com\", \"registry.redhat.io\", \"docker.io\" ]", "unqualified-search-registries = [\"registry.access.redhat.com\", \"registry.redhat.io\", \"docker.io\"] short-name-mode = \"permissive\"", "[[registry]] location=\"localhost:5000\" insecure=true", "[[registry]] location = \"registry.example.org\" blocked = true", "[[registry]] location = \"registry.example.org\" prefix=\"registry.example.org/namespace\" blocked = true", "[[registry]] location = \"registry.example.org\" prefix=\"registry.example.org/namespace/image\" blocked = true", "[[registry]] location=\"registry.example.com\" [[registry.mirror]] location=\"mirror-1.com\" [[registry.mirror]] location=\"mirror-2.com\"", "podman login quay.io", "podman search quay.io/postgresql-10 INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED redhat.io registry.redhat.io/rhel8/postgresql-10 This container image ... 0 redhat.io registry.redhat.io/rhscl/postgresql-10-rhel7 PostgreSQL is an ... 0", "podman search quay.io/", "podman search postgresql-10", "podman login registry.redhat.io Username: <username> Password: <password> Login Succeeded!", "podman pull registry.redhat.io/ubi8/ubi", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/ubi8/ubi latest 3269c37eae33 7 weeks ago 208 MB", "unqualified-search-registries=[\"registry.fedoraproject.org\", \"quay.io\"] [aliases] \"fedora\"=\"registry.fedoraproject.org/fedora\"", "podman pull fedora Resolved \"fedora\" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf) Trying to pull registry.fedoraproject.org/fedora:latest... Storing signatures", "podman pull nginx ? Please select an image: registry.access.redhat.com/nginx:latest registry.redhat.io/nginx:latest ▸ docker.io/library/nginx:latest ✔ docker.io/library/nginx:latest Trying to pull docker.io/library/nginx:latest... Storing signatures", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.fedoraproject.org/fedora latest 28317703decd 12 days ago 184 MB docker.io/library/nginx latest 08b152afcfae 13 days ago 137 MB", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8/ubi latest 3269c37eae33 6 weeks ago 208 MB", "podman inspect registry.redhat.io/ubi8/ubi ... 
\"Cmd\": [ \"/bin/bash\" ], \"Labels\": { \"architecture\": \"x86_64\", \"build-date\": \"2020-12-10T01:59:40.343735\", \"com.redhat.build-host\": \"cpt-1002.osbs.prod.upshift.rdu2.redhat.com\", \"com.redhat.component\": \"ubi8-container\", \"com.redhat.license_terms\": \"https://www.redhat.com/..., \"description\": \"The Universal Base Image is }", "skopeo inspect docker://registry.redhat.io/ubi8/ubi-init { \"Name\": \"registry.redhat.io/ubi8/ubi8-init\", \"Digest\": \"sha256:c6d1e50ab...\", \"RepoTags\": [ \"latest\" ], \"Created\": \"2020-12-10T07:16:37.250312Z\", \"DockerVersion\": \"1.13.1\", \"Labels\": { \"architecture\": \"x86_64\", \"build-date\": \"2020-12-10T07:16:11.378348\", \"com.redhat.build-host\": \"cpt-1007.osbs.prod.upshift.rdu2.redhat.com\", \"com.redhat.component\": \"ubi8-init-container\", \"com.redhat.license_terms\": \"https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI\", \"description\": \"The Universal Base Image Init is designed to run an init system as PID 1 for running multi-services inside a container } }", "skopeo copy docker://quay.io/skopeo/stable:latest docker://registry.example.com/skopeo:latest", "mkdir -p /var/lib/images/nginx", "skopeo copy docker://docker.io/nginx:latest dir:/var/lib/images/nginx", "ls /var/lib/images/nginx 08b11a3d692c1a2e15ae840f2c15c18308dcb079aa5320e15d46b62015c0f6f3 4fcb23e29ba19bf305d0d4b35412625fea51e82292ec7312f9be724cb6e31ffd manifest.json version", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/ubi8/ubi latest 3269c37eae33 7 weeks ago 208 MB", "podman tag registry.redhat.io/ubi8/ubi myubi", "podman tag 3269c37eae33 myubi", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/ubi8/ubi latest 3269c37eae33 2 months ago 208 MB localhost/myubi latest 3269c37eae33 2 months ago 208 MB", "podman tag registry.redhat.io/ubi8/ubi myubi:8", "podman tag 3269c37eae33 myubi:8", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/ubi8/ubi latest 3269c37eae33 2 months ago 208 MB localhost/myubi latest 3269c37eae33 2 months ago 208 MB localhost/myubi 8 3269c37eae33 2 months ago 208 MB", "podman build --platform linux/arm64,linux/amd64 --manifest <registry> / <image> .", "podman manifest push <registry> / <image>", "podman manifest inspect <registry>/<image>", "podman save -o myrsyslog.tar registry.redhat.io/rhel8/rsyslog:latest", "podman save -o myrsyslog-oci.tar --format=oci-archive registry.redhat.io/rhel8/rsyslog", "file myrsyslog.tar myrsyslog.tar: POSIX tar archive", "podman load -i myrsyslog.tar Loaded image(s): registry.redhat.io/rhel8/rsyslog:latest", "podman tag registry.redhat.io/ubi8/ubi registry.example.com:5000/ubi8/ubi", "podman push registry.example.com:5000/ubi8/ubi", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/rhel8/rsyslog latest 4b32d14201de 7 weeks ago 228 MB registry.redhat.io/ubi8/ubi latest 3269c37eae33 7 weeks ago 208 MB localhost/myubi X.Y 3269c37eae33 7 weeks ago 208 MB", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7ccd6001166e registry.redhat.io/rhel8/rsyslog:latest /bin/rsyslog.sh 6 seconds ago Up 5 seconds ago mysyslog", "podman stop mysyslog 7ccd6001166e9720c47fbeb077e0afd0bb635e74a1b0ede3fd34d09eaf5a52e9", "podman rmi registry.redhat.io/rhel8/rsyslog", "podman rmi registry.redhat.io/rhel8/rsyslog registry.redhat.io/ubi8/ubi", "podman rmi -a", "podman rmi -f 1de7d7b3f531 1de7d7b3f531", "run [options] image [command [arg ...]]", "podman run --rm registry.access.redhat.com/ubi8/ubi 
cat /etc/os-release NAME=\"Red Hat Enterprise Linux\" ID=\"rhel\" HOME_URL=\"https://www.redhat.com/\" BUG_REPORT_URL=\"https://bugzilla.redhat.com/\" REDHAT_BUGZILLA_PRODUCT=\" Red Hat Enterprise Linux 8\"", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES", "podman run --name=myubi -it registry.access.redhat.com/ubi8/ubi /bin/bash", "yum install procps-ng", "ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 12:55 pts/0 00:00:00 /bin/bash root 31 1 0 13:07 pts/0 00:00:00 ps -ef", "exit", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 1984555a2c27 registry.redhat.io/ubi8/ubi:latest /bin/bash 21 minutes ago Exited (0) 21 minutes ago myubi", "podman run -d registry.redhat.io/rhel8/rsyslog", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 74b1da000a11 rhel8/rsyslog /bin/rsyslog.sh 2 minutes ago Up About a minute musing_brown", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES IS INFRA d65aecc325a4 ubi8/ubi /bin/bash 3 secs ago Exited (0) 5 secs ago peaceful_hopper false 74b1da000a11 rhel8/rsyslog rsyslog.sh 2 mins ago Up About a minute musing_brown false", "podman start myubi", "podman start -a -i myubi", "exit", "podman inspect 64ad95327c74 [ { \"Id\": \"64ad95327c740ad9de468d551c50b6d906344027a0e645927256cd061049f681\", \"Created\": \"2021-03-02T11:23:54.591685515+01:00\", \"Path\": \"/bin/rsyslog.sh\", \"Args\": [ \"/bin/rsyslog.sh\" ], \"State\": { \"OciVersion\": \"1.0.2-dev\", \"Status\": \"running\",", "podman inspect --format='{{.State.StartedAt}}' 64ad95327c74 2021-03-02 11:23:54.945071961 +0100 CET", "podman run --name=\"log_test\" -v /dev/log:/dev/log --rm registry.redhat.io/ubi8/ubi logger \"Testing logging to the host\"", "journalctl -b | grep Testing Dec 09 16:55:00 localhost.localdomain root[14634]: Testing logging to the host", "podman run -d --name=mysyslog registry.redhat.io/rhel8/rsyslog", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES c56ef6a256f8 registry.redhat.io/rhel8/rsyslog:latest /bin/rsyslog.sh 20 minutes ago Up 20 minutes ago mysyslog", "podman mount mysyslog /var/lib/containers/storage/overlay/990b5c6ddcdeed4bde7b245885ce4544c553d108310e2b797d7be46750894719/merged", "ls /var/lib/containers/storage/overlay/990b5c6ddcdeed4bde7b245885ce4544c553d108310e2b797d7be46750894719/merged bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var", "cat /var/lib/containers/storage/overlay/990b5c6ddcdeed4bde7b245885ce4544c553d108310e2b797d7be46750894719/merged/etc/os-release NAME=\"Red Hat Enterprise Linux\" VERSION=\"8 (Ootpa)\" ID=\"rhel\" ID_LIKE=\"fedora\"", "podman run -d --ip=10.88.0.44 registry.access.redhat.com/rhel8/rsyslog efde5f0a8c723f70dd5cb5dc3d5039df3b962fae65575b08662e0d5b5f9fbe85", "podman inspect efde5f0a8c723 | grep 10.88.0.44 \"IPAddress\": \"10.88.0.44\",", "podman exec -it myrsyslog rpm -qa tzdata-2020d-1.el8.noarch python3-pip-wheel-9.0.3-18.el8.noarch redhat-release-8.3-1.0.el8.x86_64 filesystem-3.8-3.el8.x86_64", "podman exec -it myrsyslog /bin/bash", "yum install procps-ng", "ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 10:23 ? 
00:00:01 /usr/sbin/rsyslogd -n root 8 0 0 11:07 pts/0 00:00:00 /bin/bash root 47 8 0 11:13 pts/0 00:00:00 ps -ef", "df -h Filesystem Size Used Avail Use% Mounted on fuse-overlayfs 27G 7.1G 20G 27% / tmpfs 64M 0 64M 0% /dev tmpfs 269M 936K 268M 1% /etc/hosts shm 63M 0 63M 0% /dev/shm", "uname -r 4.18.0-240.10.1.el8_3.x86_64", "free --mega total used free shared buff/cache available Mem: 2818 615 1183 12 1020 1957 Swap: 3124 0 3124", "podman volume create hostvolume", "podman volume inspect hostvolume [ { \"name\": \"hostvolume\", \"labels\": {}, \"mountpoint\": \"/home/username/.local/share/containers/storage/volumes/hostvolume/_data\", \"driver\": \"local\", \"options\": {}, \"scope\": \"local\" } ]", "echo \"Hello from host\" >> $mntPoint/host.txt", "ls $mntPoint/ host.txt", "podman run -it --name myubi1 -v hostvolume:/containervolume1 registry.access.redhat.com/ubi8/ubi /bin/bash", "ls /containervolume1 host.txt", "echo \"Hello from container 1\" >> /containervolume1/container1.txt", "ls $mntPoint container1.txt host.txt", "podman run -it --name myubi2 -v hostvolume:/containervolume2 registry.access.redhat.com/ubi8/ubi /bin/bash", "ls /containervolume2 container1.txt host.txt", "echo \"Hello from container 2\" >> /containervolume2/container2.txt", "ls $mntPoint container1.txt container2.txt host.txt", "podman run -dt --name=myubi registry.access.redhat.com/ubi8/ubi", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a6a6d4896142 registry.access.redhat.com/ubi8/ubi:latest /bin/bash 7 seconds ago Up 7 seconds ago myubi", "podman attach myubi", "echo \"hello\" > testfile", "podman export -o myubi.tar a6a6d4896142", "ls -l -rw-r--r--. 1 user user 210885120 Apr 6 10:50 myubi-container.tar", "mkdir myubi-container tar -xf myubi-container.tar -C myubi-container tree -L 1 myubi-container ├── bin -> usr/bin ├── boot ├── dev ├── etc ├── home ├── lib -> usr/lib ├── lib64 -> usr/lib64 ├── lost+found ├── media ├── mnt ├── opt ├── proc ├── root ├── run ├── sbin -> usr/sbin ├── srv ├── sys ├── testfile ├── tmp ├── usr └── var 20 directories, 1 file", "podman import myubi.tar myubi-imported Getting image source signatures Copying blob 277cab30fe96 done Copying config c296689a17 done Writing manifest to image destination Storing signatures c296689a17da2f33bf9d16071911636d7ce4d63f329741db679c3f41537e7cbf", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/library/myubi-imported latest c296689a17da 51 seconds ago 211 MB", "podman run -it --name=myubi-imported docker.io/library/myubi-imported cat testfile hello", "podman stop myubi", "podman stop 1984555a2c27", "podman kill --signal=\"SIGHUP\" 74b1da000a11 74b1da000a114015886c557deec8bed9dfb80c888097aa83f30ca4074ff55fb2", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES IS INFRA d65aecc325a4 ubi8/ubi /bin/bash 3 secs ago Exited (0) 5 secs ago peaceful_hopper false 74b1da000a11 rhel8/rsyslog rsyslog.sh 2 mins ago Up About a minute musing_brown false", "podman rm peaceful_hopper", "podman stop musing_brown podman rm musing_brown", "podman rm clever_yonath furious_shockley", "podman rm -a", "#!/bin/bash if id -nG \"$USER\" 2> /dev/null | grep -qw \"$GROUP\" 2> /dev/null ; then exit 0 else exit 1 fi", "podman run image", "external preexec hook /etc/containers/pre-exec-hooks/001-check-groups.sh failed", "podman pull registry.access.redhat.com/ubi8/ubi", "podman export $(podman create registry.access.redhat.com/ubi8/ubi) > rhel.tar", "mkdir -p bundle/rootfs", "tar -C bundle/rootfs -xf rhel.tar", "<runtime>
spec -b bundle", "vi bundle/config.json", "<runtime> create -b bundle/ myubi", "<runtime> start myubi", "<runtime> list ID PID STATUS BUNDLE CREATED OWNER myubi 0 stopped /root/bundle 2021-09-14T09:52:26.659714605Z root", "podman pull registry.access.redhat.com/ubi8/ubi", "podman run --name=myubi -dt --runtime=<runtime> ubi8 e4654eb4df12ac031f1d0f2657dc4ae6ff8eb0085bf114623b66cc664072e69b", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e4654eb4df12 registry.access.redhat.com/ubi8:latest bash 4 seconds ago Up 4 seconds ago myubi", "podman inspect myubi --format \"{{.OCIRuntime}}\" <runtime>", "vim /etc/containers/containers.conf [engine] runtime = \" <runtime> \"", "podman run --name=myubi -dt ubi8 bash Resolved \"ubi8\" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf) Trying to pull registry.access.redhat.com/ubi8:latest... Storing signatures", "podman inspect myubi --format \"{{.OCIRuntime}}\" <runtime>", "FROM registry.access.redhat.com/ubi8/ubi-init RUN yum -y install httpd; yum clean all; systemctl enable httpd; RUN echo \"Successful Web Server Test\" > /var/www/html/index.html RUN mkdir /etc/systemd/system/httpd.service.d/; echo -e '[Service]\\nRestart=always' > /etc/systemd/system/httpd.service.d/httpd.conf EXPOSE 80 CMD [ \"/sbin/init\" ]", "podman build --format=docker -t mysysd .", "setsebool -P container_manage_cgroup 1", "podman run -d --name=mysysd_run -p 80:80 mysysd", "podman run -d --name=mysysd -p 8081:80 mysysd", "sysctl net.ipv4.ip_unprivileged_port_start=80", "podman ps a282b0c2ad3d localhost/mysysd:latest /sbin/init 15 seconds ago Up 14 seconds ago 0.0.0.0:80->80/tcp mysysd_run", "curl localhost/index.html Successful Web Server Test", "microcontainer=USD(buildah from registry.access.redhat.com/ubi8/ubi-micro)", "micromount=USD(buildah mount USDmicrocontainer)", "yum install --installroot USDmicromount --releasever 8 --setopt install_weak_deps=false --nodocs -y httpd yum clean all --installroot USDmicromount", "buildah umount USDmicrocontainer", "buildah commit USDmicrocontainer ubi-micro-httpd", "podman images ubi-micro-httpd localhost/ubi-micro-httpd latest 7c557e7fbe9f 22 minutes ago 151 MB", "cat /usr/share/containers/mounts.conf /usr/share/rhel/secrets:/run/secrets", "podman run -it --name myubi registry.access.redhat.com/ubi8/ubi", "yum install --disablerepo= * --enablerepo=ubi-8-appstream-rpms --enablerepo=ubi-8-baseos-rpms bzip2", "yum install zsh", "yum install --enablerepo=codeready-builder-for-rhel-8-x86_64-rpms python38-devel", "yum repolist", "rpm -qa", "podman run -it --name myubimin registry.access.redhat.com/ubi8/ubi-minimal", "microdnf install bzip2 --setopt install_weak_deps=false", "microdnf install --enablerepo=codeready-builder-for-rhel-8-x86_64-rpms python38-devel --setopt install_weak_deps=false", "microdnf repolist", "rpm -qa", "podman run -it --name myubi registry.access.redhat.com/ubi8/ubi yum install bzip2", "podman run -it --name myubimin registry.access.redhat.com/ubi8/ubi-minimal microdnf install bzip2", "yum repolist", "microdnf repolist", "rpm -qa", "FROM registry.access.redhat.com/ubi8/ubi USER root LABEL maintainer=\"John Doe\" Update image RUN yum update --disablerepo=* --enablerepo=ubi-8-appstream-rpms --enablerepo=ubi-8-baseos-rpms -y && rm -rf /var/cache/yum RUN yum install --disablerepo=* --enablerepo=ubi-8-appstream-rpms --enablerepo=ubi-8-baseos-rpms httpd -y && rm -rf /var/cache/yum Add default Web page and expose port RUN echo \"The Web Server is Running\" > /var/www/html/index.html 
EXPOSE 80 # Start the service CMD [\"-D\", \"FOREGROUND\"] ENTRYPOINT [\"/usr/sbin/httpd\"]", "buildah bud -t johndoe/webserver . STEP 1: FROM registry.access.redhat.com/ubi8/ubi:latest STEP 2: USER root STEP 3: LABEL maintainer=\"John Doe\" STEP 4: RUN yum update --disablerepo=* --enablerepo=ubi-8-appstream-rpms --enablerepo=ubi-8-baseos-rpms -y Writing manifest to image destination Storing signatures --> f9874f27050 f9874f270500c255b950e751e53d37c6f8f6dba13425d42f30c2a8ef26b769f2", "podman run -d --name=myweb -p 80:80 johndoe/webserver bbe98c71d18720d966e4567949888dc4fb86eec7d304e785d5177168a5965f64", "curl http://localhost/index.html The Web Server is Running", "skopeo copy docker://registry.access.redhat.com/ubi8:8.1-397-source dir:$HOME/TEST Copying blob 477bc8106765 done Copying blob c438818481d3 done Writing manifest to image destination Storing signatures", "skopeo inspect dir:$HOME/TEST { \"Digest\": \"sha256:7ab721ef3305271bbb629a6db065c59bbeb87bc53e7cbf88e2953a1217ba7322\", \"RepoTags\": [], \"Created\": \"2020-02-11T12:14:18.612461174Z\", \"DockerVersion\": \"\", \"Labels\": null, \"Architecture\": \"amd64\", \"Os\": \"linux\", \"Layers\": [ \"sha256:1ae73d938ab9f11718d0f6a4148eb07d38ac1c0a70b1d03e751de8bf3c2c87fa\", \"sha256:9fe966885cb8712c47efe5ecc2eaa0797a0d5ffb8b119c4bd4b400cc9e255421\", \"sha256:61b2527a4b836a4efbb82dfd449c0556c0f769570a6c02e112f88f8bbcd90166\", \"sha256:cc56c782b513e2bdd2cc2af77b69e13df4ab624ddb856c4d086206b46b9b9e5f\", \"sha256:dcf9396fdada4e6c1ce667b306b7f08a83c9e6b39d0955c481b8ea5b2a465b32\", \"sha256:feb6d2ae252402ea6a6fca8a158a7d32c7e4572db0e6e5a5eab15d4e0777951e\" ], \"Env\": null }", "cd $HOME/TEST for f in $(ls); do tar xvf $f; done", "find blobs/ rpm_dir/ blobs/ blobs/sha256 blobs/sha256/10914f1fff060ce31388f5ab963871870535aaaa551629f5ad182384d60fdf82 rpm_dir/ rpm_dir/gzip-1.9-4.el8.src.rpm", "cat /etc/containers/registries.d/default.yaml docker: <registry>: lookaside: https://registry-lookaside.example.com lookaside-staging: file:///var/lib/containers/sigstore", "gpg --full-gen-key", "gpg --output <path>/key.gpg --armor --export <[email protected]>", "podman build -t <registry>/<namespace>/<image>", "podman push --sign-by <[email protected]> <registry>/<namespace>/<image>", "(cd /var/lib/containers/sigstore/; find .
"(cd /var/lib/containers/sigstore/; find . -type f) ./<image>@sha256=<digest>/signature-1", "rsync -a /var/lib/containers/sigstore <[email protected]>:/registry-lookaside/webroot/sigstore", "cat /etc/containers/registries.d/default.yaml docker: <registry>: lookaside: https://registry-lookaside.example.com lookaside-staging: file:///var/lib/containers/sigstore", "podman image trust set -f <path>/key.gpg <registry>/<namespace>", "cat /etc/containers/policy.json { \"transports\": { \"docker\": { \"<registry>/<namespace>\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"<path>/key.gpg\" } ] } } }", "podman pull <registry>/<namespace>/<image> Storing signatures e7d92cdc71feacf90708cb59182d0df1b911f8ae022d29e8e95d75ca6a99776a", "skopeo generate-sigstore-key --output-prefix myKey", "git clone -b v2.0.0 https://github.com/sigstore/cosign cd cosign make ./cosign", "./cosign generate-key-pair Private key written to cosign.key Public key written to cosign.pub", "docker: <registry>: use-sigstore-attachments: true", "podman build -t <registry>/<namespace>/<image>", "podman push --sign-by-sigstore-private-key ./myKey.private <registry>/<namespace>/<image>", "docker: <registry>: use-sigstore-attachments: true", "\"transports\": { \"docker\": { \"<registry>/<namespace>\": [ { \"type\": \"sigstoreSigned\", \"keyPath\": \"/some/path/to/cosign.pub\" } ] } }", "podman pull <registry>/<namespace>/<image>", "docker: <registry>: use-sigstore-attachments: true", "fulcio: fulcioURL: \"https://<your-fulcio-server>\" oidcMode: \"interactive\" oidcIssuerURL: \"https://<your-OIDC-provider>\" oidcClientID: \"sigstore\" rekorURL: \"https://<your-rekor-server>\"", "podman push --sign-by-sigstore=file.yml <registry>/<namespace>/<image>", "docker: <registry>: use-sigstore-attachments: true", "{ \"transports\": { \"docker\": { \"<registry>/<namespace>\": [ { \"type\": \"sigstoreSigned\", \"fulcio\": { \"caPath\": \"/path/to/local/CA/file\", \"oidcIssuer\": \"https://expected.OIDC.issuer/\", \"subjectEmail\": \"[email protected]\" }, \"rekorPublicKeyPath\": \"/path/to/local/public/key/file\", } ] } } }", "podman pull <registry>/<namespace>/<image>", "skopeo generate-sigstore-key --output-prefix myKey", "docker: <registry>: use-sigstore-attachments: true", "podman build -t <registry>/<namespace>/<image>", "privateKeyFile: \"/home/user/sigstore/myKey.private\" privateKeyPassphraseFile: \"/mnt/user/sigstore-myKey-passphrase\" rekorURL: \"https://<your-rekor-server>\"", "podman push --sign-by-sigstore=file.yml <registry>/<namespace>/<image>", "cosign verify <registry>/<namespace>/<image> --key myKey.pub", "{ \"transports\": { \"docker\": { \"<registry>/<namespace>\": [ { \"type\": \"sigstoreSigned\", \"rekorPublicKeyPath\": \"/path/to/local/public/key/file\", } ] } } }", "podman pull <registry>/<namespace>/<image>", "podman network ls NETWORK ID NAME VERSION PLUGINS 2f259bab93aa podman 0.4.0 bridge,portmap,firewall,tuning", "podman network inspect podman [ { \"cniVersion\": \"0.4.0\", \"name\": \"podman\", \"plugins\": [ { \"bridge\": \"cni-podman0\", \"hairpinMode\": true, \"ipMasq\": true, \"ipam\": { \"ranges\": [ [ { \"gateway\": \"10.88.0.1\", \"subnet\": \"10.88.0.0/16\" } ] ], \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"type\": \"host-local\" }, \"isGateway\": true, \"type\": \"bridge\" }, { \"capabilities\": { \"portMappings\": true }, \"type\": \"portmap\" }, { \"type\": \"firewall\" }, { \"type\": \"tuning\" } ] } ]", "podman network create mynet /etc/cni/net.d/mynet.conflist", "podman network ls NETWORK ID NAME VERSION
PLUGINS 2f259bab93aa podman 0.4.0 bridge,portmap,firewall,tuning 11c844f95e28 mynet 0.4.0 bridge,portmap,firewall,tuning,dnsname", "podman network connect mynet mycontainer", "podman inspect --format='{{.NetworkSettings.Networks}}' mycontainer map[podman:0xc00042ab40 mynet:0xc00042ac60]", "podman network disconnect mynet mycontainer", "podman inspect --format='{{.NetworkSettings.Networks}}' mycontainer map[podman:0xc000537440]", "podman network ls NETWORK ID NAME VERSION PLUGINS 2f259bab93aa podman 0.4.0 bridge,portmap,firewall,tuning 11c844f95e28 mynet 0.4.0 bridge,portmap,firewall,tuning,dnsname", "podman network rm mynet mynet", "podman network ls NETWORK ID NAME VERSION PLUGINS 2f259bab93aa podman 0.4.0 bridge,portmap,firewall,tuning", "podman network prune WARNING! This will remove all networks not used by at least one container. Are you sure you want to continue? [y/N] y", "podman network ls NETWORK ID NAME VERSION PLUGINS 2f259bab93aa podman 0.4.0 bridge,portmap,firewall,tuning", "podman pod create --name mypod 223df6b390b4ea87a090a4b5207f7b9b003187a6960bd37631ae9bc12c433aff The pod is in the initial state Created.", "podman pod ps POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID 223df6b390b4 mypod Created Less than a second ago 1 3afdcd93de3e", "podman ps -a --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD 3afdcd93de3e registry.access.redhat.com/ubi8/pause Less than a second ago Created 223df6b390b4-infra 223df6b390b4", "podman run -dt --name myubi --pod mypod registry.access.redhat.com/ubi8/ubi /bin/bash 5df5c48fea87860cf75822ceab8370548b04c78be9fc156570949013863ccf71", "podman pod ps POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID 223df6b390b4 mypod Running Less than a second ago 2 3afdcd93de3e", "podman ps -a --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD 5df5c48fea87 registry.access.redhat.com/ubi8/ubi:latest /bin/bash Less than a second ago Up Less than a second ago myubi 223df6b390b4 3afdcd93de3e registry.access.redhat.com/ubi8/pause Less than a second ago Up Less than a second ago 223df6b390b4-infra 223df6b390b4", "podman pod top mypod USER PID PPID %CPU ELAPSED TTY TIME COMMAND 0 1 0 0.000 24.077433518s ? 
0s /pause root 1 0 0.000 24.078146025s pts/0 0s /bin/bash", "podman pod stats -a --no-stream ID NAME CPU % MEM USAGE / LIMIT MEM % NET IO BLOCK IO PIDS a9f807ffaacd frosty_hodgkin -- 3.092MB / 16.7GB 0.02% -- / -- -- / -- 2 3b33001239ee sleepy_stallman -- -- / -- -- -- / -- -- / -- --", "podman pod inspect mypod { \"Id\": \"db99446fa9c6d10b973d1ce55a42a6850357e0cd447d9bac5627bb2516b5b19a\", \"Name\": \"mypod\", \"Created\": \"2020-09-08T10:35:07.536541534+02:00\", \"CreateCommand\": [ \"podman\", \"pod\", \"create\", \"--name\", \"mypod\" ], \"State\": \"Running\", \"Hostname\": \"mypod\", \"CreateCgroup\": false, \"CgroupParent\": \"/libpod_parent\", \"CgroupPath\": \"/libpod_parent/db99446fa9c6d10b973d1ce55a42a6850357e0cd447d9bac5627bb2516b5b19a\", \"CreateInfra\": false, \"InfraContainerID\": \"891c54f70783dcad596d888040700d93f3ead01921894bc19c10b0a03c738ff7\", \"SharedNamespaces\": [ \"uts\", \"ipc\", \"net\" ], \"NumContainers\": 2, \"Containers\": [ { \"Id\": \"891c54f70783dcad596d888040700d93f3ead01921894bc19c10b0a03c738ff7\", \"Name\": \"db99446fa9c6-infra\", \"State\": \"running\" }, { \"Id\": \"effc5bbcfe505b522e3bf8fbb5705a39f94a455a66fd81e542bcc27d39727d2d\", \"Name\": \"myubi\", \"State\": \"running\" } ] }", "podman pod stop mypod", "podman ps -a --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME 5df5c48fea87 registry.redhat.io/ubi8/ubi:latest /bin/bash About a minute ago Exited (0) 7 seconds ago myubi 223df6b390b4 mypod 3afdcd93de3e registry.access.redhat.com/ubi8/pause About a minute ago Exited (0) 7 seconds ago 223df6b390b4-infra 223df6b390b4 mypod", "podman pod rm mypod 223df6b390b4ea87a090a4b5207f7b9b003187a6960bd37631ae9bc12c433aff", "podman ps podman pod ps",
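The pod lifecycle shown above can be scripted end to end; a minimal sketch in which the name demo-pod is a hypothetical example, not from the original procedure:
"# Create a pod, run a container inside it, then tear both down (assumed names).
podman pod create --name demo-pod
podman run -dt --pod demo-pod registry.access.redhat.com/ubi8/ubi top
podman pod ps
podman pod stop demo-pod
podman pod rm demo-pod",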
"podman inspect --format='{{.NetworkSettings.IPAddress}}' <containerName>", "podman inspect --format='{{.NetworkSettings.Networks}}' <containerName>", "podman inspect --format='{{.NetworkSettings.Ports}}' <containerName>", "podman run -dt --name=web-container docker.io/library/httpd", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES b8c057333513 docker.io/library/httpd:latest httpd-foreground 4 seconds ago Up 5 seconds ago web-container", "podman inspect --format='{{.NetworkSettings.IPAddress}}' web-container 10.88.0.2", "podman run -it --name=myubi ubi8/ubi curl 10.88.0.2:80 <html><body><h1>It works!</h1></body></html>", "podman network inspect podman | grep bridge \"bridge\": \"cni-podman0\", \"type\": \"bridge\"", "ip addr show cni-podman0 6: cni-podman0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 62:af:a1:0a:ca:2e brd ff:ff:ff:ff:ff:ff inet 10.88.0.1/16 brd 10.88.255.255 scope global cni-podman0 valid_lft forever preferred_lft forever inet6 fe80::60af:a1ff:fe0a:ca2e/64 scope link valid_lft forever preferred_lft forever", "podman inspect --format='{{.NetworkSettings.IPAddress}}' web-container 10.88.0.2", "curl 10.88.0.2:80 <html><body><h1>It works!</h1></body></html>", "podman run -dt --name=web1 ubi8/httpd-24", "podman run -dt --name=web2 -P ubi8/httpd-24", "podman run -dt --name=web3 -p 8888:8080 ubi8/httpd-24", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES db23e8dabc74 registry.access.redhat.com/ubi8/httpd-24:latest /usr/bin/run-http... 23 seconds ago Up 23 seconds 8080/tcp, 8443/tcp web1 1824b8f0a64b registry.access.redhat.com/ubi8/httpd-24:latest /usr/bin/run-http... 18 seconds ago Up 18 seconds 0.0.0.0:33127->8080/tcp, 0.0.0.0:37679->8443/tcp web2 39de784d917a registry.access.redhat.com/ubi8/httpd-24:latest /usr/bin/run-http... 5 seconds ago Up 5 seconds 0.0.0.0:8888->8080/tcp, 8443/tcp web3", "podman inspect --format='{{.NetworkSettings.IPAddress}}' web1 podman inspect --format='{{.NetworkSettings.IPAddress}}' web3", "curl 10.88.0.2:8080 <title>Test Page for the HTTP Server on Red Hat Enterprise Linux</title>", "curl localhost:33127 <title>Test Page for the HTTP Server on Red Hat Enterprise Linux</title>", "curl 10.88.0.4:8080 <title>Test Page for the HTTP Server on Red Hat Enterprise Linux</title>", "podman run -d --net mynet --name receiver ubi8 sleep 3000", "podman run -it --rm --net mynet --name sender alpine ping receiver PING rcv01 (10.89.0.2): 56 data bytes 64 bytes from 10.89.0.2: seq=0 ttl=42 time=0.041 ms 64 bytes from 10.89.0.2: seq=1 ttl=42 time=0.125 ms 64 bytes from 10.89.0.2: seq=2 ttl=42 time=0.109 ms", "podman pod create --name=web-pod", "podman container run -d --pod web-pod --name=web-container docker.io/library/httpd", "podman ps --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME 58653cf0cf09 k8s.gcr.io/pause:3.5 4 minutes ago Up 3 minutes ago 4e61a300c194-infra 4e61a300c194 web-pod b3f4255afdb3 docker.io/library/httpd:latest httpd-foreground 3 minutes ago Up 3 minutes ago web-container 4e61a300c194 web-pod", "podman container run -it --rm --pod web-pod docker.io/library/fedora curl localhost <html><body><h1>It works!</h1></body></html>", "podman pod create --name=web-pod-publish -p 80:80", "podman pod ls POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS 26fe5de43ab3 publish-pod Created 5 seconds ago 7de09076d2b3 1", "podman container run -d --pod web-pod-publish --name=web-container docker.io/library/httpd", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7de09076d2b3 k8s.gcr.io/pause:3.5 About a minute ago Up 23 seconds ago 0.0.0.0:80->80/tcp 26fe5de43ab3-infra 088befb90e59 docker.io/library/httpd httpd-foreground 23 seconds ago Up 23 seconds ago 0.0.0.0:80->80/tcp web-container", "curl localhost:80 <html><body><h1>It works!</h1></body></html>", "podman network create pod-net /etc/cni/net.d/pod-net.conflist", "podman pod create --net pod-net --name web-pod", "podman run -d --pod web-pod --name=web-container docker.io/library/httpd", "podman ps -p CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME b7d6871d018c registry.access.redhat.com/ubi8/pause:latest 9 minutes ago Up 6 minutes ago a8e7360326ba-infra a8e7360326ba web-pod 645835585e24 docker.io/library/httpd:latest httpd-foreground 6 minutes ago Up 6 minutes ago web-container a8e7360326ba web-pod", "podman ps --format=\"{{.Networks}}\" pod-net", "cat /etc/profile.d/unauthenticated_http_proxy.sh export HTTP_PROXY=http://192.168.0.1:3128 export HTTPS_PROXY=http://192.168.0.1:3128 export NO_PROXY=example.com,172.5.0.0/16", "cat /etc/profile.d/authenticated_http_proxy.sh export HTTP_PROXY=http://USERNAME:[email protected]:3128 export HTTPS_PROXY=http://USERNAME:[email protected]:3128 export NO_PROXY=example.com,172.5.0.0/16", "podman run -d --name=myubi --ip=10.88.0.44 registry.access.redhat.com/ubi8/ubi efde5f0a8c723f70dd5cb5dc3d5039df3b962fae65575b08662e0d5b5f9fbe85", "podman inspect --format='{{.NetworkSettings.IPAddress}}' myubi 10.88.0.44",
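A hedged extension of the --ip example above: poll the assumed static address until the web server answers (rootful Podman is required for --ip; the httpd-24 image listens on 8080):
"# webstatic and 10.88.0.50 are assumed example values.
podman run -d --name=webstatic --ip=10.88.0.50 registry.access.redhat.com/ubi8/httpd-24
until curl -s 10.88.0.50:8080 > /dev/null; do sleep 1; done",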
"systemctl enable --now netavark-dhcp-proxy.socket Created symlink /etc/systemd/system/sockets.target.wants/netavark-dhcp-proxy.socket -> /usr/lib/systemd/system/netavark-dhcp-proxy.socket.", "cat /usr/lib/systemd/system/netavark-dhcp-proxy.socket [Unit] Description=Netavark DHCP proxy socket [Socket] ListenStream=%t/podman/nv-proxy.sock SocketMode=0660 [Install] WantedBy=sockets.target", "podman network create -d macvlan --interface-name <LAN_INTERFACE> mv1 mv1", "podman run --rm --network mv1 -d --name test alpine top 894ae3b6b1081aca2a5d90a9855568eaa533c08a174874be59569d4656f9bc45", "podman exec test ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 5a:30:72:bf:13:76 brd ff:ff:ff:ff:ff:ff inet 192.168.188.36/24 brd 192.168.188.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::5830:72ff:febf:1376/64 scope link valid_lft forever preferred_lft forever", "podman container inspect test --format {{.NetworkSettings.Networks.mv1.IPAddress}} 192.168.188.36", "podman info --format \"{{.Host.NetworkBackend}}\" cni", "systemctl enable --now cni-dhcp.socket Created symlink /etc/systemd/system/sockets.target.wants/cni-dhcp.socket -> /usr/lib/systemd/system/cni-dhcp.socket.", "cat /usr/lib/systemd/system/io.podman.dhcp.socket [Unit] Description=CNI DHCP service socket Documentation=https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp PartOf=cni-dhcp.service [Socket] ListenStream=/run/cni/dhcp.sock SocketMode=0660 SocketUser=root SocketGroup=root RemoveOnStop=true [Install] WantedBy=sockets.target", "systemctl status cni-dhcp.socket ● cni-dhcp.socket - CNI DHCP service socket Loaded: loaded (/usr/lib/systemd/system/cni-dhcp.socket; enabled; vendor preset: disabled) Active: active (listening) since Mon 2025-01-06 08:39:35 EST; 33s ago Docs: https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp Listen: /run/cni/dhcp.sock (Stream) Tasks: 0 (limit: 11125) Memory: 4.0K CGroup: /system.slice/cni-dhcp.socket", "cp /usr/share/containers/containers.conf /etc/containers/", "network_backend=\"netavark\"", "podman system reset", "reboot", "cat /etc/containers/containers.conf [network] network_backend=\"netavark\"", "cp /usr/share/containers/containers.conf /etc/containers/", "network_backend=\"cni\"", "podman system reset", "reboot", "cat /etc/containers/containers.conf [network] network_backend=\"cni\"",
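The backend switch can be verified by bracketing it with the same podman info query used earlier; a sketch, with the caveat already implied above that podman system reset deletes all existing containers, images, and networks:
"podman info --format \"{{.Host.NetworkBackend}}\"
# edit network_backend in /etc/containers/containers.conf, then:
podman system reset
reboot
podman info --format \"{{.Host.NetworkBackend}}\"",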
"podman ps -a --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD 5df5c48fea87 registry.access.redhat.com/ubi8/ubi:latest /bin/bash Less than a second ago Up Less than a second ago myubi 223df6b390b4 3afdcd93de3e k8s.gcr.io/pause:3.1 Less than a second ago Up Less than a second ago 223df6b390b4-infra 223df6b390b4", "podman generate kube mypod > mypod.yaml", "cat mypod.yaml # Generation of Kubernetes YAML is still under development! # Save the output of this file and use kubectl create -f to import it into Kubernetes. # Created with podman-1.6.4 apiVersion: v1 kind: Pod metadata: creationTimestamp: \"2020-06-09T10:31:56Z\" labels: app: mypod name: mypod spec: containers: - command: - /bin/bash env: - name: PATH value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - name: TERM value: xterm - name: HOSTNAME - name: container value: oci image: registry.access.redhat.com/ubi8/ubi:latest name: myubi resources: {} securityContext: allowPrivilegeEscalation: true capabilities: {} privileged: false readOnlyRootFilesystem: false tty: true workingDir: / status: {}", "oc create myapp --image=me/myapp:v1 -o yaml --dry-run > myapp.yaml", "podman play kube mypod.yaml Pod: b8c5b99ba846ccff76c3ef257e5761c2d8a5ca4d7ffa3880531aec79c0dacb22 Container: 848179395ebd33dd91d14ffbde7ae273158d9695a081468f487af4e356888ece", "podman pod ps POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID b8c5b99ba846 mypod Running 19 seconds ago 2 aa4220eaf4bb", "podman ps -a --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD 848179395ebd registry.access.redhat.com/ubi8/ubi:latest /bin/bash About a minute ago Up About a minute ago myubi b8c5b99ba846 aa4220eaf4bb k8s.gcr.io/pause:3.1 About a minute ago Up About a minute ago b8c5b99ba846-infra b8c5b99ba846", "oc create -f mypod.yaml",
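Because the generated YAML is self-contained, the same file can recreate the pod on another machine; a sketch assuming other_host is reachable over SSH and has Podman installed:
"podman generate kube mypod > mypod.yaml
scp mypod.yaml other_host:
ssh other_host podman play kube mypod.yaml",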
"├── mariadb-conf │ ├── Containerfile │ ├── my.cnf", "cat mariadb-conf/Containerfile FROM docker.io/library/mariadb COPY my.cnf /etc/mysql/my.cnf", "# Port or socket location where to connect port = 3306 socket = /run/mysqld/mysqld.sock # Import all .cnf files from the configuration directory [mariadbd] skip-host-cache skip-name-resolve bind-address = 127.0.0.1 !includedir /etc/mysql/mariadb.conf.d/ !includedir /etc/mysql/conf.d/", "cd mariadb-conf podman build -t mariadb-conf . cd .. STEP 1: FROM docker.io/library/mariadb Trying to pull docker.io/library/mariadb:latest Getting image source signatures Copying blob 7b1a6ab2e44d done Storing signatures STEP 2: COPY my.cnf /etc/mysql/my.cnf STEP 3: COMMIT mariadb-conf --> ffae584aa6e Successfully tagged localhost/mariadb-conf:latest ffae584aa6e733ee1cdf89c053337502e1089d1620ff05680b6818a96eec3c17", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE localhost/mariadb-conf latest b66fa0fa0ef2 57 seconds ago 416 MB", "podman pod create --name wordpresspod -p 8080:80", "podman run --detach --pod wordpresspod -e MYSQL_ROOT_PASSWORD=1234 -e MYSQL_DATABASE=mywpdb -e MYSQL_USER=mywpuser -e MYSQL_PASSWORD=1234 --name mydb localhost/mariadb-conf", "podman run --detach --pod wordpresspod -e WORDPRESS_DB_HOST=127.0.0.1 -e WORDPRESS_DB_NAME=mywpdb -e WORDPRESS_DB_USER=mywpuser -e WORDPRESS_DB_PASSWORD=1234 --name myweb docker.io/wordpress", "podman ps --pod -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME 9ea56f771915 k8s.gcr.io/pause:3.5 Less than a second ago Up Less than a second ago 0.0.0.0:8080->80/tcp 4b7f054a6f01-infra 4b7f054a6f01 wordpresspod 60e8dbbabac5 localhost/mariadb-conf:latest mariadbd Less than a second ago Up Less than a second ago 0.0.0.0:8080->80/tcp mydb 4b7f054a6f01 wordpresspod 045d3d506e50 docker.io/library/wordpress:latest apache2-foregroun... Less than a second ago Up Less than a second ago 0.0.0.0:8080->80/tcp myweb 4b7f054a6f01 wordpresspod", "curl http://localhost:8080/wp-admin/install.php <!DOCTYPE html> <html lang=\"en-US\" xml:lang=\"en-US\"> <head> </head> <body class=\"wp-core-ui\"> <p id=\"logo\">WordPress</p> <h1>Welcome</h1>", "podman ps --pod -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME 9ea56f771915 k8s.gcr.io/pause:3.5 Less than a second ago Up Less than a second ago 0.0.0.0:8080->80/tcp 4b7f054a6f01-infra 4b7f054a6f01 wordpresspod 60e8dbbabac5 localhost/mariadb-conf:latest mariadbd Less than a second ago Up Less than a second ago 0.0.0.0:8080->80/tcp mydb 4b7f054a6f01 wordpresspod 045d3d506e50 docker.io/library/wordpress:latest apache2-foregroun... Less than a second ago Up Less than a second ago 0.0.0.0:8080->80/tcp myweb 4b7f054a6f01 wordpresspod", "podman generate kube wordpresspod > wordpresspod.yaml", "cat wordpresspod.yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: \"2021-12-09T15:09:30Z\" labels: app: wordpresspod name: wordpresspod spec: containers: - args: value: podman - name: MYSQL_PASSWORD value: \"1234\" - name: MYSQL_MAJOR value: \"8.0\" - name: MYSQL_VERSION value: 8.0.27-1debian10 - name: MYSQL_ROOT_PASSWORD value: \"1234\" - name: MYSQL_DATABASE value: mywpdb - name: MYSQL_USER value: mywpuser image: mariadb name: mydb ports: - containerPort: 80 hostPort: 8080 protocol: TCP - args: - name: WORDPRESS_DB_NAME value: mywpdb - name: WORDPRESS_DB_PASSWORD value: \"1234\" - name: WORDPRESS_DB_HOST value: 127.0.0.1 - name: WORDPRESS_DB_USER value: mywpuser image: docker.io/library/wordpress:latest name: myweb", "podman rmi localhost/mariadb-conf podman rmi docker.io/library/wordpress podman rmi docker.io/library/mysql", "podman play kube wordpresspod.yaml STEP 1/2: FROM docker.io/library/mariadb STEP 2/2: COPY my.cnf /etc/mysql/my.cnf COMMIT localhost/mariadb-conf:latest --> 428832c45d0 Successfully tagged localhost/mariadb-conf:latest 428832c45d07d78bb9cb34e0296a7dc205026c2fe4d636c54912c3d6bab7f399 Trying to pull docker.io/library/wordpress:latest Getting image source signatures Copying blob 99c3c1c4d556 done Storing signatures Pod: 3e391d091d190756e655219a34de55583eed3ef59470aadd214c1fc48cae92ac Containers: 6c59ebe968467d7fdb961c74a175c88cb5257fed7fb3d375c002899ea855ae1f 29717878452ff56299531f79832723d3a620a403f4a996090ea987233df0bc3d", "podman ps --pod -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME a1dbf7b5606c k8s.gcr.io/pause:3.5 3 minutes ago Up 2 minutes ago 0.0.0.0:8080->80/tcp 3e391d091d19-infra 3e391d091d19 wordpresspod 6c59ebe96846 localhost/mariadb-conf:latest mariadbd 2 minutes ago Exited (1) 2 minutes ago 0.0.0.0:8080->80/tcp wordpresspod-mydb 3e391d091d19 wordpresspod 29717878452f docker.io/library/wordpress:latest apache2-foregroun...
2 minutes ago Up 2 minutes ago 0.0.0.0:8080->80/tcp wordpresspod-myweb 3e391d091d19 wordpresspod", "curl http://localhost:8080/wp-admin/install.php <!DOCTYPE html> <html lang=\"en-US\" xml:lang=\"en-US\"> <head> </head> <body class=\"wp-core-ui\"> <p id=\"logo\">WordPress</p> <h1>Welcome</h1>", "podman play kube --down wordpresspod.yaml Pods stopped: 3e391d091d190756e655219a34de55583eed3ef59470aadd214c1fc48cae92ac Pods removed: 3e391d091d190756e655219a34de55583eed3ef59470aadd214c1fc48cae92ac", "podman ps --pod -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME", "cat USDHOME/.config/containers/systemd/mysleep.container [Unit] Description=The sleep container After=local-fs.target [Container] Image=registry.access.redhat.com/ubi8-minimal:latest Exec=sleep 1000 [Install] # Start by default on boot WantedBy=multi-user.target default.target", "systemctl --user daemon-reload", "systemctl --user status mysleep.service ○ mysleep.service - The sleep container Loaded: loaded (/home/username/.config/containers/systemd/mysleep.container; generated) Active: inactive (dead)", "systemctl --user start mysleep.service", "systemctl --user status mysleep.service ● mysleep.service - The sleep container Loaded: loaded (/home/username/.config/containers/systemd/mysleep.container; generated) Active: active (running) since Thu 2023-02-09 18:07:23 EST; 2s ago Main PID: 265651 (conmon) Tasks: 3 (limit: 76815) Memory: 1.6M CPU: 94ms CGroup:", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 421c8293fc1b registry.access.redhat.com/ubi8-minimal:latest sleep 1000 30 seconds ago Up 10 seconds ago systemd-mysleep", "systemctl enable <service>", "systemctl --user enable <service>", "loginctl enable-linger <username>", "systemctl --user daemon-reload", "systemctl --user enable container.service", "systemctl --user start container.service", "systemctl --user status container.service ● container.service - Podman container.service Loaded: loaded (/home/user/.config/systemd/user/container.service; enabled; vendor preset: enabled) Active: active (running) since Wed 2020-09-16 11:56:57 CEST; 8s ago Docs: man:podman-generate-systemd(1) Process: 80602 ExecStart=/usr/bin/podman run --conmon-pidfile //run/user/1000/container.service-pid --cidfile //run/user/1000/container.service-cid -d ubi8-minimal:> Process: 80601 ExecStartPre=/usr/bin/rm -f //run/user/1000/container.service-pid //run/user/1000/container.service-cid (code=exited, status=0/SUCCESS) Main PID: 80617 (conmon) CGroup: /user.slice/user-1000.slice/user@1000.service/container.service ├─ 2870 /usr/bin/podman ├─80612 /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox --enable-seccomp -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-> ├─80614 /usr/bin/fuse-overlayfs -o lowerdir=/home/user/.local/share/containers/storage/overlay/l/YJSPGXM2OCDZPLMLXJOW3NRF6Q:/home/user/.local/share/contain> ├─80617 /usr/bin/conmon --api-version 1 -c cbc75d6031508dfd3d78a74a03e4ace1732b51223e72a2ce4aa3bfe10a78e4fa -u cbc75d6031508dfd3d78a74a03e4ace1732b51223e72> └─cbc75d6031508dfd3d78a74a03e4ace1732b51223e72a2ce4aa3bfe10a78e4fa └─80626 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1d", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES f20988d59920 registry.access.redhat.com/ubi8-minimal:latest top 12 seconds ago Up 11 seconds ago funny_zhukovsky", "systemctl --user stop container.service", "podman create --name myubi registry.access.redhat.com/ubi8:latest sleep infinity
0280afe98bb75a5c5e713b28de4b7c5cb49f156f1cce4a208f13fee2f75cb453", "podman generate systemd --name myubi > ~/.config/systemd/user/container-myubi.service", "cat ~/.config/systemd/user/container-myubi.service container-myubi.service autogenerated by Podman 3.3.1 Wed Sep 8 20:34:46 CEST 2021 [Unit] Description=Podman container-myubi.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor=/run/user/1000/containers [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStart=/usr/bin/podman start myubi ExecStop=/usr/bin/podman stop -t 10 myubi ExecStopPost=/usr/bin/podman stop -t 10 myubi PIDFile=/run/user/1000/containers/overlay-containers/9683103f58a32192c84801f0be93446cb33c1ee7d9cdda225b78049d7c5deea4/userdata/conmon.pid Type=forking [Install] WantedBy=multi-user.target default.target", "podman pull registry.access.redhat.com/ubi8/httpd-24", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8/httpd-24 latest 8594be0a0b57 2 weeks ago 462 MB", "podman create --name httpd -p 8080:8080 registry.access.redhat.com/ubi8/httpd-24 cdb9f981cf143021b1679599d860026b13a77187f75e46cc0eac85293710a4b1", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES cdb9f981cf14 registry.access.redhat.com/ubi8/httpd-24:latest /usr/bin/run-http... 5 minutes ago Created 0.0.0.0:8080->8080/tcp httpd", "podman generate systemd --new --files --name httpd /root/container-httpd.service", "cat /root/container-httpd.service container-httpd.service autogenerated by Podman 3.3.1 Wed Sep 8 20:41:44 CEST 2021 [Unit] Description=Podman container-httpd.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor=%t/containers [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStartPre=/bin/rm -f %t/%n.ctr-id ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon --rm -d --replace --name httpd -p 8080:8080 registry.access.redhat.com/ubi8/httpd-24 ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id Type=notify NotifyAccess=all [Install] WantedBy=multi-user.target default.target", "cp -Z container-httpd.service /etc/systemd/system", "systemctl daemon-reload systemctl enable --now container-httpd.service Created symlink /etc/systemd/system/multi-user.target.wants/container-httpd.service -> /etc/systemd/system/container-httpd.service. 
Created symlink /etc/systemd/system/default.target.wants/container-httpd.service -> /etc/systemd/system/container-httpd.service.", "systemctl status container-httpd.service ● container-httpd.service - Podman container-httpd.service Loaded: loaded (/etc/systemd/system/container-httpd.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-08-24 09:53:40 EDT; 1min 5s ago Docs: man:podman-generate-systemd(1) Process: 493317 ExecStart=/usr/bin/podman run --conmon-pidfile /run/container-httpd.pid --cidfile /run/container-httpd.ctr-id --cgroups=no-conmon -d --repla> Process: 493315 ExecStartPre=/bin/rm -f /run/container-httpd.pid /run/container-httpd.ctr-id (code=exited, status=0/SUCCESS) Main PID: 493435 (conmon)", "podman pod create --name systemd-pod 11d4646ba41b1fffa51c108cbdf97cfab3213f7bd9b3e1ca52fe81b90fed5577", "podman pod ps POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID 11d4646ba41b systemd-pod Created 40 seconds ago 1 8a428b257111 11d4646ba41b1fffa51c108cbdf97cfab3213f7bd9b3e1ca52fe81b90fed5577", "podman create --pod systemd-pod --name container0 registry.access.redhat.com/ubi8 top podman create --pod systemd-pod --name container1 registry.access.redhat.com/ubi8 top", "podman ps -a --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME 24666f47d9b2 registry.access.redhat.com/ubi8:latest top 3 minutes ago Created container0 3130f724e229 systemd-pod 56eb1bf0cdfe k8s.gcr.io/pause:3.2 4 minutes ago Created 3130f724e229-infra 3130f724e229 systemd-pod 62118d170e43 registry.access.redhat.com/ubi8:latest top 3 seconds ago Created container1 3130f724e229 systemd-pod", "podman generate systemd --files --name systemd-pod /home/user1/pod-systemd-pod.service /home/user1/container-container0.service /home/user1/container-container1.service", "cat pod-systemd-pod.service pod-systemd-pod.service autogenerated by Podman 3.3.1 Wed Sep 8 20:49:17 CEST 2021 [Unit] Description=Podman pod-systemd-pod.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor= Requires=container-container0.service container-container1.service Before=container-container0.service container-container1.service [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStart=/usr/bin/podman start bcb128965b8e-infra ExecStop=/usr/bin/podman stop -t 10 bcb128965b8e-infra ExecStopPost=/usr/bin/podman stop -t 10 bcb128965b8e-infra PIDFile=/run/user/1000/containers/overlay-containers/1dfdcf20e35043939ea3f80f002c65c00d560e47223685dbc3230e26fe001b29/userdata/conmon.pid Type=forking [Install] WantedBy=multi-user.target default.target", "cat container-container0.service container-container0.service autogenerated by Podman 3.3.1 Wed Sep 8 20:49:17 CEST 2021 [Unit] Description=Podman container-container0.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor=/run/user/1000/containers BindsTo=pod-systemd-pod.service After=pod-systemd-pod.service [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStart=/usr/bin/podman start container0 ExecStop=/usr/bin/podman stop -t 10 container0 ExecStopPost=/usr/bin/podman stop -t 10 container0 PIDFile=/run/user/1000/containers/overlay-containers/4bccd7c8616ae5909b05317df4066fa90a64a067375af5996fdef9152f6d51f5/userdata/conmon.pid Type=forking [Install] WantedBy=multi-user.target default.target", "cat container-container1.service",
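Once the generated unit files are installed (next step), systemd itself can confirm that the pod unit pulls in its container units; a hedged one-line check:
"systemctl --user list-dependencies pod-systemd-pod.service",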
"cp pod-systemd-pod.service container-container0.service container-container1.service USDHOME/.config/systemd/user", "systemctl enable --user pod-systemd-pod.service Created symlink /home/user1/.config/systemd/user/multi-user.target.wants/pod-systemd-pod.service -> /home/user1/.config/systemd/user/pod-systemd-pod.service. Created symlink /home/user1/.config/systemd/user/default.target.wants/pod-systemd-pod.service -> /home/user1/.config/systemd/user/pod-systemd-pod.service.", "systemctl is-enabled pod-systemd-pod.service enabled", "podman run --label \"io.containers.autoupdate=image\" --name myubi -dt registry.access.redhat.com/ubi8/ubi-init top bc219740a210455fa27deacc96d50a9e20516492f1417507c13ce1533dbdcd9d", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 76465a5e2933 registry.access.redhat.com/ubi8/ubi-init:latest top 24 seconds ago Up 23 seconds ago myubi", "podman generate systemd --new --files --name myubi /root/container-myubi.service", "cp -Z ~/container-myubi.service /usr/lib/systemd/system", "systemctl daemon-reload", "systemctl start container-myubi.service systemctl status container-myubi.service", "podman auto-update", "cat /usr/lib/systemd/system/podman-auto-update.service [Unit] Description=Podman auto-update service Documentation=man:podman-auto-update(1) Wants=network.target After=network-online.target [Service] Type=oneshot ExecStart=/usr/bin/podman auto-update [Install] WantedBy=multi-user.target default.target", "cat /usr/lib/systemd/system/podman-auto-update.timer [Unit] Description=Podman auto-update timer [Timer] OnCalendar=daily Persistent=true [Install] WantedBy=timers.target", "systemctl enable podman-auto-update.timer", "systemctl start podman-auto-update.timer", "systemctl list-timers --all NEXT LEFT LAST PASSED UNIT ACTIVATES Wed 2020-12-09 00:00:00 CET 9h left n/a n/a podman-auto-update.timer podman-auto-update.service", "- name: Configure Podman hosts: managed-node-01.example.com tasks: - name: Create a web application and a database ansible.builtin.include_role: name: rhel-system-roles.podman vars: podman_create_host_directories: true podman_firewall: - port: 8080-8081/tcp state: enabled - port: 12340/tcp state: enabled podman_selinux_ports: - ports: 8080-8081 setype: http_port_t podman_kube_specs: - state: started run_as_user: dbuser run_as_group: dbgroup kube_file_content: apiVersion: v1 kind: Pod metadata: name: db spec: containers: - name: db image: quay.io/linux-system-roles/mysql:5.6 ports: - containerPort: 1234 hostPort: 12340 volumeMounts: - mountPath: /var/lib/db:Z name: db volumes: - name: db hostPath: path: /var/lib/db - state: started run_as_user: webapp run_as_group: webapp kube_file_src: /path/to/webapp.yml", "ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "- name: Configure Podman hosts: managed-node-01.example.com tasks: - name: Start Apache server on port 8080 ansible.builtin.include_role: name: rhel-system-roles.podman vars: podman_firewall: - port: 8080/tcp state: enabled podman_kube_specs: - state: started kube_file_content: apiVersion: v1 kind: Pod metadata: name: ubi8-httpd spec: containers: - name: ubi8-httpd image: registry.access.redhat.com/ubi8/httpd-24 ports: - containerPort: 8080 hostPort: 8080 volumeMounts: - mountPath: /var/www/html:Z name: ubi8-html volumes: - name: ubi8-html persistentVolumeClaim: claimName: ubi8-html-volume", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml",
"cat ~/certificate.pem -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- cat ~/key.pem -----BEGIN PRIVATE KEY----- -----END PRIVATE KEY-----", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "root_password: <root_password> certificate: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- key: |- -----BEGIN PRIVATE KEY----- -----END PRIVATE KEY-----", "- name: Deploy a wordpress CMS with MySQL database hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and run the container ansible.builtin.include_role: name: rhel-system-roles.podman vars: podman_create_host_directories: true podman_activate_systemd_unit: false podman_quadlet_specs: - name: quadlet-demo type: network file_content: | [Network] Subnet=192.168.30.0/24 Gateway=192.168.30.1 Label=app=wordpress - file_src: quadlet-demo-mysql.volume - template_src: quadlet-demo-mysql.container.j2 - file_src: envoy-proxy-configmap.yml - file_src: quadlet-demo.yml - file_src: quadlet-demo.kube activate_systemd_unit: true podman_firewall: - port: 8000/tcp state: enabled - port: 9000/tcp state: enabled podman_secrets: - name: mysql-root-password-container state: present skip_existing: true data: \"{{ root_password }}\" - name: mysql-root-password-kube state: present skip_existing: true data: | apiVersion: v1 data: password: \"{{ root_password | b64encode }}\" kind: Secret metadata: name: mysql-root-password-kube - name: envoy-certificates state: present skip_existing: true data: | apiVersion: v1 data: certificate.key: {{ key | b64encode }} certificate.pem: {{ certificate | b64encode }} kind: Secret metadata: name: envoy-certificates", "ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "yum install cockpit-podman", "podman login registry.redhat.io Username: [email protected] Password: <password> Login Succeeded!", "podman pull registry.redhat.io/rhel8/skopeo", "podman run --rm registry.redhat.io/rhel8/skopeo skopeo inspect docker://registry.access.redhat.com/ubi8/ubi { \"Name\": \"registry.access.redhat.com/ubi8/ubi\", \"Labels\": { \"architecture\": \"x86_64\", \"name\": \"ubi8\", \"summary\": \"Provides the latest release of Red Hat Universal Base Image 8.\", \"url\": \"https://access.redhat.com/containers/#/registry.access.redhat.com/ubi8/images/8.2-347\", }, \"Architecture\": \"amd64\", \"Os\": \"linux\", \"Layers\": [ ], \"Env\": [ \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"container=oci\" ] }", "podman run --rm registry.redhat.io/rhel8/skopeo skopeo inspect --creds USDUSER:USDPASSWORD docker://USDIMAGE", "podman run --rm -v USDAUTHFILE:/auth.json registry.redhat.io/rhel8/skopeo skopeo inspect docker://USDIMAGE", "podman run --privileged --rm -v USDHOME/.local/share/containers/storage:/var/lib/containers/storage registry.redhat.io/rhel8/skopeo skopeo copy docker://registry.access.redhat.com/ubi8/ubi containers-storage:registry.access.redhat.com/ubi8/ubi", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8/ubi latest ecbc6f53bba0 8 weeks ago 211 MB",
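Because skopeo inspect emits JSON, its output pipes cleanly into jq for scripted checks; a sketch that assumes jq is installed on the host:
"podman run --rm registry.redhat.io/rhel8/skopeo skopeo inspect docker://registry.access.redhat.com/ubi8/ubi | jq -r '.Labels.name'",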
"podman login registry.redhat.io Username: [email protected] Password: <password> Login Succeeded!", "podman run --rm --device /dev/fuse -it registry.redhat.io/rhel8/buildah /bin/bash", "buildah from registry.access.redhat.com/ubi8 ubi8-working-container", "buildah run --isolation=chroot ubi8-working-container ls / bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv", "buildah images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8 latest ecbc6f53bba0 5 weeks ago 211 MB", "buildah containers CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME 0aaba7192762 * ecbc6f53bba0 registry.access.redhat.com/ub... ubi8-working-container", "buildah push ecbc6f53bba0 registry.example.com:5000/ubi8/ubi", "podman run --privileged --name=privileged_podman registry.access.redhat.com/ubi8/podman podman run ubi8 echo hello Resolved \"ubi8\" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf) Trying to pull registry.access.redhat.com/ubi8:latest Storing signatures hello", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 52537876caf4 registry.access.redhat.com/ubi8/podman podman run ubi8 e... 30 seconds ago Exited (0) 13 seconds ago privileged_podman", "podman run --name=unprivileged_podman --security-opt label=disable --user podman --device /dev/fuse registry.access.redhat.com/ubi8/podman podman run ubi8 echo hello", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a47b26290f43 podman run ubi8 e... 30 seconds ago Exited (0) 13 seconds ago unprivileged_podman", "podman login registry.redhat.io", "podman run --privileged --name podman_container -it registry.redhat.io/rhel8/podman /bin/bash", "vi Containerfile FROM registry.access.redhat.com/ubi8/ubi RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm RUN yum -y install moon-buggy && yum clean all CMD [\"/usr/bin/moon-buggy\"]", "podman build -t moon-buggy .", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE localhost/moon-buggy latest c97c58abb564 13 seconds ago 1.67 GB registry.access.redhat.com/ubi8/ubi latest 4199acc83c6a 132 seconds ago 213 MB", "podman run -it --name moon moon-buggy", "podman tag moon-buggy registry.example.com/moon-buggy", "podman push registry.example.com/moon-buggy", "yum -y install buildah", "buildah -h", "buildah from registry.access.redhat.com/ubi8/ubi Getting image source signatures Copying blob... Writing manifest to image destination Storing signatures ubi-working-container", "buildah images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8/ubi latest 272209ff0ae5 2 weeks ago 234 MB", "buildah containers CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME 01eab9588ae1 * 272209ff0ae5 registry.access.redhat.com/ub... ubi-working-container", "cat Containerfile FROM registry.access.redhat.com/ubi8/ubi ADD myecho /usr/local/bin ENTRYPOINT \"/usr/local/bin/myecho\"", "cat myecho echo \"This container works!\"", "chmod 755 myecho", "buildah bud -t myecho .
STEP 1: FROM registry.access.redhat.com/ubi8/ubi STEP 2: ADD myecho /usr/local/bin STEP 3: ENTRYPOINT \"/usr/local/bin/myecho\" STEP 4: COMMIT myecho Storing signatures", "buildah images REPOSITORY TAG IMAGE ID CREATED SIZE localhost/myecho latest b28cd00741b3 About a minute ago 234 MB", "podman run --name=myecho localhost/myecho This container works!", "podman ps -a 0d97517428d localhost/myecho 12 seconds ago Exited (0) 13 seconds ago myecho", "buildah from scratch working-container", "scratchmnt=USD(buildah mount working-container) echo USDscratchmnt /var/lib/containers/storage/overlay/be2eaecf9f74b6acfe4d0017dd5534fde06b2fa8de9ed875691f6ccc791c1836/merged", "yum install -y --releasever=8 --installroot=USDscratchmnt redhat-release", "yum install -y --setopt=reposdir=/etc/yum.repos.d --installroot=USDscratchmnt --setopt=cachedir=/var/cache/dnf httpd", "mkdir -p USDscratchmnt/var/www/html echo \"Your httpd container from scratch works!\" > USDscratchmnt/var/www/html/index.html", "buildah config --cmd \"/usr/sbin/httpd -DFOREGROUND\" working-container buildah config --port 80/tcp working-container buildah commit working-container localhost/myhttpd:latest", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE localhost/myhttpd latest 08da72792f60 2 minutes ago 121 MB", "podman run -p 8080:80 -d --name myhttpd 08da72792f60", "curl localhost:8080 Your httpd container from scratch works!", "buildah images REPOSITORY TAG IMAGE ID CREATED SIZE localhost/johndoe/webserver latest dc5fcc610313 46 minutes ago 263 MB docker.io/library/mynewecho latest fa2091a7d8b6 17 hours ago 234 MB docker.io/library/myecho2 latest 4547d2c3e436 6 days ago 234 MB localhost/myecho latest b28cd00741b3 6 days ago 234 MB localhost/ubi-micro-httpd latest c6a7678c4139 12 days ago 152 MB registry.access.redhat.com/ubi8/ubi latest 272209ff0ae5 3 weeks ago 234 MB", "buildah rmi localhost/myecho", "buildah rmi docker.io/library/mynewecho docker.io/library/myecho2", "buildah rmi -a", "buildah rmi -f localhost/ubi-micro-httpd", "buildah images", "buildah run ubi-working-container cat /etc/redhat-release Red Hat Enterprise Linux release 8.4 (Ootpa)", "buildah inspect localhost/myecho { \"Type\": \"buildah 0.0.1\", \"FromImage\": \"localhost/myecho:latest\", \"FromImageID\": \"b28cd00741b38c92382ee806e1653eae0a56402bcd2c8d31bdcd36521bc267a4\", \"FromImageDigest\": \"sha256:0f5b06cbd51b464fabe93ce4fe852a9038cdd7c7b7661cd7efef8f9ae8a59585\", \"Config\": \"Entrypoint\": [ \"/bin/sh\", \"-c\", \"\\\"/usr/local/bin/myecho\\\"\" ], }", "buildah from localhost/myecho", "buildah inspect ubi-working-container { \"Type\": \"buildah 0.0.1\", \"FromImage\": \"registry.access.redhat.com/ubi8/ubi:latest\", \"FromImageID\": \"272209ff0ae5fe54c119b9c32a25887e13625c9035a1599feba654aa7638262d\", \"FromImageDigest\": \"sha256:77623387101abefbf83161c7d5a0378379d0424b2244009282acb39d42f1fe13\", \"Config\": \"Container\": \"ubi-working-container\", \"ContainerID\": \"01eab9588ae1523746bb706479063ba103f6281ebaeeccb5dc42b70e450d5ad0\", \"ProcessLabel\": \"system_u:system_r:container_t:s0:c162,c1000\", \"MountLabel\": \"system_u:object_r:container_file_t:s0:c162,c1000\", }", "mycontainer=USD(buildah from localhost/myecho) echo USDmycontainer myecho-working-container", "mymount=USD(buildah mount USDmycontainer) echo USDmymount /var/lib/containers/storage/overlay/c1709df40031dda7c49e93575d9c8eebcaa5d8129033a58e5b6a95019684cc25/merged", "echo 'echo \"We modified this container.\"' >> USDmymount/usr/local/bin/myecho chmod +x USDmymount/usr/local/bin/myecho", "buildah commit USDmycontainer containers-storage:myecho2", "buildah images REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/library/myecho2 latest 4547d2c3e436 4 minutes ago 234 MB localhost/myecho latest b28cd00741b3 56 minutes ago 234 MB", "podman run --name=myecho2 docker.io/library/myecho2 This container works! We modified this container.", "cat newecho echo \"I changed this container\" chmod 755 newecho", "buildah from myecho:latest myecho-working-container-2", "buildah copy myecho-working-container-2 newecho /usr/local/bin", "buildah config --entrypoint \"/bin/sh -c /usr/local/bin/newecho\" myecho-working-container-2", "buildah run myecho-working-container-2 -- sh -c '/usr/local/bin/newecho' I changed this container", "buildah commit myecho-working-container-2 containers-storage:mynewecho", "buildah images REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/library/mynewecho latest fa2091a7d8b6 8 seconds ago 234 MB", "podman run -d -p 5000:5000 registry:2", "buildah push --tls-verify=false myecho:latest localhost:5000/myecho:latest Getting image source signatures Copying blob sha256:e4efd0 Writing manifest to image destination Storing signatures", "curl http://localhost:5000/v2/_catalog {\"repositories\":[\"myecho2\"]} curl http://localhost:5000/v2/myecho2/tags/list {\"name\":\"myecho2\",\"tags\":[\"latest\"]}", "skopeo inspect --tls-verify=false docker://localhost:5000/myecho:latest | less { \"Name\": \"localhost:5000/myecho\", \"Digest\": \"sha256:8999ff6050...\", \"RepoTags\": [ \"latest\" ], \"Created\": \"2021-06-28T14:44:05.919583964Z\", \"DockerVersion\": \"\", \"Labels\": { \"architecture\": \"x86_64\", \"authoritative-source-url\": \"registry.redhat.io\", }", "podman pull --tls-verify=false localhost:5000/myecho2 podman run localhost:5000/myecho2 This container works!", "buildah push --creds username:password docker.io/library/myecho:latest docker://testaccountXX/myecho:latest", "podman run docker.io/testaccountXX/myecho:latest This container works!", "buildah from docker.io/testaccountXX/myecho:latest myecho2-working-container-2 podman run myecho-working-container-2", "buildah containers CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME 05387e29ab93 * c37e14066ac7 docker.io/library/myecho:latest myecho-working-container", "buildah rm myecho-working-container 05387e29ab93151cf52e9c85c573f3e8ab64af1592b1ff9315db8a10a77d7c22", "buildah containers", "podman run -dt --name=hc-container -p 8080:8080 --health-cmd='curl http://localhost:8080 || exit 1' --health-interval=0 registry.access.redhat.com/ubi8/httpd-24", "podman inspect --format='{{json .State.Health.Status}}' hc-container healthy", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a680c6919fe localhost/hc-container:latest /usr/bin/run-http... 2 minutes ago Up 2 minutes (healthy) hc-container", "podman healthcheck run hc-container healthy", "cat Containerfile FROM registry.access.redhat.com/ubi8/httpd-24 EXPOSE 8080 HEALTHCHECK CMD curl http://localhost:8080 || exit 1", "podman build --format=docker -t hc-container . STEP 1/3: FROM registry.access.redhat.com/ubi8/httpd-24 STEP 2/3: EXPOSE 8080 --> 5aea97430fd STEP 3/3: HEALTHCHECK CMD curl http://localhost:8080 || exit 1 COMMIT hc-container Successfully tagged localhost/hc-container:latest a680c6919fe6bf1a79219a1b3d6216550d5a8f83570c36d0dadfee1bb74b924e", "podman run -dt --name=hc-container localhost/hc-container", "podman inspect --format='{{json .State.Health.Status}}' hc-container healthy", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a680c6919fe localhost/hc-container:latest /usr/bin/run-http... 2 minutes ago Up 2 minutes (healthy) hc-container", "podman healthcheck run hc-container healthy", "podman system df TYPE TOTAL ACTIVE SIZE RECLAIMABLE Images 3 2 1.085GB 233.4MB (0%) Containers 2 0 28.17kB 28.17kB (100%) Local Volumes 3 0 0B 0B (0%)", "podman system df -v Images space usage: REPOSITORY TAG IMAGE ID CREATED SIZE SHARED SIZE UNIQUE SIZE CONTAINERS registry.access.redhat.com/ubi8 latest b1e63aaae5cf 13 days 233.4MB 233.4MB 0B 0 registry.access.redhat.com/ubi8/httpd-24 latest 0d04740850e8 13 days 461.5MB 0B 461.5MB 1 registry.redhat.io/rhel8/podman latest dce10f591a2d 13 days 390.6MB 233.4MB 157.2MB 1 Containers space usage: CONTAINER ID IMAGE COMMAND LOCAL VOLUMES SIZE CREATED STATUS NAMES 311180ab99fb 0d04740850e8 /usr/bin/run-httpd 0 28.17kB 16 hours exited hc1 bedb6c287ed6 dce10f591a2d podman run ubi8 echo hello 0 0B 11 hours configured dazzling_tu Local Volumes space usage: VOLUME NAME LINKS SIZE 76de0efa83a3dae1a388b9e9e67161d28187e093955df185ea228ad0b3e435d0 0 0B 8a1b4658aecc9ff38711a2c7f2da6de192c5b1e753bb7e3b25e9bf3bb7da8b13 0 0B d9cab4f6ccbcf2ac3cd750d2efff9d2b0f29411d430a119210dd242e8be20e26 0 0B", "podman system info host: arch: amd64 buildahVersion: 1.22.3 cgroupControllers: [] cgroupManager: cgroupfs cgroupVersion: v1 conmon: package: conmon-2.0.29-1.module+el8.5.0+12381+e822eb26.x86_64 path: /usr/bin/conmon version: 'conmon version 2.0.29, commit: 7d0fa63455025991c2fc641da85922fde889c91b' cpus: 2 distribution: distribution: '\"rhel\"' version: \"8.5\" eventLogger: file hostname: localhost.localdomain idMappings: gidmap: - container_id: 0 host_id: 1000 size: 1 - container_id: 1 host_id: 100000 size: 65536 uidmap: - container_id: 0 host_id: 1000 size: 1 - container_id: 1 host_id: 100000 size: 65536 kernel: 4.18.0-323.el8.x86_64 linkmode: dynamic memFree: 352288768 memTotal: 2819129344 ociRuntime: name: runc package: runc-1.0.2-1.module+el8.5.0+12381+e822eb26.x86_64 path: /usr/bin/runc version: |- runc version 1.0.2 spec: 1.0.2-dev go: go1.16.7 libseccomp: 2.5.1 os: linux remoteSocket: path: /run/user/1000/podman/podman.sock security: apparmorEnabled: false capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT rootless: true seccompEnabled: true seccompProfilePath: /usr/share/containers/seccomp.json selinuxEnabled: true serviceIsRemote: false slirp4netns: executable: /usr/bin/slirp4netns package: slirp4netns-1.1.8-1.module+el8.5.0+12381+e822eb26.x86_64 version: |- slirp4netns version 1.1.8 commit: d361001f495417b880f20329121e3aa431a8f90f libslirp: 4.4.0 SLIRP_CONFIG_VERSION_MAX: 3 libseccomp: 2.5.1 swapFree: 3113668608 swapTotal: 3124752384 uptime: 11h 24m 12.52s (Approximately 0.46 days) registries: search: - registry.fedoraproject.org - registry.access.redhat.com - registry.centos.org - docker.io store: configFile: /home/user/.config/containers/storage.conf containerStore: number: 2 paused: 0
running: 0 stopped: 2 graphDriverName: overlay graphOptions: overlay.mount_program: Executable: /usr/bin/fuse-overlayfs Package: fuse-overlayfs-1.7.1-1.module+el8.5.0+12381+e822eb26.x86_64 Version: |- fusermount3 version: 3.2.1 fuse-overlayfs: version 1.7.1 FUSE library version 3.2.1 using FUSE kernel interface version 7.26 graphRoot: /home/user/.local/share/containers/storage graphStatus: Backing Filesystem: xfs Native Overlay Diff: \"false\" Supports d_type: \"true\" Using metacopy: \"false\" imageStore: number: 3 runRoot: /run/user/1000/containers volumePath: /home/user/.local/share/containers/storage/volumes version: APIVersion: 3.3.1 Built: 1630360721 BuiltTime: Mon Aug 30 23:58:41 2021 GitCommit: \"\" GoVersion: go1.16.7 OsArch: linux/amd64 Version: 3.3.1", "podman system prune WARNING! This will remove: - all stopped containers - all stopped pods - all dangling images - all build cache Are you sure you want to continue? [y/N] y", "podman run -q --rm --name=myubi registry.access.redhat.com/ubi8/ubi:latest", "now=USD(date --iso-8601=seconds) podman events --since=now --stream=false 2023-03-08 14:27:20.696167362 +0100 CET container create d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72fe09 (image=registry.access.redhat.com/ubi8/ubi:latest, name=myubi,...) 2023-03-08 14:27:20.652325082 +0100 CET image pull registry.access.redhat.com/ubi8/ubi:latest 2023-03-08 14:27:20.795695396 +0100 CET container init d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72fe09 (image=registry.access.redhat.com/ubi8/ubi:latest, name=myubi...) 2023-03-08 14:27:20.809205161 +0100 CET container start d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72fe09 (image=registry.access.redhat.com/ubi8/ubi:latest, name=myubi...) 2023-03-08 14:27:20.809903022 +0100 CET container attach d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72fe09 (image=registry.access.redhat.com/ubi8/ubi:latest, name=myubi...) 2023-03-08 14:27:20.831710446 +0100 CET container died d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72fe09 (image=registry.access.redhat.com/ubi8/ubi:latest, name=myubi...) 
2023-03-08 14:27:20.913786892 +0100 CET container remove d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72fe09 (image=registry.access.redhat.com/ubi8/ubi:latest, name=myubi...)", "journalctl --user -r SYSLOG_IDENTIFIER=podman Mar 08 14:27:20 fedora podman[129324]: 2023-03-08 14:27:20.913786892 +0100 CET m=+0.066920979 container remove Mar 08 14:27:20 fedora podman[129289]: 2023-03-08 14:27:20.696167362 +0100 CET m=+0.079089208 container create d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72f>", "podman events --filter event=create 2023-03-08 14:27:20.696167362 +0100 CET container create d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72fe09 (image=registry.access.redhat.com/ubi8/ubi:latest, name=myubi,...)", "journalctl --user -r PODMAN_EVENT=create Mar 08 14:27:20 fedora podman[129289]: 2023-03-08 14:27:20.696167362 +0100 CET m=+0.079089208 container create d4748226a2bcd271b1bc4b9f88b54e8271c13ffea9b30529968291c62d72f>", "cat ~/.config/containers/containers.conf [engine] events_container_create_inspect_data=true", "podman create registry.access.redhat.com/ubi8/ubi:latest 19524fe3c145df32d4f0c9af83e7964e4fb79fc4c397c514192d9d7620a36cd3", "now=USD(date --iso-8601=seconds) podman events --since USDnow --stream=false --format \"{{.ContainerInspectData}}\" | jq \".Config.CreateCommand\" [ \"/usr/bin/podman\", \"create\", \"registry.access.redhat.com/ubi8\" ]", "journalctl --user -r PODMAN_EVENT=create --all -o json | jq \".PODMAN_CONTAINER_INSPECT_DATA | fromjson\" | jq \".Config.CreateCommand\" [ \"/usr/bin/podman\", \"create\", \"registry.access.redhat.com/ubi8\" ]", "cat counter.py #!/usr/bin/python3 import http.server counter = 0 class handler(http.server.BaseHTTPRequestHandler): def do_GET(s): global counter s.send_response(200) s.send_header('Content-type', 'text/html') s.end_headers() s.wfile.write(b'%d\\n' % counter) counter += 1 server = http.server.HTTPServer(('', 8088), handler) server.serve_forever()", "cat Containerfile FROM registry.access.redhat.com/ubi8/ubi COPY counter.py /home/counter.py RUN useradd -ms /bin/bash counter RUN yum -y install python3 && chmod 755 /home/counter.py USER counter ENTRYPOINT /home/counter.py", "podman build . 
--tag counter", "podman run --name criu-test --detach counter", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e4f82fd84d48 localhost/counter:latest 5 seconds ago Up 4 seconds ago criu-test", "podman inspect criu-test --format \"{{.NetworkSettings.IPAddress}}\" 10.88.0.247", "curl 10.88.0.247:8088 0 curl 10.88.0.247:8088 1", "podman container checkpoint criu-test", "podman container restore --keep criu-test", "curl 10.88.0.247:8088 2 curl 10.88.0.247:8088 3 curl 10.88.0.247:8088 4", "podman container checkpoint criu-test --export /tmp/chkpt.tar.gz", "podman container restore --import /tmp/chkpt.tar.gz --name counter1 podman container restore --import /tmp/chkpt.tar.gz --name counter2 podman container restore --import /tmp/chkpt.tar.gz --name counter3", "podman ps -a --format \"{{.ID}} {{.Names}}\" a8b2e50d463c counter3 faabc5c27362 counter2 2ce648af11e5 counter1", "podman inspect counter1 --format \"{{.NetworkSettings.IPAddress}}\" 10.88.0.248 podman inspect counter2 --format \"{{.NetworkSettings.IPAddress}}\" 10.88.0.249 podman inspect counter3 --format \"{{.NetworkSettings.IPAddress}}\" 10.88.0.250", "curl 10.88.0.248:8088 4 curl 10.88.0.249:8088 4 curl 10.88.0.250:8088 4", "podman save --output counter.tar counter", "scp counter.tar other_host:", "ssh other_host podman load --input counter.tar", "podman run --name criu-test --detach counter", "podman inspect criu-test --format \"{{.NetworkSettings.IPAddress}}\" 10.88.0.247", "curl 10.88.0.247:8088 0 curl 10.88.0.247:8088 1", "podman container checkpoint criu-test --export /tmp/chkpt.tar.gz", "scp /tmp/chkpt.tar.gz other_host:/tmp/", "podman container restore --import /tmp/chkpt.tar.gz", "curl 10.88.0.247:8088 2", "toolbox create <mytoolbox>", "sudo toolbox create <mytoolbox> Created container: <mytoolbox> Enter with: toolbox enter", "[user@toolbox ~]USD toolbox list IMAGE ID IMAGE NAME CREATED fe0ae375f149 registry.access.redhat.com/ubi8/toolbox 5 weeks ago CONTAINER ID CONTAINER NAME CREATED STATUS IMAGE NAME 5245b924c2cb <mytoolbox> 7 minutes ago created registry.access.redhat.com/ubi8/toolbox:8.9-6", "[user@toolbox ~]USD toolbox enter <mytoolbox>", "⬢ [user@toolbox ~]USD cat /run/.containerenv engine=\"podman-4.8.2\" name=\"<mytoolbox>\" id=\"5245b924c2cb...\" image=\"registry.access.redhat.com/ubi8/toolbox\" imageid=\"fe0ae375f14919cbc0596142e3aff22a70973a36e5a165c75a86ea7ec5d8d65c\"", "⬢[user@toolbox ~]USD sudo yum install emacs gcc gdb", "⬢[user@toolbox ~]USD yum repoquery --info --installed <package_name>", "⬢[root@toolbox ~]# yum install systemd", "⬢[root@toolbox ~]# journalctl --boot -0 Jan 02 09:06:48 user-thinkpadp1gen4i.brq.csb kernel: microcode: updated ear> Jan 02 09:06:48 user-thinkpadp1gen4i.brq.csb kernel: Linux version 6.6.8-10> Jan 02 09:06:48 user-thinkpadp1gen4i.brq.csb kernel: Command line: BOOT_IMA> Jan 02 09:06:48 user-thinkpadp1gen4i.brq.csb kernel: x86/split lock detecti> Jan 02 09:06:48 user-thinkpadp1gen4i.brq.csb kernel: BIOS-provided physical>", "⬢[root@toolbox ~]# journalctl --boot -0 --dmesg Jan 02 09:06:48 user-thinkpadp1gen4i.brq.csb kernel: microcode: updated ear> Jan 02 09:06:48 user-thinkpadp1gen4i.brq.csb kernel: Linux version 6.6.8-10> Jan 02 09:06:48 user-thinkpadp1gen4i.brq.csb kernel: Command line: BOOT_IMA> Jan 02 09:06:48 user-thinkpadp1gen4i.brq.csb kernel: x86/split lock detecti> Jan 02 09:06:48 user-thinkpadp1gen4i.brq.csb kernel: BIOS-provided physical> Jan 02
09:06:48 user-thinkpadp1gen4i.brq.csb kernel: BIOS-e820: [mem 0x0000>", "⬢[root@toolbox ~]# yum install nmap", "⬢[root@toolbox ~]# nmap -sS scanme.nmap.org Starting Nmap 7.93 ( https://nmap.org ) at 2024-01-02 10:39 CET Stats: 0:01:01 elapsed; 0 hosts completed (0 up), 256 undergoing Ping Scan Ping Scan Timing: About 29.79% done; ETC: 10:43 (0:02:24 remaining) Nmap done: 256 IP addresses (0 hosts up) scanned in 206.45 seconds", "⬢ [user@toolbox ~]USD exit", "⬢ [user@toolbox ~]USD podman stop <mytoolbox>", "⬢ [user@toolbox ~]USD toolbox rm <mytoolbox>", "yum install openmpi", ". /etc/profile.d/modules.sh", "module load mpi/openmpi-x86_64", "echo \"module load mpi/openmpi-x86_64\" >> .bashrc", "cat Containerfile FROM registry.access.redhat.com/ubi8/ubi RUN yum -y install openmpi-devel wget && yum clean all RUN wget https://raw.githubusercontent.com/open-mpi/ompi/master/test/simple/ring.c && /usr/lib64/openmpi/bin/mpicc ring.c -o /home/ring && rm -f ring.c", "podman build --tag=mpi-ring .", "mpirun --mca orte_tmpdir_base /tmp/podman-mpirun podman run --env-host -v /tmp/podman-mpirun:/tmp/podman-mpirun --userns=keep-id --net=host --pid=host --ipc=host mpi-ring /home/ring Rank 2 has cleared MPI_Init Rank 2 has completed ring Rank 2 has completed MPI_Barrier Rank 3 has cleared MPI_Init Rank 3 has completed ring Rank 3 has completed MPI_Barrier Rank 1 has cleared MPI_Init Rank 1 has completed ring Rank 1 has completed MPI_Barrier Rank 0 has cleared MPI_Init Rank 0 has completed ring Rank 0 has completed MPI_Barrier", "podman pull registry.redhat.io/rhel8/rsyslog", "podman container runlabel install --display rhel8/rsyslog command: podman run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=registry.redhat.io/rhel8/rsyslog:latest -e NAME=rsyslog registry.redhat.io/rhel8/rsyslog:latest /bin/install.sh", "podman container runlabel install rhel8/rsyslog command: podman run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=registry.redhat.io/rhel8/rsyslog:latest -e NAME=rsyslog registry.redhat.io/rhel8/rsyslog:latest /bin/install.sh Creating directory at /host//etc/pki/rsyslog Creating directory at /host//etc/rsyslog.d Installing file at /host//etc/rsyslog.conf Installing file at /host//etc/sysconfig/rsyslog Installing file at /host//etc/logrotate.d/syslog", "podman container runlabel run --display rhel8/rsyslog command: podman run -d --privileged --name rsyslog --net=host --pid=host -v /etc/pki/rsyslog:/etc/pki/rsyslog -v /etc/rsyslog.conf:/etc/rsyslog.conf -v /etc/sysconfig/rsyslog:/etc/sysconfig/rsyslog -v /etc/rsyslog.d:/etc/rsyslog.d -v /var/log:/var/log -v /var/lib/rsyslog:/var/lib/rsyslog -v /run:/run -v /etc/machine-id:/etc/machine-id -v /etc/localtime:/etc/localtime -e IMAGE=registry.redhat.io/rhel8/rsyslog:latest -e NAME=rsyslog --restart=always registry.redhat.io/rhel8/rsyslog:latest /bin/rsyslog.sh", "podman container runlabel run rhel8/rsyslog command: podman run -d --privileged --name rsyslog --net=host --pid=host -v /etc/pki/rsyslog:/etc/pki/rsyslog -v /etc/rsyslog.conf:/etc/rsyslog.conf -v /etc/sysconfig/rsyslog:/etc/sysconfig/rsyslog -v /etc/rsyslog.d:/etc/rsyslog.d -v /var/log:/var/log -v /var/lib/rsyslog:/var/lib/rsyslog -v /run:/run -v /etc/machine-id:/etc/machine-id -v /etc/localtime:/etc/localtime -e IMAGE=registry.redhat.io/rhel8/rsyslog:latest -e NAME=rsyslog --restart=always registry.redhat.io/rhel8/rsyslog:latest /bin/rsyslog.sh 28a0d719ff179adcea81eb63cc90fcd09f1755d5edb121399068a4ea59bd0f53", "podman container runlabel uninstall --display rhel8/rsyslog command: podman 
run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=registry.redhat.io/rhel8/rsyslog:latest -e NAME=rsyslog registry.redhat.io/rhel8/rsyslog:latest /bin/uninstall.sh", "podman container runlabel uninstall rhel8/rsyslog command: podman run --rm --privileged -v /:/host -e HOST=/host -e IMAGE=registry.redhat.io/rhel8/rsyslog:latest -e NAME=rsyslog registry.redhat.io/rhel8/rsyslog:latest /bin/uninstall.sh", "yum install podman-remote", "systemctl enable --now podman.socket", "yum install podman-docker", "podman-remote info", "ls -al /var/run/docker.sock lrwxrwxrwx. 1 root root 23 Nov 4 10:19 /var/run/docker.sock -> /run/podman/podman.sock", "yum install podman-remote", "systemctl --user enable --now podman.socket", "export DOCKER_HOST=unix:///run/user/<uid>/podman//podman.sock", "systemctl --user status podman.socket ● podman.socket - Podman API Socket Loaded: loaded (/usr/lib/systemd/user/podman.socket; enabled; vendor preset: enabled) Active: active (listening) since Mon 2021-08-23 10:37:25 CEST; 9min ago Docs: man:podman-system-service(1) Listen: /run/user/1000/podman/podman.sock (Stream) CGroup: /user.slice/user-1000.slice/[email protected]/podman.socket", "podman-remote info", "yum install podman-remote", "podman system service -t 0 --log-level=debug", "podman-remote info", "curl -s --unix-socket /run/podman/podman.sock http://d/v1.0.0/libpod/info | jq { \"host\": { \"arch\": \"amd64\", \"buildahVersion\": \"1.15.0\", \"cgroupVersion\": \"v1\", \"conmon\": { \"package\": \"conmon-2.0.18-1.module+el8.3.0+7084+c16098dd.x86_64\", \"path\": \"/usr/bin/conmon\", \"version\": \"conmon version 2.0.18, commit: 7fd3f71a218f8d3a7202e464252aeb1e942d17eb\" }, ... \"version\": { \"APIVersion\": 1, \"Version\": \"2.0.0\", \"GoVersion\": \"go1.14.2\", \"GitCommit\": \"\", \"BuiltTime\": \"Thu Jan 1 01:00:00 1970\", \"Built\": 0, \"OsArch\": \"linux/amd64\" } }", "curl -XPOST --unix-socket /run/podman/podman.sock -v 'http://d/v1.0.0/images/create?fromImage=registry.access.redhat.com%2Fubi8%2Fubi' * Trying /run/podman/podman.sock * Connected to d (/run/podman/podman.sock) port 80 (#0) > POST /v1.0.0/images/create?fromImage=registry.access.redhat.com%2Fubi8%2Fubi HTTP/1.1 > Host: d > User-Agent: curl/7.61.1 > Accept: / > < HTTP/1.1 200 OK < Content-Type: application/json < Date: Tue, 20 Oct 2020 13:58:37 GMT < Content-Length: 231 < {\"status\":\"pulling image () from registry.access.redhat.com/ubi8/ubi:latest, registry.redhat.io/ubi8/ubi:latest\",\"error\":\"\",\"progress\":\"\",\"progressDetail\":{},\"id\":\"ecbc6f53bba0d1923ca9e92b3f747da8353a070fccbae93625bd8b47dbee772e\"} * Connection #0 to host d left intact", "curl --unix-socket /run/podman/podman.sock -v 'http://d/v1.0.0/libpod/images/json' | jq * Trying /run/podman/podman.sock % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to d (/run/podman/podman.sock) port 80 ( 0) > GET /v1.0.0/libpod/images/json HTTP/1.1 > Host: d > User-Agent: curl/7.61.1 > Accept: / > < HTTP/1.1 200 OK < Content-Type: application/json < Date: Tue, 20 Oct 2020 13:59:55 GMT < Transfer-Encoding: chunked < { [12498 bytes data] 100 12485 0 12485 0 0 2032k 0 --:--:-- --:--:-- --:--:-- 2438k * Connection #0 to host d left intact [ { \"Id\": \"ecbc6f53bba0d1923ca9e92b3f747da8353a070fccbae93625bd8b47dbee772e\", \"RepoTags\": [ \"registry.access.redhat.com/ubi8/ubi:latest\", \"registry.redhat.io/ubi8/ubi:latest\" ], \"Created\": \"2020-09-01T19:44:12.470032Z\", \"Size\": 
210838671, \"Labels\": { \"architecture\": \"x86_64\", \"build-date\": \"2020-09-01T19:43:46.041620\", \"com.redhat.build-host\": \"cpt-1008.osbs.prod.upshift.rdu2.redhat.com\", ... \"maintainer\": \"Red Hat, Inc.\", \"name\": \"ubi8\", ... \"summary\": \"Provides the latest release of Red Hat Universal Base Image 8.\", \"url\": \"https://access.redhat.com/containers/ /registry.access.redhat.com/ubi8/images/8.2-347\", }, \"Names\": [ \"registry.access.redhat.com/ubi8/ubi:latest\", \"registry.redhat.io/ubi8/ubi:latest\" ], ] } ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/building_running_and_managing_containers/index
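The checkpoint and restore walkthrough above condenses into a single script. The following is a minimal sketch rather than an excerpt from the chapter: it assumes the counter image built earlier, root privileges, and a CRIU-capable host, and the container names are placeholders. It queries port 8088, the port that counter.py actually binds.

#!/bin/bash
# Start the counter service and issue two requests (prints 0, then 1).
set -euo pipefail
podman run --name demo-counter --detach counter
ip=$(podman inspect demo-counter --format "{{.NetworkSettings.IPAddress}}")
curl -s "${ip}:8088"
curl -s "${ip}:8088"
# Checkpoint the container, capturing its in-memory state in an archive.
podman container checkpoint demo-counter --export /tmp/demo-chkpt.tar.gz
# Restore a copy under a new name; its first response continues at 2.
podman container restore --import /tmp/demo-chkpt.tar.gz --name demo-counter2
ip2=$(podman inspect demo-counter2 --format "{{.NetworkSettings.IPAddress}}")
curl -s "${ip2}:8088"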
Chapter 4. Publishing applications with .NET 9.0
Chapter 4. Publishing applications with .NET 9.0 .NET 9.0 applications can be published to use a shared system-wide version of .NET or to include .NET. The following methods exist for publishing .NET 9.0 applications: Self-contained deployment (SCD) - The application includes .NET. This method uses a runtime built by Microsoft. Framework-dependent deployment (FDD) - The application uses a shared system-wide version of .NET. Note When publishing an application for RHEL, Red Hat recommends using FDD, because it ensures that the application is using an up-to-date version of .NET, built by Red Hat, that uses a set of native dependencies. Prerequisites Existing .NET application. For more information about how to create a .NET application, see Creating an application using .NET . 4.1. Publishing .NET applications The following procedure outlines how to publish a framework-dependent application. Procedure Publish the framework-dependent application: Replace my-app with the name of the application you want to publish. Optional: If the application is for RHEL only, trim out the dependencies needed for other platforms: Replace architecture based on the platform you are using: For Intel: x64 For IBM Z and LinuxONE: s390x For 64-bit Arm: arm64 For IBM Power: ppc64le
[ "dotnet publish my-app -f net9.0", "dotnet publish my-app -f net9.0 -r rhel.8- architecture --self-contained false" ]
https://docs.redhat.com/en/documentation/net/9.0/html/getting_started_with_.net_on_rhel_8/assembly_publishing-apps-using-dotnet_getting-started-with-dotnet-on-rhel-8
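As a worked example of the framework-dependent flow, the following sketch creates a trivial console project, publishes it, and runs the published output. The project name sample-app is a placeholder, and the Release output path assumes the default publish configuration of recent .NET SDKs.

# Create, publish, and run a framework-dependent console application.
dotnet new console -o sample-app
dotnet publish sample-app -f net9.0
dotnet sample-app/bin/Release/net9.0/publish/sample-app.dll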
Chapter 3. Reference
Chapter 3. Reference Table 3.1. Terms and definitions Term Definition Federated identity An electronic identity linked across multiple distinct identity management systems. See the Wikipedia Federated identity reference . IdP Identity provider. See the Wikipedia Identity provider reference . SSO Single sign-on. True single sign-on allows the user to log in once and access services without re-entering authentication factors. See the Wikipedia Single_sign-on reference .
null
https://docs.redhat.com/en/documentation/red_hat_customer_portal/1/html/using_company_single_sign-on_integration/ref-ciam-user-sso_company-single-sign-on
36.6.4. IBM eServer iSeries Systems
36.6.4. IBM eServer iSeries Systems The /boot/vmlinitrd-<kernel-version> file is installed when you upgrade the kernel. However, you must use the dd command to configure the system to boot the new kernel: As root, issue the command cat /proc/iSeries/mf/side to determine the default side (either A, B, or C). As root, issue the following command, where <kernel-version> is the version of the new kernel and <side> is the side reported by the previous command: Begin testing the new kernel by rebooting the computer and watching the messages to ensure that the hardware is detected properly.
[ "dd if=/boot/vmlinitrd- <kernel-version> of=/proc/iSeries/mf/ <side> /vmlinux bs=8k" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/verifying_the_boot_loader-ibm_eserver_iseries_systems
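Combined into a single sequence, the procedure looks like the following sketch; the kernel version 2.6.9-5.EL is a placeholder for the version you actually installed.

# Determine the default side (A, B, or C), then write the new kernel to it.
side=$(cat /proc/iSeries/mf/side)
dd if=/boot/vmlinitrd-2.6.9-5.EL of=/proc/iSeries/mf/${side}/vmlinux bs=8k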
Chapter 21. Adding certification components
Chapter 21. Adding certification components After creating the new product listing, add the certification components for the newly created product listing. You can configure the following options for the newly added components: Note The component configurations differ for different product categories. Section 21.1, "Certification for Operators" Section 21.2, "Optional Qualifications for Operators" Section 21.3, "Repository Information for Operators" Section 21.4, "Component details for Operators" Section 21.5, "Contact Information for Operators" Section 21.6, "Associated products for Operators" Section 21.7, "Update Graph" To configure the options, go to the Components tab and click on any of the existing components. 21.1. Certification for Operators Verify the functionality of your CNI or CSI on Red Hat OpenShift Note This feature is applicable for CNI and CSI operators only. This feature allows you to run the certification test locally and share the test results with the Red Hat certification team. To verify the functionality of your specialized CNI or CSI operator: Click Go to Red Hat certification tool . A new certification case gets created on the Red Hat Certification portal after which you are redirected to the appropriate portal page. On the Summary tab, navigate to the Files section and click Upload to upload your test results. Add any relevant comments in the Discussions section, and then click Add Comment . Red Hat will review the results file you submitted and verify your specialized CNI or CSI operator. Upon successful validation, your operator will get approved and published. Additional resources For detailed information, see CNI and CSI workflow. Operator Certification To run the Operator certification suite, go to Testing Options. It displays two tabs to determine how to test and certify your operator. Test locally with OpenShift Use the OpenShift cluster of your choice for testing and certification. This option allows you to integrate the provided pipeline into your own workflows for continuous verification and access to comprehensive logs for a faster feedback loop. This is the recommended approach. For more information, see Running the certification test suite locally . Test with Red Hat's hosted pipeline This approach separates your OpenShift software testing from certification. After you have tested your operator on the version of OpenShift you wish to certify, you can use this approach if you don't want the comprehensive logs, or are not ready to include it in your own workflows. For more information, see Running the certification suite with the Red Hat hosted pipeline . Comparing certification testing options In the long term, Red Hat recommends using the "local testing" option, also referred to as the CI Pipeline, for testing your Operator. This method allows you to incorporate the tests into your CI/CD workflows and development processes, therefore ensuring the proper functioning of your product on the OpenShift platform and streamlining future updates and recertifications for the Operator. Although initially learning about the method and debugging errors may take some time, it is an advanced method and provides the best and quickest feedback. On the other hand, Red Hat recommends the hosted pipeline option, which runs on Red Hat infrastructure, for several use cases, such as when working on an urgent deadline, or when enough resources and time are not available to learn and use the tooling. 
You can use the hosted pipeline simultaneously with the CI/local pipeline as you learn to incorporate the local tooling long term. Most recent test run tab provides the latest test results, if any. The certification table provides the following information: Operator version Pull request Tested on Test result - Pass or Fail Created Click View all tests for detailed information about all the tests. It has two tabs: Test Results - Displays a summary of all the certification tests along with their results. Test Artifacts - Displays log files. 21.2. Optional Qualifications for Operators Note This tab is applicable only for Operator and Helm chart certifications. The Optional qualifications tab provides the option to verify if your product follows Red Hat's recommended guidelines and best practices for deploying workload on Red Hat OpenShift. When you select this tab, a functional certification is created where you will submit testing results for Red Hat's review. After successful verification your workload product gets listed as Certified with the Meets Best Practices badge on the Red Hat Ecosystem catalog. Additional resources For more information, see Best Practices . 21.3. Repository Information for Operators You can configure the registry and repository details by using the Repository information tab. Enter the required details in the following fields: Field name Description Container registry namespace Registry name set when the container was created. This field becomes non-editable when the container gets published. Outbound repository name Repository name that you have selected or the name obtained from your private registry in which your image is hosted, for example, ubi-minimal . Authorized GitHub user accounts It denotes the GitHub users who are allowed to submit operators for certification on behalf of your company. OpenShift Object YAML Use this option to add a docker config.json secret , if you are using a private container registry. Repository summary Repository summary obtained from the container image. Repository description Repository description obtained from the container image. After configuring all the mandatory fields click Save . Note All the fields marked with an asterisk * are required and must be completed before you can proceed with container certification. 21.4. Component details for Operators Configure the product component details by using this tab. Enter the required details in the following fields: Field name Description Image Type Operator bundle is selected by default. Application categories Select the respective application type of your software product. Project name Name of the project for internal purposes. After configuring all the mandatory fields click Save . 21.5. Contact Information for Operators Note Providing information for this tab is optional. In the Contact Information tab, enter the primary technical contact details of your product component. Optional: In the Technical contact email address field, enter the email address of the image maintainer. Optional: To add additional contacts for your component, click + Add new contact . Click Save . 21.6. 
Associated products for Operators The Associated Product tab provides the list of products that are associated with your product component along with the following information: Product Name Type Visibility - Published or Not Published Last Activity - number of days before you ran the test To add products to your component, perform the following: If you want to find a product by its name, enter the product name in the Search by name text box and click the search icon. If you are not sure of the product name, click Find a product . From the Add product dialog, select the required product from the Available products list box and click the forward arrow. The selected product is added to the Chosen products list box. Click Update attached products . Added products are listed in the Associated product list. Note All the fields marked with an asterisk * are required and must be completed before you can proceed with the certification. 21.7. Update Graph Select the OpenShift product version and Channel details for your component through this tab. Select the required version from the OpenShift Version list box. Select the required channel from the Channel list box. The Update graph table provides the following information: Version Update Paths Other Available Channels See Operator update documentation tile below the header, for more information about the upgrades. Note All the fields marked with asterisk * are required and must be completed before you can proceed with Operator bundle certification.
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/proc_adding-certification-components-for-operators_openshift-sw-cert-workflow-working-with-operators
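For the OpenShift Object YAML field described under Repository information, one common way to produce a docker config.json secret is to generate it from an existing registry login and paste the resulting YAML into the field. This is a generic sketch, not output from the certification tool; the secret name and namespace are placeholders.

# Render a dockerconfigjson secret as YAML from an existing registry login.
oc create secret generic my-pull-secret \
  --from-file=.dockerconfigjson=${HOME}/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson \
  --namespace=my-project \
  --dry-run=client -o yaml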
Chapter 1. Getting started overview
Chapter 1. Getting started overview Use Red Hat Streams for Apache Kafka to create and set up Kafka clusters, then connect your applications and services to those clusters. This guide describes how to install and start using Streams for Apache Kafka on OpenShift Container Platform. You can install the Streams for Apache Kafka operator directly from the OperatorHub in the OpenShift web console. The Streams for Apache Kafka operator understands how to install and manage Kafka components. Installing from the OperatorHub provides a standard configuration of Streams for Apache Kafka that allows you to take advantage of automatic updates. When the Streams for Apache Kafka operator is installed, it provides the resources to install instances of Kafka components. After installing a Kafka cluster, you can start producing and consuming messages. Note If you require more flexibility with your deployment, you can use the installation artifacts provided with Streams for Apache Kafka. For more information on using the installation artifacts, see Deploying and Managing Streams for Apache Kafka on OpenShift . 1.1. Prerequisites The following prerequisites are required for getting started with Streams for Apache Kafka. You have a Red Hat account. JDK 11 or later is installed. An OpenShift 4.12 to 4.15 cluster is available. The OpenShift oc command-line tool is installed and configured to connect to the running cluster. The steps to get started are based on using the OperatorHub in the OpenShift web console, but you'll also use the OpenShift oc CLI tool to perform certain operations. You'll need to connect to your OpenShift cluster using the oc tool. You can install the oc CLI tool from the web console by clicking the '?' help menu, then Command Line Tools . You can copy the required oc login details from the web console by clicking your profile name, then Copy login command . 1.2. Additional resources Strimzi Overview Deploying and Upgrading Streams for Apache Kafka on OpenShift
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/getting_started_with_streams_for_apache_kafka_on_openshift/getting_started_overview
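In practice, the CLI connection steps referenced above look like the following sketch; the token and API server URL are placeholders for the values copied from the web console's Copy login command output.

# Log in with the copied token, then confirm connectivity to the cluster.
oc login --token=sha256~<token> --server=https://api.<cluster-domain>:6443
oc version
oc get nodes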
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/planning_your_deployment/providing-feedback-on-red-hat-documentation_rhodf
Chapter 17. Running batch operations
Chapter 17. Running batch operations Data Grid Operator provides a Batch CR that lets you create Data Grid resources in bulk. Batch CR uses the Data Grid command line interface (CLI) in batch mode to carry out sequences of operations. Note Modifying a Batch CR instance has no effect. Batch operations are "one-time" events that modify Data Grid resources. To update .spec fields for the CR, or when a batch operation fails, you must create a new instance of the Batch CR. 17.1. Running inline batch operations Include your batch operations directly in a Batch CR if they do not require separate configuration artifacts. Procedure Create a Batch CR. Specify the name of the Data Grid cluster where you want the batch operations to run as the value of the spec.cluster field. Add each CLI command to run on a line in the spec.config field. apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: mybatch spec: cluster: infinispan config: | create cache --template=org.infinispan.DIST_SYNC mycache put --cache=mycache hello world put --cache=mycache hola mundo Apply your Batch CR. Wait for the Batch CR to succeed. 17.2. Creating ConfigMaps for batch operations Create a ConfigMap so that additional files, such as Data Grid cache configuration, are available for batch operations. Prerequisites For demonstration purposes, you should add some configuration artifacts to your host filesystem before you start the procedure: Create a /tmp/mybatch directory where you can add some files. Create a Data Grid cache configuration. Procedure Create a batch file that contains all commands you want to run. For example, the following batch file creates a cache named "mycache" and adds two entries to it: create cache mycache --file=/etc/batch/mycache.xml put --cache=mycache hello world put --cache=mycache hola mundo Important The ConfigMap is mounted in Data Grid pods at /etc/batch . You must prepend all --file= directives in your batch operations with that path. Ensure all configuration artifacts that your batch operations require are in the same directory as the batch file. Create a ConfigMap from the directory. 17.3. Running batch operations with ConfigMaps Run batch operations that include configuration artifacts. Prerequisites Create a ConfigMap that contains any files your batch operations require. Procedure Create a Batch CR that specifies the name of a Data Grid cluster as the value of the spec.cluster field. Set the name of the ConfigMap that contains your batch file and configuration artifacts with the spec.configMap field. cat > mybatch.yaml<<EOF apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: mybatch spec: cluster: infinispan configMap: mybatch-config-map EOF Apply your Batch CR. Wait for the Batch CR to succeed. 17.4. Batch status messages Verify and troubleshoot batch operations with the status.Phase field in the Batch CR. Phase Description Succeeded All batch operations have completed successfully. Initializing Batch operations are queued and resources are initializing. Initialized Batch operations are ready to start. Running Batch operations are in progress. Failed One or more batch operations were not successful. Failed operations Batch operations are not atomic. If a command in a batch script fails, it does not affect the other operations or cause them to rollback. Note If your batch operations have any server or syntax errors, you can view log messages in the Batch CR in the status.Reason field. 17.5. 
Example batch operations Use these example batch operations as starting points for creating and modifying Data Grid resources with the Batch CR. Note You can pass configuration files to Data Grid Operator only via a ConfigMap . The ConfigMap is mounted in Data Grid pods at /etc/batch so you must prepend all --file= directives with that path. 17.5.1. Caches Create multiple caches from configuration files. echo "creating caches..." create cache sessions --file=/etc/batch/infinispan-prod-sessions.xml create cache tokens --file=/etc/batch/infinispan-prod-tokens.xml create cache people --file=/etc/batch/infinispan-prod-people.xml create cache books --file=/etc/batch/infinispan-prod-books.xml create cache authors --file=/etc/batch/infinispan-prod-authors.xml echo "list caches in the cluster" ls caches Create a template from a file and then create caches from the template. echo "creating caches..." create cache mytemplate --file=/etc/batch/mycache.xml create cache sessions --template=mytemplate create cache tokens --template=mytemplate echo "list caches in the cluster" ls caches 17.5.2. Counters Use the Batch CR to create multiple counters that can increment and decrement to record the count of objects. You can use counters to generate identifiers, act as rate limiters, or track the number of times a resource is accessed. echo "creating counters..." create counter --concurrency-level=1 --initial-value=5 --storage=PERSISTENT --type=weak mycounter1 create counter --initial-value=3 --storage=PERSISTENT --type=strong mycounter2 create counter --initial-value=13 --storage=PERSISTENT --type=strong --upper-bound=10 mycounter3 echo "list counters in the cluster" ls counters 17.5.3. Protobuf schema Register Protobuf schema to query values in caches. Protobuf schema ( .proto files) provide metadata about custom entities and controls field indexing. echo "creating schema..." schema --upload=person.proto person.proto schema --upload=book.proto book.proto schema --upload=author.proto book.proto echo "list Protobuf schema" ls schemas 17.5.4. Tasks Upload tasks that implement org.infinispan.tasks.ServerTask or scripts that are compatible with the javax.script scripting API. echo "creating tasks..." task upload --file=/etc/batch/myfirstscript.js myfirstscript task upload --file=/etc/batch/mysecondscript.js mysecondscript task upload --file=/etc/batch/mythirdscript.js mythirdscript echo "list tasks" ls tasks Additional resources Data Grid CLI Guide
[ "apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: mybatch spec: cluster: infinispan config: | create cache --template=org.infinispan.DIST_SYNC mycache put --cache=mycache hello world put --cache=mycache hola mundo", "apply -f mybatch.yaml", "wait --for=jsonpath='{.status.phase}'=Succeeded Batch/mybatch", "mkdir -p /tmp/mybatch", "cat > /tmp/mybatch/mycache.xml<<EOF <distributed-cache name=\"mycache\" mode=\"SYNC\"> <encoding media-type=\"application/x-protostream\"/> <memory max-count=\"1000000\" when-full=\"REMOVE\"/> </distributed-cache> EOF", "create cache mycache --file=/etc/batch/mycache.xml put --cache=mycache hello world put --cache=mycache hola mundo", "ls /tmp/mybatch batch mycache.xml", "create configmap mybatch-config-map --from-file=/tmp/mybatch", "cat > mybatch.yaml<<EOF apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: mybatch spec: cluster: infinispan configMap: mybatch-config-map EOF", "apply -f mybatch.yaml", "wait --for=jsonpath='{.status.phase}'=Succeeded Batch/mybatch", "echo \"creating caches...\" create cache sessions --file=/etc/batch/infinispan-prod-sessions.xml create cache tokens --file=/etc/batch/infinispan-prod-tokens.xml create cache people --file=/etc/batch/infinispan-prod-people.xml create cache books --file=/etc/batch/infinispan-prod-books.xml create cache authors --file=/etc/batch/infinispan-prod-authors.xml echo \"list caches in the cluster\" ls caches", "echo \"creating caches...\" create cache mytemplate --file=/etc/batch/mycache.xml create cache sessions --template=mytemplate create cache tokens --template=mytemplate echo \"list caches in the cluster\" ls caches", "echo \"creating counters...\" create counter --concurrency-level=1 --initial-value=5 --storage=PERSISTENT --type=weak mycounter1 create counter --initial-value=3 --storage=PERSISTENT --type=strong mycounter2 create counter --initial-value=13 --storage=PERSISTENT --type=strong --upper-bound=10 mycounter3 echo \"list counters in the cluster\" ls counters", "echo \"creating schema...\" schema --upload=person.proto person.proto schema --upload=book.proto book.proto schema --upload=author.proto book.proto echo \"list Protobuf schema\" ls schemas", "echo \"creating tasks...\" task upload --file=/etc/batch/myfirstscript.js myfirstscript task upload --file=/etc/batch/mysecondscript.js mysecondscript task upload --file=/etc/batch/mythirdscript.js mythirdscript echo \"list tasks\" ls tasks" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_guide/batch-cr
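After applying a Batch CR, the outcome can be read back from the status fields described above. A minimal sketch, assuming the mybatch example and that the fields are exposed with lowercase names in the CR schema, as the wait command above suggests:

# Read the batch phase; Succeeded means all operations completed.
oc get batch mybatch -o jsonpath='{.status.phase}'
# On failure, the reason field carries the server or syntax error message.
oc get batch mybatch -o jsonpath='{.status.reason}'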
Chapter 4. Using SSL to protect connections to Red Hat Quay
Chapter 4. Using SSL to protect connections to Red Hat Quay 4.1. Using SSL/TLS To configure Red Hat Quay with a self-signed certificate, you must create a Certificate Authority (CA) and then use it to sign a certificate file and a private key file, named ssl.cert and ssl.key respectively. 4.2. Creating a Certificate Authority To configure Red Hat Quay with a self-signed certificate, you must first create a Certificate Authority (CA). Use the following procedure to create a Certificate Authority (CA). Procedure Generate the root CA key by entering the following command: USD openssl genrsa -out rootCA.key 2048 Generate the root CA certificate by entering the following command: USD openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com 4.2.1. Signing the certificate Use the following procedure to sign the certificate. Procedure Generate the server key by entering the following command: USD openssl genrsa -out ssl.key 2048 Generate a signing request by entering the following command: USD openssl req -new -key ssl.key -out ssl.csr Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []: Create a configuration file openssl.cnf , specifying the server hostname, for example: openssl.cnf [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112 Use the configuration file to generate the certificate ssl.cert : USD openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf 4.3. Configuring SSL/TLS using the command line interface Use the following procedure to configure SSL/TLS using the CLI. Prerequisites You have created a certificate authority and signed the certificate. Procedure Copy the certificate file and private key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively: cp ~/ssl.cert ~/ssl.key USDQUAY/config Change into the USDQUAY/config directory by entering the following command: USD cd USDQUAY/config Edit the config.yaml file and specify that you want Red Hat Quay to handle TLS/SSL: config.yaml ... SERVER_HOSTNAME: quay-server.example.com ... PREFERRED_URL_SCHEME: https ... Optional: Append the contents of the rootCA.pem file to the end of the ssl.cert file by entering the following command: USD cat rootCA.pem >> ssl.cert Stop the Quay container by entering the following command: USD sudo podman stop quay Restart the registry by entering the following command: 4.4. 
Configuring SSL/TLS using the Red Hat Quay UI Use the following procedure to configure SSL/TLS using the Red Hat Quay UI. To configure SSL/TLS using the command line interface, see "Configuring SSL/TLS using the command line interface". Prerequisites You have created a certificate authority and signed a certificate. Procedure Start the Quay container in configuration mode: In the Server Configuration section, select Red Hat Quay handles TLS for SSL/TLS. Upload the certificate file and private key file created earlier, ensuring that the Server Hostname matches the value used when the certificates were created. Validate and download the updated configuration. Stop the Quay container and then restart the registry by entering the following command: 4.5. Testing the SSL/TLS configuration using the CLI Your SSL/TLS configuration can be tested by using the command-line interface (CLI). Use the following procedure to test your SSL/TLS configuration using the CLI. Procedure Enter the following command to attempt to log in to the Red Hat Quay registry with SSL/TLS enabled: USD sudo podman login quay-server.example.com Example output Error: error authenticating creds for "quay-server.example.com": error pinging docker registry quay-server.example.com: Get "https://quay-server.example.com/v2/": x509: certificate signed by unknown authority Because Podman does not trust self-signed certificates, you must use the --tls-verify=false option: USD sudo podman login --tls-verify=false quay-server.example.com Example output Login Succeeded! In a subsequent section, you will configure Podman to trust the root Certificate Authority. 4.6. Testing the SSL/TLS configuration using a browser Use the following procedure to test your SSL/TLS configuration using a browser. Procedure Navigate to your Red Hat Quay registry endpoint, for example, https://quay-server.example.com . If configured correctly, the browser warns of the potential risk: Proceed to the log in screen. The browser notifies you that the connection is not secure. For example: In the following section, you will configure Podman to trust the root Certificate Authority. 4.7. Configuring Podman to trust the Certificate Authority Podman uses two paths to locate the Certificate Authority (CA) file: /etc/containers/certs.d/ and /etc/docker/certs.d/ . Use the following procedure to configure Podman to trust the CA. Procedure Copy the root CA file to one of /etc/containers/certs.d/ or /etc/docker/certs.d/ . Use the exact path determined by the server hostname, and name the file ca.crt : USD sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt Verify that you no longer need to use the --tls-verify=false option when logging in to your Red Hat Quay registry: USD sudo podman login quay-server.example.com Example output Login Succeeded! 4.8. Configuring the system to trust the certificate authority Use the following procedure to configure your system to trust the certificate authority. Procedure Enter the following command to copy the rootCA.pem file to the consolidated system-wide trust store: USD sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/ Enter the following command to update the system-wide trust store configuration: USD sudo update-ca-trust extract Optional. 
You can use the trust list command to ensure that the Quay server has been configured: USD trust list | grep quay label: quay-server.example.com Now, when you browse to the registry at https://quay-server.example.com , the lock icon shows that the connection is secure: To remove the rootCA.pem file from system-wide trust, delete the file and update the configuration: USD sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem USD sudo update-ca-trust extract USD trust list | grep quay More information can be found in the RHEL 9 documentation in the chapter Using shared system certificates .
[ "openssl genrsa -out rootCA.key 2048", "openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com", "openssl genrsa -out ssl.key 2048", "openssl req -new -key ssl.key -out ssl.csr", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []:", "[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112", "openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf", "cp ~/ssl.cert ~/ssl.key USDQUAY/config", "cd USDQUAY/config", "SERVER_HOSTNAME: quay-server.example.com PREFERRED_URL_SCHEME: https", "cat rootCA.pem >> ssl.cert", "sudo podman stop quay", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.12.8", "sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 registry.redhat.io/quay/quay-rhel8:v3.12.8 config secret", "sudo podman rm -f quay sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.12.8", "sudo podman login quay-server.example.com", "Error: error authenticating creds for \"quay-server.example.com\": error pinging docker registry quay-server.example.com: Get \"https://quay-server.example.com/v2/\": x509: certificate signed by unknown authority", "sudo podman login --tls-verify=false quay-server.example.com", "Login Succeeded!", "sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt", "sudo podman login quay-server.example.com", "Login Succeeded!", "sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust extract", "trust list | grep quay label: quay-server.example.com", "sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem", "sudo update-ca-trust extract", "trust list | grep quay" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/manage_red_hat_quay/using-ssl-to-protect-quay
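Before testing with Podman or a browser, the generated certificate can also be validated offline with openssl; a short sketch using the files created earlier:

# Confirm that ssl.cert chains to the self-signed root CA.
openssl verify -CAfile rootCA.pem ssl.cert
# Inspect the live TLS handshake against the running registry.
openssl s_client -connect quay-server.example.com:443 -CAfile rootCA.pem < /dev/null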
Chapter 20. Authenticating third-party clients through RH-SSO
Chapter 20. Authenticating third-party clients through RH-SSO To use the different remote services provided by Business Central or by KIE Server, your client, such as curl, wget, web browser, or a custom REST client, must authenticate through the RH-SSO server and have a valid token to perform the requests. To use the remote services, the authenticated user must have the following roles: rest-all for using Business Central remote services. kie-server for using the KIE Server remote services. Use the RH-SSO Admin Console to create these roles and assign them to the users that will consume the remote services. Your client can authenticate through RH-SSO using one of these options: Basic authentication, if it is supported by the client Token-based authentication 20.1. Basic authentication If you enabled basic authentication in the RH-SSO client adapter configuration for both Business Central and KIE Server, you can avoid the token grant and refresh calls and call the services as shown in the following examples: For web based remote repositories endpoint: For KIE Server: 20.2. Token-based authentication If you want a more secure option of authentication, you can consume the remote services from both Business Central and KIE Server by using a granted token provided by RH-SSO. Procedure In the RH-SSO Admin Console, click the Clients menu item and click Create to create a new client. The Add Client page opens. On the Add Client page, provide the required information to create a new client for your realm. For example: Client ID : kie-remote Client protocol : openid-connect Click Save to save your changes. Change the token settings in Realm Settings : In the RH-SSO Admin Console, click the Realm Settings menu item. Click the Tokens tab. Change the value for Access Token Lifespan to 15 minutes. This gives you enough time to get a token and invoke the service before it expires. Click Save to save your changes. After a public client for your remote clients is created, you can now obtain the token by making an HTTP request to the RH-SSO server's token endpoint using: The user in this command is a Business Central RH-SSO user. For more information, see Section 17.1, "Adding Red Hat Process Automation Manager users" . To view the token obtained from the RH-SSO server, use the following command: You can now use this token to authorize the remote calls. For example, if you want to check the internal Red Hat Process Automation Manager repositories, use the token as shown below:
[ "curl http://admin:password@localhost:8080/business-central/rest/repositories", "curl http://admin:password@localhost:8080/kie-server/services/rest/server/", "RESULT=`curl --data \"grant_type=password&client_id=kie-remote&username=admin&password=password\" http://localhost:8180/auth/realms/demo/protocol/openid-connect/token`", "TOKEN=`echo USDRESULT | sed 's/.*access_token\":\"//g' | sed 's/\".*//g'`", "curl -H \"Authorization: bearer USDTOKEN\" http://localhost:8080/business-central/rest/repositories" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/sso-third-party-proc_integrate-sso
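If jq is available, parsing the token endpoint response is less fragile than the sed pipeline shown above; an equivalent sketch:

# Request a token and extract it with jq instead of sed.
RESULT=$(curl --silent --data "grant_type=password&client_id=kie-remote&username=admin&password=password" http://localhost:8180/auth/realms/demo/protocol/openid-connect/token)
TOKEN=$(echo "$RESULT" | jq -r .access_token)
curl -H "Authorization: bearer $TOKEN" http://localhost:8080/business-central/rest/repositories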
Chapter 32. Associating secondary interfaces metrics to network attachments
Chapter 32. Associating secondary interfaces metrics to network attachments 32.1. Extending secondary network metrics for monitoring Secondary devices, or interfaces, are used for different purposes. It is important to have a way to classify them to be able to aggregate the metrics for secondary devices with the same classification. Exposed metrics contain the interface but do not specify where the interface originates. This is workable when there are no additional interfaces. However, if secondary interfaces are added, it can be difficult to use the metrics since it is hard to identify interfaces using only interface names. When adding secondary interfaces, their names depend on the order in which they are added, and different secondary interfaces might belong to different networks and can be used for different purposes. With pod_network_name_info it is possible to extend the current metrics with additional information that identifies the interface type. In this way, it is possible to aggregate the metrics and to add specific alarms to specific interface types. The network type is generated using the name of the related NetworkAttachmentDefinition , that in turn is used to differentiate different classes of secondary networks. For example, different interfaces belonging to different networks or using different CNIs use different network attachment definition names. 32.1.1. Network Metrics Daemon The Network Metrics Daemon is a daemon component that collects and publishes network related metrics. The kubelet is already publishing network related metrics you can observe. These metrics are: container_network_receive_bytes_total container_network_receive_errors_total container_network_receive_packets_total container_network_receive_packets_dropped_total container_network_transmit_bytes_total container_network_transmit_errors_total container_network_transmit_packets_total container_network_transmit_packets_dropped_total The labels in these metrics contain, among others: Pod name Pod namespace Interface name (such as eth0 ) These metrics work well until new interfaces are added to the pod, for example via Multus , as it is not clear what the interface names refer to. The interface label refers to the interface name, but it is not clear what that interface is meant for. In case of many different interfaces, it would be impossible to understand what network the metrics you are monitoring refer to. This is addressed by introducing the new pod_network_name_info described in the following section. 32.1.2. Metrics with network name This daemonset publishes a pod_network_name_info gauge metric, with a fixed value of 0 : pod_network_name_info{interface="net0",namespace="namespacename",network_name="nadnamespace/firstNAD",pod="podname"} 0 The network name label is produced using the annotation added by Multus. It is the concatenation of the namespace the network attachment definition belongs to, plus the name of the network attachment definition. The new metric alone does not provide much value, but combined with the network related container_network_* metrics, it offers better support for monitoring secondary networks. 
Using a PromQL query like the following ones, it is possible to get a new metric containing the value and the network name retrieved from the k8s.v1.cni.cncf.io/networks-status annotation: (container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )
[ "pod_network_name_info{interface=\"net0\",namespace=\"namespacename\",network_name=\"nadnamespace/firstNAD\",pod=\"podname\"} 0", "(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name)" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/associating-secondary-interfaces-metrics-to-network-attachments
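These expressions can also be evaluated from the command line against the cluster monitoring stack's query API. The following is a sketch only: the thanos-querier route in the openshift-monitoring namespace and the oc whoami -t token are common OpenShift monitoring conventions, not details taken from this chapter.

# Query the monitoring API for receive-bytes joined with the network name.
HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')
TOKEN=$(oc whoami -t)
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  --data-urlencode 'query=(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info )' \
  "https://${HOST}/api/v1/query"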
Migrating from version 3 to 4
Migrating from version 3 to 4 OpenShift Container Platform 4.10 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/migrating_from_version_3_to_4/index
9.4. The pacemaker_remote Service
9.4. The pacemaker_remote Service The pacemaker_remote service allows nodes not running corosync to integrate into the cluster and have the cluster manage their resources just as if they were real cluster nodes. Among the capabilities that the pacemaker_remote service provides are the following: The pacemaker_remote service allows you to scale beyond the Red Hat support limit of 32 nodes for RHEL 7.7. The pacemaker_remote service allows you to manage a virtual environment as a cluster resource and also to manage individual services within the virtual environment as cluster resources. The following terms are used to describe the pacemaker_remote service. cluster node - A node running the High Availability services ( pacemaker and corosync ). remote node - A node running pacemaker_remote to remotely integrate into the cluster without requiring corosync cluster membership. A remote node is configured as a cluster resource that uses the ocf:pacemaker:remote resource agent. guest node - A virtual guest node running the pacemaker_remote service. The virtual guest resource is managed by the cluster; it is both started by the cluster and integrated into the cluster as a remote node. pacemaker_remote - A service daemon capable of performing remote application management within remote nodes and guest nodes (KVM and LXC) in a Pacemaker cluster environment. This service is an enhanced version of Pacemaker's local resource management daemon (LRMD) that is capable of managing resources remotely on a node not running corosync. LXC - A Linux Container defined by the libvirt-lxc Linux container driver. A Pacemaker cluster running the pacemaker_remote service has the following characteristics. Remote nodes and guest nodes run the pacemaker_remote service (with very little configuration required on the virtual machine side). The cluster stack ( pacemaker and corosync ), running on the cluster nodes, connects to the pacemaker_remote service on the remote nodes, allowing them to integrate into the cluster. The cluster stack ( pacemaker and corosync ), running on the cluster nodes, launches the guest nodes and immediately connects to the pacemaker_remote service on the guest nodes, allowing them to integrate into the cluster. The key difference between the cluster nodes and the remote and guest nodes that the cluster nodes manage is that the remote and guest nodes are not running the cluster stack. This means the remote and guest nodes have the following limitations: they do not take place in quorum they do not execute fencing device actions they are not eligible to be the cluster's Designated Controller (DC) they do not themselves run the full range of pcs commands On the other hand, remote nodes and guest nodes are not bound to the scalability limits associated with the cluster stack. Other than these noted limitations, the remote and guest nodes behave just like cluster nodes in respect to resource management, and the remote and guest nodes can themselves be fenced. The cluster is fully capable of managing and monitoring resources on each remote and guest node: You can build constraints against them, put them in standby, or perform any other action you perform on cluster nodes with the pcs commands. Remote and guest nodes appear in cluster status output just as cluster nodes do. 9.4.1. 
Host and Guest Authentication The connection between cluster nodes and pacemaker_remote is secured using Transport Layer Security (TLS) with pre-shared key (PSK) encryption and authentication over TCP (using port 3121 by default). This means both the cluster node and the node running pacemaker_remote must share the same private key. By default this key must be placed at /etc/pacemaker/authkey on both cluster nodes and remote nodes. As of Red Hat Enterprise Linux 7.4, the pcs cluster node add-guest command sets up the authkey for guest nodes and the pcs cluster node add-remote command sets up the authkey for remote nodes. 9.4.2. Guest Node Resource Options When configuring a virtual machine or LXC resource to act as a guest node, you create a VirtualDomain resource, which manages the virtual machine. For descriptions of the options you can set for a VirtualDomain resource, see Table 9.3, "Resource Options for Virtual Domain Resources" . In addition to the VirtualDomain resource options, metadata options define the resource as a guest node and define the connection parameters. As of Red Hat Enterprise Linux 7.4, you should set these resource options with the pcs cluster node add-guest command. In releases earlier than 7.4, you can set these options when creating the resource. Table 9.4, "Metadata Options for Configuring KVM/LXC Resources as Remote Nodes" describes these metadata options. Table 9.4. Metadata Options for Configuring KVM/LXC Resources as Remote Nodes Field Default Description remote-node <none> The name of the guest node this resource defines. This both enables the resource as a guest node and defines the unique name used to identify the guest node. WARNING : This value cannot overlap with any resource or node IDs. remote-port 3121 Configures a custom port to use for the guest connection to pacemaker_remote remote-addr remote-node value used as host name The IP address or host name to connect to if remote node's name is not the host name of the guest remote-connect-timeout 60s Amount of time before a pending guest connection will time out 9.4.3. Remote Node Resource Options A remote node is defined as a cluster resource with ocf:pacemaker:remote as the resource agent. In Red Hat Enterprise Linux 7.4, you should create this resource with the pcs cluster node add-remote command. In releases earlier than 7.4, you can create this resource with the pcs resource create command. Table 9.5, "Resource Options for Remote Nodes" describes the resource options you can configure for a remote resource. Table 9.5. Resource Options for Remote Nodes Field Default Description reconnect_interval 0 Time in seconds to wait before attempting to reconnect to a remote node after an active connection to the remote node has been severed. This wait is recurring. If reconnect fails after the wait period, a new reconnect attempt will be made after observing the wait time. When this option is in use, Pacemaker will keep attempting to reach out and connect to the remote node indefinitely after each wait interval. server Server location to connect to. This can be an IP address or host name. port TCP port to connect to. 9.4.4. Changing Default Port Location If you need to change the default port location for either Pacemaker or pacemaker_remote , you can set the PCMK_remote_port environment variable that affects both of these daemons. This environment variable can be enabled by placing it in the /etc/sysconfig/pacemaker file as follows. 
When changing the default port used by a particular guest node or remote node, the PCMK_remote_port variable must be set in that node's /etc/sysconfig/pacemaker file, and the cluster resource creating the guest node or remote node connection must also be configured with the same port number (using the remote-port metadata option for guest nodes, or the port option for remote nodes). 9.4.5. Configuration Overview: KVM Guest Node This section provides a high-level summary of the steps to perform to have Pacemaker launch a virtual machine and to integrate that machine as a guest node, using libvirt and KVM virtual guests. Configure the VirtualDomain resources, as described in Section 9.3, "Configuring a Virtual Domain as a Resource" . On systems running Red Hat Enterprise Linux 7.3 and earlier, put the same encryption key with the path /etc/pacemaker/authkey on every cluster node and virtual machine with the following procedure. This secures remote communication and authentication. Enter the following set of commands on every node to create the authkey directory with secure permissions. The following command shows one method to create an encryption key. You should create the key only once and then copy it to all of the nodes. For Red Hat Enterprise Linux 7.4, enter the following commands on every virtual machine to install pacemaker_remote packages, start the pcsd service and enable it to run on startup, and allow TCP port 3121 through the firewall. For Red Hat Enterprise Linux 7.3 and earlier, run the following commands on every virtual machine to install pacemaker_remote packages, start the pacemaker_remote service and enable it to run on startup, and allow TCP port 3121 through the firewall. Give each virtual machine a static network address and unique host name, which should be known to all nodes. For information on setting a static IP address for the guest virtual machine, see the Virtualization Deployment and Administration Guide . For Red Hat Enterprise Linux 7.4 and later, use the following command to convert an existing VirtualDomain resource into a guest node. This command must be run on a cluster node and not on the guest node which is being added. In addition to converting the resource, this command copies the /etc/pacemaker/authkey to the guest node and starts and enables the pacemaker_remote daemon on the guest node. For Red Hat Enterprise Linux 7.3 and earlier, use the following command to convert an existing VirtualDomain resource into a guest node. This command must be run on a cluster node and not on the guest node which is being added. After creating the VirtualDomain resource, you can treat the guest node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the guest node as in the following commands, which are run from a cluster node. As of Red Hat Enterprise Linux 7.3, you can include guest nodes in groups, which allows you to group a storage device, file system, and VM. 9.4.6. Configuration Overview: Remote Node (Red Hat Enterprise Linux 7.4) This section provides a high-level summary of the steps to perform to configure a Pacemaker Remote node and to integrate that node into an existing Pacemaker cluster environment for Red Hat Enterprise Linux 7.4. On the node that you will be configuring as a remote node, allow cluster-related services through the local firewall.
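If the host uses firewalld , one way to allow these services is to enable the built-in high-availability service, as in the following commands:
# Allow the cluster-related services (including the pcsd and pacemaker_remote ports) through firewalld
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload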
Note If you are using iptables directly, or some other firewall solution besides firewalld , simply open the following ports: TCP ports 2224 and 3121. Install the pacemaker_remote daemon on the remote node. Start and enable pcsd on the remote node. If you have not already done so, authenticate pcs to the node you will be adding as a remote node. Add the remote node resource to the cluster with the following command. This command also syncs all relevant configuration files to the new node, starts the node, and configures it to start pacemaker_remote on boot. This command must be run on a cluster node and not on the remote node which is being added. After adding the remote resource to the cluster, you can treat the remote node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the remote node as in the following commands, which are run from a cluster node. Warning Never involve a remote node connection resource in a resource group, colocation constraint, or order constraint. Configure fencing resources for the remote node. Remote nodes are fenced the same way as cluster nodes. Configure fencing resources for use with remote nodes the same as you would with cluster nodes. Note, however, that remote nodes can never initiate a fencing action. Only cluster nodes are capable of actually executing a fencing operation against another node. 9.4.7. Configuration Overview: Remote Node (Red Hat Enterprise Linux 7.3 and earlier) This section provides a high-level summary of the steps to perform to configure a Pacemaker Remote node and to integrate that node into an existing Pacemaker cluster environment in a Red Hat Enterprise Linux 7.3 (and earlier) system. On the node that you will be configuring as a remote node, allow cluster-related services through the local firewall. Note If you are using iptables directly, or some other firewall solution besides firewalld , simply open the following ports: TCP ports 2224 and 3121. Install the pacemaker_remote daemon on the remote node. All nodes (both cluster nodes and remote nodes) must have the same authentication key installed for the communication to work correctly. If you already have a key on an existing node, use that key and copy it to the remote node. Otherwise, create a new key on the remote node. Enter the following set of commands on the remote node to create a directory for the authentication key with secure permissions. The following command shows one method to create an encryption key on the remote node. Start and enable the pacemaker_remote daemon on the remote node. On the cluster node, create a location for the shared authentication key with the same path as the authentication key on the remote node and copy the key into that directory. In this example, the key is copied from the remote node where the key was created. Enter the following command from a cluster node to create a remote resource. In this case the remote node is remote1 . After creating the remote resource, you can treat the remote node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the remote node as in the following commands, which are run from a cluster node. Warning Never involve a remote node connection resource in a resource group, colocation constraint, or order constraint. Configure fencing resources for the remote node.
Remote nodes are fenced the same way as cluster nodes. Configure fencing resources for use with remote nodes the same as you would with cluster nodes. Note, however, that remote nodes can never initiate a fencing action. Only cluster nodes are capable of actually executing a fencing operation against another node. 9.4.8. System Upgrades and pacemaker_remote As of Red Hat Enterprise Linux 7.3, if the pacemaker_remote service is stopped on an active Pacemaker Remote node, the cluster will gracefully migrate resources off the node before stopping the node. This allows you to perform software upgrades and other routine maintenance procedures without removing the node from the cluster. Once pacemaker_remote is shut down, however, the cluster will immediately try to reconnect. If pacemaker_remote is not restarted within the resource's monitor timeout, the cluster will consider the monitor operation as failed. If you wish to avoid monitor failures when the pacemaker_remote service is stopped on an active Pacemaker Remote node, you can use the following procedure to take the node out of the cluster before performing any system administration that might stop pacemaker_remote . Warning For Red Hat Enterprise Linux release 7.2 and earlier, if pacemaker_remote stops on a node that is currently integrated into a cluster, the cluster will fence that node. If the stop happens automatically as part of a yum update process, the system could be left in an unusable state (particularly if the kernel is also being upgraded at the same time as pacemaker_remote ). For Red Hat Enterprise Linux release 7.2 and earlier you must use the following procedure to take the node out of the cluster before performing any system administration that might stop pacemaker_remote . Stop the node's connection resource with the pcs resource disable resourcename command, which will move all services off the node. For guest nodes, this will also stop the VM, so the VM must be started outside the cluster (for example, using virsh ) to perform any maintenance. Perform the required maintenance. When ready to return the node to the cluster, re-enable the resource with the pcs resource enable resourcename command.
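A minimal sketch of this maintenance flow, assuming a connection resource named remote1 (the resource name is illustrative), run from a cluster node:
# Move all services off the node and stop monitoring its connection
pcs resource disable remote1
# ... perform the maintenance that stops pacemaker_remote, such as a yum update ...
# Return the node to the cluster once maintenance is complete
pcs resource enable remote1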
[ "#==#==# Pacemaker Remote # Specify a custom port for Pacemaker Remote connections PCMK_remote_port=3121", "mkdir -p --mode=0750 /etc/pacemaker chgrp haclient /etc/pacemaker", "dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1", "yum install pacemaker-remote resource-agents pcs systemctl start pcsd.service systemctl enable pcsd.service firewall-cmd --add-port 3121/tcp --permanent firewall-cmd --add-port 2224/tcp --permanent firewall-cmd --reload", "yum install pacemaker-remote resource-agents pcs systemctl start pacemaker_remote.service systemctl enable pacemaker_remote.service firewall-cmd --add-port 3121/tcp --permanent firewall-cmd --add-port 2224/tcp --permanent firewall-cmd --reload", "pcs cluster node add-guest hostname resource_id [ options ]", "pcs cluster remote-node add hostname resource_id [ options ]", "pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s pcs constraint location webserver prefers guest1", "firewall-cmd --permanent --add-service=high-availability success firewall-cmd --reload success", "yum install -y pacemaker-remote resource-agents pcs", "systemctl start pcsd.service systemctl enable pcsd.service", "pcs cluster auth remote1", "pcs cluster node add-remote remote1", "pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s pcs constraint location webserver prefers remote1", "firewall-cmd --permanent --add-service=high-availability success firewall-cmd --reload success", "yum install -y pacemaker-remote resource-agents pcs", "mkdir -p --mode=0750 /etc/pacemaker chgrp haclient /etc/pacemaker", "dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1", "systemctl enable pacemaker_remote.service systemctl start pacemaker_remote.service", "mkdir -p --mode=0750 /etc/pacemaker chgrp haclient /etc/pacemaker scp remote1:/etc/pacemaker/authkey /etc/pacemaker/authkey", "pcs resource create remote1 ocf:pacemaker:remote", "pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s pcs constraint location webserver prefers remote1" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/pacemaker_remote
function::kernel_string
function::kernel_string Name function::kernel_string - Retrieves string from kernel memory Synopsis Arguments addr The kernel address to retrieve the string from Description This function returns the null-terminated C string from a given kernel memory address. Reports an error on a string copy fault.
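As an illustrative sketch only (the probe point do_filp_open and its pathname parameter are assumptions about the running kernel, and kernel debuginfo must be installed), kernel_string can be used to print a string that lives in kernel memory:
# Print the first kernel-resident path name passed to do_filp_open, then exit
stap -e 'probe kernel.function("do_filp_open") { printf("%s\n", kernel_string($pathname->name)); exit() }'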
[ "kernel_string:string(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-kernel-string
4.278. sabayon
4.278. sabayon 4.278.1. RHBA-2011:1273 - sabayon bug fix update Updated sabayon packages that fix two bugs are now available for Red Hat Enterprise Linux 6. Sabayon is a tool to help system administrators and users change and maintain the default behavior of the GNOME desktop. These packages contain the graphical tools which a system administrator uses to manage Sabayon profiles. Bug Fixes BZ#674012 Previously, when a user configured a custom panel launcher in the profile, Sabayon terminated unexpectedly while getting details of the profile. With this update, a priority level has been set up for sorting in the Details view so that Sabayon no longer crashes while getting the profile details. BZ#654567 Previously, when a user created a file or folder on the desktop that contained an apostrophe ("'") in the name, Sabayon terminated unexpectedly when saving the profile. With this update, any apostrophe characters in the name are now escaped so that Sabayon no longer crashes and properly saves the profile. All sabayon users are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/sabayon
Chapter 20. Installation and Booting
Chapter 20. Installation and Booting The installer displays the number of multipath devices, and number of multipath devices selected, incorrectly Multipath devices are configured properly, but the installer displays the number of devices and number of selected devices incorrectly. There is no known workaround at this point. (BZ#914637) The installer displays the amount of disk space within multipath devices incorrectly Multipath devices are configured properly, but the installer displays disk space and number of devices incorrectly. There is no known workaround at this point. (BZ#1014425) The device.map configuration file generated by Anaconda is sometimes incorrect Due to limitations in the kernel, the device.map configuration file that is used to map BIOS drives to operating system devices might be generated incorrectly in certain situations, particularly when installing from a USB key. As a consequence, booting sometimes fails after installation. To work around this problem, manually update the device.map file in the /boot/grub directory. After updating device.map so that it correctly maps devices on the system, Red Hat Enterprise Linux 6 will boot as expected. (BZ#1253223) The ifup script incorrectly replaces manually-defined default routes If a default route is manually added to the routing table, the ifup script incorrectly replaces it, when setting up other interfaces, if the GATEWAY parameter is specified. To work around this bug, specify a non-zero metric for either the manually-added route, or when adding a route with ifup . (BZ#1090559) Upgrading Red Hat Enterprise Linux 6 on UEFI systems clears the boot loader password When upgrading Red Hat Enterprise Linux 6 on a system with UEFI firmware and a boot loader password set, the boot loader password is removed. As a consequence, modifying the boot record is possible without a password. To work around this problem, make a backup of the password settings from the /boot/efi/EFI/redhat/grub.conf configuration file before upgrading, and then restore the settings to the /boot/efi/EFI/redhat/grub.conf file in the new system. (BZ#1416653)
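For the boot loader password issue above, a minimal sketch of the backup-and-restore workaround might look like the following; the backup location is arbitrary:
# Before the upgrade: save a copy of the configuration that holds the password settings
cp /boot/efi/EFI/redhat/grub.conf /root/grub.conf.pre-upgrade
# After the upgrade: compare the files and copy the password settings back by hand
diff /root/grub.conf.pre-upgrade /boot/efi/EFI/redhat/grub.conf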
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_release_notes/known_issues_installation_and_booting
Migrating to Red Hat AMQ 7
Migrating to Red Hat AMQ 7 Red Hat AMQ 2020.Q4 For Use with Red Hat AMQ 7.8
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/migrating_to_red_hat_amq_7/index
Preface
Preface The guidelines in this document frequently refer to the following existing documentation: Upgrading from RHEL 6 to RHEL 7 If you are using SAP HANA, see the Red Hat Knowledgebase solution How do I upgrade from RHEL 6 to RHEL 7 with SAP HANA instead. Note that the upgrade path for SAP HANA is from RHEL 6.10 to RHEL 7.9. Upgrading from RHEL 7 to RHEL 8 This document also includes additional instructions specific to upgrading from RHEL 6 to RHEL 8.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/upgrading_from_rhel_6_to_rhel_8/pr01
Chapter 4. Specifics of Individual Software Collections
Chapter 4. Specifics of Individual Software Collections This chapter is focused on the specifics of certain Software Collections and provides additional details concerning these components. 4.1. Red Hat Developer Toolset Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. Red Hat Developer Toolset provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. Similarly to other Software Collections, an additional set of tools is installed into the /opt/ directory. These tools are enabled by the user on demand using the supplied scl utility. As with other Software Collections, these do not replace the Red Hat Enterprise Linux system versions of these tools, nor will they be used in preference to those system versions unless explicitly invoked using the scl utility. For an overview of features, refer to the Features section of the Red Hat Developer Toolset Release Notes . For detailed information regarding usage and changes in Red Hat Developer Toolset 9.1, see the Red Hat Developer Toolset User Guide . 4.2. MongoDB 3.6 The rh-mongodb36 Software Collection is available only for Red Hat Enterprise Linux 7. See Section 4.3, "MongoDB 3.4" for instructions on how to use MongoDB 3.4 on Red Hat Enterprise Linux 6. To install the rh-mongodb36 collection, type the following command as root : yum install rh-mongodb36 To run the MongoDB shell utility, type the following command: scl enable rh-mongodb36 'mongo' Note The rh-mongodb36-mongo-cxx-driver package has been built with the -std=gnu++14 option using GCC from Red Hat Developer Toolset 6. Binaries using the shared library for the MongoDB C++ Driver that use C++11 (or later) features have to be built also with Red Hat Developer Toolset 6 or later. See C++ compatibility details in the Red Hat Developer Toolset 6 User Guide . To start the MongoDB daemon, type the following command as root : systemctl start rh-mongodb36-mongod.service To start the MongoDB daemon on boot, type this command as root : systemctl enable rh-mongodb36-mongod.service To start the MongoDB sharding server, type the following command as root : systemctl start rh-mongodb36-mongos.service To start the MongoDB sharding server on boot, type this command as root : systemctl enable rh-mongodb36-mongos.service Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. 4.3. MongoDB 3.4 To install the rh-mongodb34 collection, type the following command as root : yum install rh-mongodb34 To run the MongoDB shell utility, type the following command: scl enable rh-mongodb34 'mongo' Note The rh-mongodb34-mongo-cxx-driver package has been built with the -std=gnu++14 option using GCC from Red Hat Developer Toolset 6. Binaries using the shared library for the MongoDB C++ Driver that use C++11 (or later) features have to be built also with Red Hat Developer Toolset 6. See C++ compatibility details in the Red Hat Developer Toolset 6 User Guide . MongoDB 3.4 on Red Hat Enterprise Linux 6 If you are using Red Hat Enterprise Linux 6, the following instructions apply to your system.
To start the MongoDB daemon, type the following command as root : service rh-mongodb34-mongod start To start the MongoDB daemon on boot, type this command as root : chkconfig rh-mongodb34-mongod on To start the MongoDB sharding server, type this command as root : service rh-mongodb34-mongos start To start the MongoDB sharding server on boot, type the following command as root : chkconfig rh-mongodb34-mongos on Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. MongoDB 3.4 on Red Hat Enterprise Linux 7 When using Red Hat Enterprise Linux 7, the following commands are applicable. To start the MongoDB daemon, type the following command as root : systemctl start rh-mongodb34-mongod.service To start the MongoDB daemon on boot, type this command as root : systemctl enable rh-mongodb34-mongod.service To start the MongoDB sharding server, type the following command as root : systemctl start rh-mongodb34-mongos.service To start the MongoDB sharding server on boot, type this command as root : systemctl enable rh-mongodb34-mongos.service Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. 4.4. Maven The rh-maven36 Software Collection, available only for Red Hat Enterprise Linux 7, provides a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting, and documentation from a central piece of information. To install the rh-maven36 Collection, type the following command as root : yum install rh-maven36 To enable this collection, type the following command at a shell prompt: scl enable rh-maven36 bash Global Maven settings, such as remote repositories or mirrors, can be customized by editing the /opt/rh/rh-maven36/root/etc/maven/settings.xml file. For more information about using Maven, refer to the Maven documentation . Usage of plug-ins is described in this section ; to find documentation regarding individual plug-ins, see the index of plug-ins . 4.5. Database Connectors Database connector packages provide the database client functionality, which is necessary for local or remote connection to a database server. Table 4.1, "Interoperability Between Languages and Databases" lists Software Collections with language runtimes that include connectors for certain database servers: yes - the combination is supported no - the combination is not supported Table 4.1. Interoperability Between Languages and Databases Database Language (Software Collection) MariaDB MongoDB MySQL PostgreSQL Redis SQLite3 rh-nodejs4 no no no no no no rh-nodejs6 no no no no no no rh-nodejs8 no no no no no no rh-nodejs10 no no no no no no rh-nodejs12 no no no no no no rh-perl520 yes no yes yes no no rh-perl524 yes no yes yes no no rh-perl526 yes no yes yes no no rh-perl530 yes no yes yes no yes rh-php56 yes yes yes yes no yes rh-php70 yes no yes yes no yes rh-php71 yes no yes yes no yes rh-php72 yes no yes yes no yes rh-php73 yes no yes yes no yes python27 yes yes yes yes no yes rh-python34 no yes no yes no yes rh-python35 yes yes yes yes no yes rh-python36 yes yes yes yes no yes rh-python38 yes no yes yes no yes rh-ror41 yes yes yes yes no yes rh-ror42 yes yes yes yes no yes rh-ror50 yes yes yes yes no yes rh-ruby25 yes yes yes yes no no rh-ruby26 yes yes yes yes no no rh-ruby27 yes yes yes yes no no
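As a quick way to spot-check one cell of the table above, you can enable a collection and import a client module; for example, SQLite3 support in rh-python36 can be exercised with the interpreter's standard sqlite3 module:
# Run a one-line import test inside the rh-python36 collection environment
scl enable rh-python36 -- python3 -c 'import sqlite3; print(sqlite3.sqlite_version)'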
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.5_release_notes/chap-Individual_Collections
Chapter 5. Installing a three-node cluster on AWS
Chapter 5. Installing a three-node cluster on AWS In OpenShift Container Platform version 4.17, you can install a three-node cluster on Amazon Web Services (AWS). A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. Note Deploying a three-node cluster using an AWS Marketplace image is not supported. 5.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in the cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 5.2. Next steps Installing a cluster on AWS with customizations Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates
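Before running the installation program, a quick sanity check of both settings described above might look like the following; <installation_directory> is a placeholder for your actual directory:
# Confirm the compute stanza sets replicas to 0
grep -A2 'name: worker' install-config.yaml
# For user-provisioned infrastructure, confirm the control plane is schedulable
grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml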
[ "apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_aws/installing-aws-three-node
Chapter 6. Working with containerized services
Chapter 6. Working with containerized services This chapter provides some examples of commands to manage containers and how to troubleshoot your OpenStack Platform containers. 6.1. Managing containerized services Red Hat OpenStack Platform (RHOSP) runs services in containers on the undercloud and overcloud nodes. In certain situations, you might need to control the individual services on a host. This section contains information about some common commands you can run on a node to manage containerized services. Listing containers and images To list running containers, run the following command: To include stopped or failed containers in the command output, add the --all option to the command: To list container images, run the following command: Inspecting container properties To view the properties of a container or container image, use the podman inspect command. For example, to inspect the keystone container, run the following command: Managing containers with Systemd services Previous versions of OpenStack Platform managed containers with Docker and its daemon. Now, the Systemd services interface manages the lifecycle of the containers. Each container is a service and you run Systemd commands to perform specific operations for each container. Note It is not recommended to use the Podman CLI to stop, start, and restart containers because Systemd applies a restart policy. Use Systemd service commands instead. To check a container status, run the systemctl status command: To stop a container, run the systemctl stop command: To start a container, run the systemctl start command: To restart a container, run the systemctl restart command: Because no daemon monitors the container status, Systemd automatically restarts most containers in these situations: Clean exit code or signal, such as running the podman stop command. Unclean exit code, such as the podman container crashing after a start. Unclean signals. Timeout if the container takes more than 1m 30s to start. For more information about Systemd services, see the systemd.service documentation . Note Any changes to the service configuration files within the container revert after restarting the container. This is because the container regenerates the service configuration based on files on the local file system of the node in /var/lib/config-data/puppet-generated/ . For example, if you edit /etc/keystone/keystone.conf within the keystone container and restart the container, the container regenerates the configuration using /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf on the local file system of the node, which overwrites any changes that were made within the container before the restart. Monitoring podman containers with Systemd timers The Systemd timers interface manages container health checks. Each container has a timer that runs a service unit that executes health check scripts. To list all OpenStack Platform container timers, run the systemctl list-timers command and limit the output to lines containing tripleo : To check the status of a specific container timer, run the systemctl status command for the healthcheck service: To stop, start, restart, and show the status of a container timer, run the relevant systemctl command against the .timer Systemd resource.
For example, to check the status of the tripleo_keystone_healthcheck.timer resource, run the following command: If the healthcheck service is disabled but the timer for that service is present and enabled, it means that the check is currently timed out, but will run according to the timer. You can also start the check manually. Note The podman ps command does not show the container health status. Checking container logs Red Hat OpenStack Platform 17.0 logs all standard output (stdout) from all containers, and standard errors (stderr), consolidated in a single file for each container in /var/log/containers/stdout . The host also applies log rotation to this directory, which prevents huge files and disk space issues. If a container is replaced, the new container outputs to the same log file, because podman uses the container name instead of the container ID. You can also check the logs for a containerized service with the podman logs command. For example, to view the logs for the keystone container, run the following command: Accessing containers To enter the shell for a containerized service, use the podman exec command to launch /bin/bash . For example, to enter the shell for the keystone container, run the following command: To enter the shell for the keystone container as the root user, run the following command: To exit the container, run the following command: 6.2. Troubleshooting containerized services If a containerized service fails during or after overcloud deployment, use the following recommendations to determine the root cause for the failure: Note Before running these commands, check that you are logged into an overcloud node and not running these commands on the undercloud. Checking the container logs Each container retains standard output from its main process. This output acts as a log to help determine what actually occurs during a container run. For example, to view the log for the keystone container, use the following command: In most cases, this log provides the cause of a container's failure. Inspecting the container In some situations, you might need to verify information about a container. For example, use the following command to view keystone container data: This provides a JSON object containing low-level configuration data. You can pipe the output to the jq command to parse specific data. For example, to view the container mounts for the keystone container, run the following command: You can also use the --format option to parse data to a single line, which is useful for running commands against sets of container data. For example, to recreate the options used to run the keystone container, use the following inspect command with the --format option: Note The --format option uses Go syntax to create queries. Use these options in conjunction with the podman run command to recreate the container for troubleshooting purposes: Running commands in the container In some cases, you might need to obtain information from within a container through a specific Bash command. In this situation, use the following podman command to execute commands within a running container. For example, to run a command in the keystone container: Note The -ti options run the command through an interactive pseudoterminal. Replace <COMMAND> with your desired command. For example, each container has a health check script to verify the service connection.
You can run the health check script for keystone with the following command: To access the container's shell, run podman exec using /bin/bash as the command: Exporting a container When a container fails, you might need to investigate the full contents of its file system. In this case, you can export the full file system of a container as a tar archive. For example, to export the keystone container's file system, run the following command: This command creates the keystone.tar archive, which you can extract and explore.
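For example, one way to unpack and browse the exported archive is:
# Extract the exported file system into a working directory
mkdir keystone-fs
tar -xf keystone.tar -C keystone-fs
# Inspect the service configuration captured inside the container file system
ls keystone-fs/etc/keystone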
[ "sudo podman ps", "sudo podman ps --all", "sudo podman images", "sudo podman inspect keystone", "sudo systemctl status tripleo_keystone ● tripleo_keystone.service - keystone container Loaded: loaded (/etc/systemd/system/tripleo_keystone.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2019-02-15 23:53:18 UTC; 2 days ago Main PID: 29012 (podman) CGroup: /system.slice/tripleo_keystone.service └─29012 /usr/bin/podman start -a keystone", "sudo systemctl stop tripleo_keystone", "sudo systemctl start tripleo_keystone", "sudo systemctl restart tripleo_keystone", "sudo systemctl list-timers | grep tripleo Mon 2019-02-18 20:18:30 UTC 1s left Mon 2019-02-18 20:17:26 UTC 1min 2s ago tripleo_nova_metadata_healthcheck.timer tripleo_nova_metadata_healthcheck.service Mon 2019-02-18 20:18:34 UTC 5s left Mon 2019-02-18 20:17:23 UTC 1min 5s ago tripleo_keystone_healthcheck.timer tripleo_keystone_healthcheck.service Mon 2019-02-18 20:18:35 UTC 6s left Mon 2019-02-18 20:17:13 UTC 1min 15s ago tripleo_memcached_healthcheck.timer tripleo_memcached_healthcheck.service (...)", "sudo systemctl status tripleo_keystone_healthcheck.service ● tripleo_keystone_healthcheck.service - keystone healthcheck Loaded: loaded (/etc/systemd/system/tripleo_keystone_healthcheck.service; disabled; vendor preset: disabled) Active: inactive (dead) since Mon 2019-02-18 20:22:46 UTC; 22s ago Process: 115581 ExecStart=/usr/bin/podman exec keystone /openstack/healthcheck (code=exited, status=0/SUCCESS) Main PID: 115581 (code=exited, status=0/SUCCESS) Feb 18 20:22:46 undercloud.localdomain systemd[1]: Starting keystone healthcheck Feb 18 20:22:46 undercloud.localdomain podman[115581]: {\"versions\": {\"values\": [{\"status\": \"stable\", \"updated\": \"2019-01-22T00:00:00Z\", \"...\"}]}]}} Feb 18 20:22:46 undercloud.localdomain podman[115581]: 300 192.168.24.1:35357 0.012 seconds Feb 18 20:22:46 undercloud.localdomain systemd[1]: Started keystone healthcheck.", "sudo systemctl status tripleo_keystone_healthcheck.timer ● tripleo_keystone_healthcheck.timer - keystone container healthcheck Loaded: loaded (/etc/systemd/system/tripleo_keystone_healthcheck.timer; enabled; vendor preset: disabled) Active: active (waiting) since Fri 2019-02-15 23:53:18 UTC; 2 days ago", "sudo podman logs keystone", "sudo podman exec -it keystone /bin/bash", "sudo podman exec --user 0 -it <NAME OR ID> /bin/bash", "exit", "sudo podman logs keystone", "sudo podman inspect keystone", "sudo podman inspect keystone | jq .[0].Mounts", "sudo podman inspect --format='{{range .Config.Env}} -e \"{{.}}\" {{end}} {{range .Mounts}} -v {{.Source}}:{{.Destination}}{{if .Mode}}:{{.Mode}}{{end}}{{end}} -ti {{.Config.Image}}' keystone", "OPTIONS=USD( sudo podman inspect --format='{{range .Config.Env}} -e \"{{.}}\" {{end}} {{range .Mounts}} -v {{.Source}}:{{.Destination}}{{if .Mode}}:{{.Mode}}{{end}}{{end}} -ti {{.Config.Image}}' keystone ) sudo podman run --rm USDOPTIONS /bin/bash", "sudo podman exec -ti keystone <COMMAND>", "sudo podman exec -ti keystone /openstack/healthcheck", "sudo podman exec -ti keystone /bin/bash", "sudo podman export keystone -o keystone.tar" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/transitioning_to_containerized_services/assembly_working-with-containerized-services
Chapter 1. Build controller observability
Chapter 1. Build controller observability Builds exposes several metrics to help you monitor the performance and functioning of your build resources. The build controller metrics are exposed on the port 8383 . 1.1. Build controller metrics You can check the following build controller metrics for monitoring purposes: Table 1.1. Build controller metrics Name Type Description Labels Status build_builds_registered_total Counter The number of total registered builds. buildstrategy=<build_buildstrategy_name> namespace=<buildrun_namespace> build=<build_name> experimental build_buildruns_completed_total Counter The number of total completed build runs. buildstrategy=<build_buildstrategy_name> namespace=<buildrun_namespace> build=<build_name> buildrun=<buildrun_name> experimental build_buildrun_establish_duration_seconds Histogram The build run establish duration in seconds. buildstrategy=<build_buildstrategy_name> namespace=<buildrun_namespace> build=<build_name> buildrun=<buildrun_name> experimental build_buildrun_completion_duration_seconds Histogram The build run completion duration in seconds. buildstrategy=<build_buildstrategy_name> namespace=<buildrun_namespace> build=<build_name> buildrun=<buildrun_name> experimental build_buildrun_rampup_duration_seconds Histogram The build run ramp-up duration in seconds. buildstrategy=<build_buildstrategy_name> namespace=<buildrun_namespace> build=<build_name> buildrun=<buildrun_name> experimental build_buildrun_taskrun_rampup_duration_seconds Histogram The build run ramp-up duration for a task run in seconds. buildstrategy=<build_buildstrategy_name> namespace=<buildrun_namespace> build=<build_name> buildrun=<buildrun_name> experimental build_buildrun_taskrun_pod_rampup_duration_seconds Histogram The build run ramp-up duration for a task run pod in seconds. buildstrategy=<build_buildstrategy_name> namespace=<buildrun_namespace> build=<build_name> buildrun=<buildrun_name> experimental 1.1.1. Histogram metrics To use custom buckets for the build controller, you must set the environment variable for a particular histogram metric. The following table shows the environment variables for all histogram metrics: Table 1.2. Histogram metrics Metric Environment variable Default build_buildrun_establish_duration_seconds PROMETHEUS_BR_EST_DUR_BUCKETS 0,1,2,3,5,7,10,15,20,30 build_buildrun_completion_duration_seconds PROMETHEUS_BR_COMP_DUR_BUCKETS 50,100,150,200,250,300,350,400,450,500 build_buildrun_rampup_duration_seconds PROMETHEUS_BR_RAMPUP_DUR_BUCKETS 0,1,2,3,4,5,6,7,8,9,10 build_buildrun_taskrun_rampup_duration_seconds PROMETHEUS_BR_RAMPUP_DUR_BUCKETS 0,1,2,3,4,5,6,7,8,9,10 build_buildrun_taskrun_pod_rampup_duration_seconds PROMETHEUS_BR_RAMPUP_DUR_BUCKETS 0,1,2,3,4,5,6,7,8,9,10
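As an illustrative sketch (the host name and deployment name are placeholders, not values defined by this document), you could scrape the endpoint and override one set of buckets as follows:
# Scrape the metrics endpoint exposed on port 8383
curl -s http://<build-controller-host>:8383/metrics | grep '^build_'
# Hypothetical: set custom completion-duration buckets on the controller deployment
oc set env deployment/<build-controller-deployment> PROMETHEUS_BR_COMP_DUR_BUCKETS=60,120,300,600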
null
https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.3/html/observability/build-controller-observability
Getting started
Getting started OpenShift Container Platform 4.14 Getting started in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/getting_started/index
Chapter 4. Installing a cluster with RHEL KVM on IBM Z and IBM LinuxONE
Chapter 4. Installing a cluster with RHEL KVM on IBM Z and IBM LinuxONE In OpenShift Container Platform version 4.15, you can install a cluster on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. You provisioned a RHEL Kernel-based Virtual Machine (KVM) system that is hosted on the logical partition (LPAR) and based on RHEL 8.6 or later. See Red Hat Enterprise Linux 8 and 9 Life Cycle . 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Machine requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. One or more KVM host machines based on RHEL 8.6 or later. Each RHEL KVM host machine must have libvirt installed and running. The virtual machines are provisioned under each RHEL KVM host machine. 4.3.1. Required machines The smallest OpenShift Container Platform clusters require the following hosts: Table 4.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane.
At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To improve high availability of your cluster, distribute the control plane machines over different RHEL instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. See Red Hat Enterprise Linux technology capabilities and limits . 4.3.2. Network connectivity requirements The OpenShift Container Platform installer creates the Ignition files, which are necessary for all the Red Hat Enterprise Linux CoreOS (RHCOS) virtual machines. The automated installation of OpenShift Container Platform is performed by the bootstrap machine. It starts the installation of OpenShift Container Platform on each node, starts the Kubernetes cluster, and then finishes. During this bootstrap, the virtual machine must have an established network connection either through a Dynamic Host Configuration Protocol (DHCP) server or static IP address. 4.3.3. IBM Z network connectivity requirements To install on IBM Z(R) under RHEL KVM, you need: A RHEL KVM host configured with an OSA or RoCE network adapter. Either a RHEL KVM host that is configured to use bridged networking in libvirt or MacVTap to connect the network to the guests. See Types of virtual network connections . 4.3.4. Host machine resource requirements The RHEL KVM host in your environment must meet the following requirements to host the virtual machines that you plan for the OpenShift Container Platform environment. See Getting started with virtualization . You can install OpenShift Container Platform version 4.15 on the following IBM(R) hardware: IBM(R) z16 (all models), IBM(R) z15 (all models), IBM(R) z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II 4.3.5. Minimum IBM Z system environment Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements One LPAR running on RHEL 8.6 or later with KVM, which is managed by libvirt On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine 4.3.6. 
Minimum resource requirements Each cluster virtual machine must meet the following minimum requirements: Virtual Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT2 is enabled. The hypervisor can provide two or more vCPUs. 4.3.7. Preferred IBM Z system environment Hardware requirements Three LPARs that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Operating system requirements For high availability, two or three LPARs running on RHEL 8.6 or later with KVM, which are managed by libvirt. On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, distributed across the RHEL KVM host machines. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the RHEL KVM host machines. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using cpu_shares . Do the same for infrastructure nodes, if they exist. See schedinfo in IBM(R) Documentation. 4.3.8. Preferred resource requirements The preferred requirements for each cluster virtual machine are: Virtual Machine Operating System vCPU Virtual RAM Storage Bootstrap RHCOS 4 16 GB 120 GB Control plane RHCOS 8 16 GB 120 GB Compute RHCOS 6 8 GB 120 GB 4.3.9. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources Recommended host practices for IBM Z(R) & IBM(R) LinuxONE environments 4.3.10. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines.
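As an illustrative sketch only (the dnsmasq usage, file path, and MAC address are assumptions; your environment may use a different DHCP server), a persistent reservation for one control plane guest could look like this:
# Hypothetical dnsmasq reservation: fixed IP and hostname for one guest
cat >> /etc/dnsmasq.d/ocp4.conf <<'EOF'
dhcp-host=52:54:00:00:00:97,control-plane0,192.168.1.97
EOF
systemctl restart dnsmasq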
Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 4.3.10.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 4.3.10.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Note The RHEL KVM host must be configured to use bridged networking in libvirt or MacVTap to connect the network to the virtual machines. The virtual machines must have access to the network, which is attached to the RHEL KVM host. Virtual Networks, for example network address translation (NAT), within KVM are not a supported configuration. Table 4.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 4.3. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 4.4. 
Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 4.3.11. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 4.5. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine.
These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 4.3.11.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 4.1. Sample DNS zone database

USDTTL 1W
@	IN	SOA	ns1.example.com.	root (
			2019070700	; serial
			3H		; refresh (3 hours)
			30M		; retry (30 minutes)
			2W		; expiry (2 weeks)
			1W )		; minimum (1 week)
	IN	NS	ns1.example.com.
	IN	MX	10	smtp.example.com.
;
;
ns1.example.com.		IN	A	192.168.1.5
smtp.example.com.		IN	A	192.168.1.5
;
helper.example.com.		IN	A	192.168.1.5
helper.ocp4.example.com.	IN	A	192.168.1.5
;
api.ocp4.example.com.		IN	A	192.168.1.5 1
api-int.ocp4.example.com.	IN	A	192.168.1.5 2
;
*.apps.ocp4.example.com.	IN	A	192.168.1.5 3
;
bootstrap.ocp4.example.com.	IN	A	192.168.1.96 4
;
control-plane0.ocp4.example.com.	IN	A	192.168.1.97 5
control-plane1.ocp4.example.com.	IN	A	192.168.1.98 6
control-plane2.ocp4.example.com.	IN	A	192.168.1.99 7
;
compute0.ocp4.example.com.	IN	A	192.168.1.11 8
compute1.ocp4.example.com.	IN	A	192.168.1.7 9
;
;EOF

1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines.
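If you maintain these records in a BIND zone file, you can syntax-check the file before loading it. A small sketch, assuming a hypothetical file path; the named-checkzone utility ships with BIND:

USD named-checkzone ocp4.example.com /var/named/ocp4.example.com.zone

The command reports the loaded serial number and prints OK when the file parses cleanly. The same check applies to the reverse zone file shown next.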
Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 4.2. Sample DNS zone database for reverse records

USDTTL 1W
@	IN	SOA	ns1.example.com.	root (
			2019070700	; serial
			3H		; refresh (3 hours)
			30M		; retry (30 minutes)
			2W		; expiry (2 weeks)
			1W )		; minimum (1 week)
	IN	NS	ns1.example.com.
;
5.1.168.192.in-addr.arpa.	IN	PTR	api.ocp4.example.com. 1
5.1.168.192.in-addr.arpa.	IN	PTR	api-int.ocp4.example.com. 2
;
96.1.168.192.in-addr.arpa.	IN	PTR	bootstrap.ocp4.example.com. 3
;
97.1.168.192.in-addr.arpa.	IN	PTR	control-plane0.ocp4.example.com. 4
98.1.168.192.in-addr.arpa.	IN	PTR	control-plane1.ocp4.example.com. 5
99.1.168.192.in-addr.arpa.	IN	PTR	control-plane2.ocp4.example.com. 6
;
11.1.168.192.in-addr.arpa.	IN	PTR	compute0.ocp4.example.com. 7
7.1.168.192.in-addr.arpa.	IN	PTR	compute1.ocp4.example.com. 8
;
;EOF

1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 4.3.12. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 4.6. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added.
Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, is a well-tested configuration. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. Connection-based or session-based persistence is recommended, based on the options available and the types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 4.7. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 4.3.12.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 4.3.
Sample API and application Ingress load balancer configuration

global
  log         127.0.0.1 local2
  pidfile     /var/run/haproxy.pid
  maxconn     4000
  daemon
defaults
  mode                    http
  log                     global
  option                  dontlognull
  option                  http-server-close
  option                  redispatch
  retries                 3
  timeout http-request    10s
  timeout queue           1m
  timeout connect         10s
  timeout client          1m
  timeout server          1m
  timeout http-keep-alive 10s
  timeout check           10s
  maxconn                 3000
listen api-server-6443 1
  bind *:6443
  mode tcp
  option  httpchk GET /readyz HTTP/1.0
  option  log-health-checks
  balance roundrobin
  server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2
  server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
listen machine-config-server-22623 3
  bind *:22623
  mode tcp
  server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4
  server master0 master0.ocp4.example.com:22623 check inter 1s
  server master1 master1.ocp4.example.com:22623 check inter 1s
  server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 5
  bind *:443
  mode tcp
  balance source
  server compute0 compute0.ocp4.example.com:443 check inter 1s
  server compute1 compute1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 6
  bind *:80
  mode tcp
  balance source
  server compute0 compute0.ocp4.example.com:80 check inter 1s
  server compute1 compute1.ocp4.example.com:80 check inter 1s

1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node.
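After the bootstrap or control plane machines are running, you can also confirm that the load balancer forwards API traffic by querying the health endpoint through it. A sketch, assuming the example cluster name and domain used in this section:

USD curl --insecure https://api.ocp4.example.com:6443/readyz

The API server returns ok when it is ready to serve requests; until the control plane is up, expect connection errors rather than a response.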
4.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Choose to perform either a fast-track installation or a full installation of Red Hat Enterprise Linux CoreOS (RHCOS). For the full installation, you must set up an HTTP or HTTPS server to provide Ignition files and install images to the cluster nodes. For the fast-track installation, an HTTP or HTTPS server is not required; however, a DHCP server is required. See sections "Fast-track installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines" and "Full installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines". Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration.
From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 4.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. 
Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 4.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
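For example, the following sketch creates an ECDSA key instead; the key path is an assumption, so substitute your own:

USD ssh-keygen -t ecdsa -b 521 -N '' -f ~/.ssh/id_ecdsa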
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux (RHEL) 8, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page.
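As a quick check that the extracted binary runs on your provisioning machine, you can print its version; the exact output format varies by release, so the following is only illustrative:

USD ./openshift-install version

Example output
./openshift-install 4.15.0
built from commit <commit_id>
release image quay.io/openshift-release-dev/ocp-release@sha256:<digest>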
4.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> .
Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 4.9.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

apiVersion: v1
baseDomain: example.com 1
compute: 2
- hyperthreading: Enabled 3
  name: worker
  replicas: 0 4
  architecture: s390x
controlPlane: 5
  hyperthreading: Enabled 6
  name: master
  replicas: 3 7
  architecture: s390x
metadata:
  name: test 8
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14 9
    hostPrefix: 23 10
  networkType: OVNKubernetes 11
  serviceNetwork: 12
  - 172.30.0.0/16
platform:
  none: {} 13
fips: false 14
pullSecret: '{"auths": ...}' 15
sshKey: 'ssh-ed25519 AAAA...' 16

1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for future use.
To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 4.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three-node cluster that consists of three control plane machines only. This provides smaller, more resource-efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them.
Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza:

compute:
- name: worker
  platform: {}
  replicas: 0

Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 4.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 4.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 4.8. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example:

spec:
  clusterNetwork:
  - cidr: 10.128.0.0/19
    hostPrefix: 23
  - cidr: 10.128.32.0/19
    hostPrefix: 23

spec.serviceNetwork array A block of IP addresses for services.
The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example:

spec:
  serviceNetwork:
  - 172.30.0.0/14

You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 4.9. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 4.10. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 4.11.
ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 4.12. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 4.13. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 4.14. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. 
By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 4.15. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 4.16. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 4.17. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig:
      mode: Full

Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 4.18. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is:

kubeProxyConfig:
  proxyArguments:
    iptables-min-sync-period:
    - 0s
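If you need to override any of the defaultNetwork fields described above at install time, one documented pattern is to generate the Kubernetes manifests, as described in the next section, and then add a manifest that carries only the fields you want to change. The following is a minimal sketch; the file is conventionally named <installation_directory>/manifests/cluster-network-03-config.yml, and the mtu value here is only an example:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 1400

The installation program consumes this manifest when it creates the Ignition config files, so the file must be in place before you run the create ignition-configs command.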
4.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. The installation program is also available as a macOS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory.
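At this point the installation directory typically looks like the following; treat this listing as illustrative, because exact file names can vary between releases:

.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign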
Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) as Red Hat Enterprise Linux (RHEL) guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. You can perform a fast-track installation of RHCOS that uses a prepackaged QEMU copy-on-write (QCOW2) disk image. Alternatively, you can perform a full installation on a new QCOW2 disk image. To add further security to your system, you can optionally install RHCOS using IBM(R) Secure Execution before proceeding to the fast-track installation. 4.12.1. Installing RHCOS using IBM Secure Execution Before you install RHCOS using IBM(R) Secure Execution, you must prepare the underlying infrastructure. Prerequisites IBM(R) z15 or later, or IBM(R) LinuxONE III or later. Red Hat Enterprise Linux (RHEL) 8 or later. You have a bootstrap Ignition file. The file is not protected, enabling others to view and edit it. You have verified that the boot image has not been altered after installation. You must run all your nodes as IBM(R) Secure Execution guests. Procedure Prepare your RHEL KVM host to support IBM(R) Secure Execution. By default, KVM hosts do not support guests in IBM(R) Secure Execution mode. To support guests in IBM(R) Secure Execution mode, KVM hosts must boot in LPAR mode with the kernel parameter specification prot_virt=1 . To enable prot_virt=1 on RHEL 8, follow these steps: Navigate to /boot/loader/entries/ to modify your bootloader configuration file *.conf . Add the kernel command line parameter prot_virt=1 . Run the zipl command and reboot your system. KVM hosts that successfully start with support for IBM(R) Secure Execution for Linux issue the following kernel message: prot_virt: Reserving <amount>MB as ultravisor base storage. To verify that the KVM host now supports IBM(R) Secure Execution, run the following command: # cat /sys/firmware/uv/prot_virt_host Example output 1 The value of this attribute is 1 for Linux instances that detect their environment as consistent with that of a secure host. For other instances, the value is 0. Add your host keys to the KVM guest via Ignition. During the first boot, RHCOS looks for your host keys to re-encrypt itself with them. RHCOS searches for files starting with ibm-z-hostkey- in the /etc/se-hostkeys directory. All host keys, for each machine the cluster is running on, must be loaded into the directory by the administrator. After first boot, you cannot run the VM on any other machines. Note You need to prepare your Ignition file on a safe system. For example, another IBM(R) Secure Execution guest. 
For example:

{
  "ignition": { "version": "3.0.0" },
  "storage": {
    "files": [
      {
        "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt",
        "contents": {
          "source": "data:;base64,<base64 encoded hostkey document>"
        },
        "mode": 420
      },
      {
        "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-second-hostkey>.crt",
        "contents": {
          "source": "data:;base64,<base64 encoded hostkey document>"
        },
        "mode": 420
      }
    ]
  }
}

Note You can add as many host keys as required if you want your node to be able to run on multiple IBM Z(R) machines. To generate the Base64 encoded string, run the following command: base64 <your-hostkey>.crt Compared to guests not running IBM(R) Secure Execution, the first boot of the machine is longer because the entire image is encrypted with a randomly generated LUKS passphrase before the Ignition phase.

Add Ignition protection To protect the secrets that are stored in the Ignition config file from being read or even modified, you must encrypt the Ignition config file. Note To achieve the desired security, Ignition logging and local login are disabled by default when running IBM(R) Secure Execution. Fetch the public GPG key for the secex-qemu.qcow2 image and encrypt the Ignition config with the key by running the following command: gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign Follow the fast-track installation of RHCOS to install nodes by using the IBM(R) Secure Execution QCOW image. Note Before you start the VM, replace serial=ignition with serial=ignition_crypted, and add the launchSecurity parameter.

Verification When you have completed the fast-track installation of RHCOS and Ignition runs at the first boot, verify if decryption is successful. If the decryption is successful, you can expect an output similar to the following example:

Example output [ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup... [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.

If the decryption fails, you can expect an output similar to the following example:

Example output Starting coreos-ignition-s...reOS Ignition User Config Setup... [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key

Additional resources Introducing IBM(R) Secure Execution for Linux Linux as an IBM(R) Secure Execution host or guest Setting up IBM(R) Secure Execution on IBM Z
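As a supplement to the host-key example earlier in this section: assembling the storage.files entries by hand is error prone, and a short shell sketch can print one entry per certificate. The helper below is illustrative only and not part of the documented procedure.

#!/usr/bin/env bash
# Illustrative helper: print one Ignition storage.files entry for a host key.
# The certificate path is passed as the first argument; the output must still
# be merged into the "files" array of your Ignition config on a safe system.
crt="$1"
cat <<EOF
{
  "path": "/etc/se-hostkeys/ibm-z-hostkey-$(basename "${crt}")",
  "contents": { "source": "data:;base64,$(base64 -w0 "${crt}")" },
  "mode": 420
}
EOF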
4.12.2. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment

Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section.

Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane.

Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption:

variant: openshift
version: 4.15.0
metadata:
  name: master-storage
  labels:
    machineconfiguration.openshift.io/role: master
storage:
  luks:
    - clevis:
        tang:
          - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs
            url: http://clevis.example.com:7500
      options: 1
        - --cipher
        - aes-cbc-essiv:sha256
      device: /dev/disk/by-partlabel/root
      label: luks-root
      name: root
      wipe_volume: true
  filesystems:
    - device: /dev/mapper/root
      format: xfs
      label: root
      wipe_filesystem: true
openshift:
  fips: true 2

1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Create a customized initramfs file to boot the machine by running the following command:

USD coreos-installer pxe customize \
/root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \
--dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \
ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \
--dest-karg-append nameserver=<nameserver_ip> \
--dest-karg-append rd.neednet=1 -o \
/root/rhcos-bootfiles/<node_name>-initramfs.s390x.img

Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters.

Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot .

rd.neednet=1 \
console=ttysclp0 \
ignition.firstboot ignition.platform.id=metal \
coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 1
coreos.inst.ignition_url=http://<http_server>/master.ign \ 2
ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 \
zfcp.allow_lun_scan=0 \
rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \
rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000

1 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 2 Specify the location of the Ignition config file. Use master.ign or worker.ign. Only HTTP and HTTPS protocols are supported.

Note Write all options in the parameter file as a single line and make sure you have no newline characters.

Additional resources Creating machine configs with Butane
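A practical note on the Butane file above: it is not consumed by the cluster directly. Render it to a MachineConfig manifest and place the result alongside the other manifests before you create the Ignition configs. The output file name below is an assumption for illustration; check butane --help for the options available in your version.

butane master-storage.bu -o <installation_directory>/openshift/99-master-storage.yaml

Run the equivalent command for each compute node Butane file that you created.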
4.12.3. Fast-track installation by using a prepackaged QCOW2 disk image

Complete the following steps to create the machines in a fast-track installation of Red Hat Enterprise Linux CoreOS (RHCOS), importing a prepackaged RHCOS QEMU copy-on-write (QCOW2) disk image.

Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. A DHCP server that provides IP addresses.

Procedure Obtain the RHCOS QEMU copy-on-write (QCOW2) disk image file from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. Download the QCOW2 disk image and Ignition files to a common directory on the RHEL KVM host. For example: /var/lib/libvirt/images Note The Ignition files are generated by the OpenShift Container Platform installer. Create a new disk image with the QCOW2 disk image backing file for each KVM guest node.

USD qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}

Create the new KVM guest nodes using the Ignition file and the new disk image.

USD virt-install --noautoconsole \
--connect qemu:///system \
--name {vm_name} \
--memory {memory} \
--vcpus {vcpus} \
--disk {disk} \
--launchSecurity type="s390-pv" \ 1
--import \
--network network={network},mac={mac} \
--disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 2

1 If IBM(R) Secure Execution is enabled, add the launchSecurity type="s390-pv" parameter. 2 If IBM(R) Secure Execution is enabled, replace serial=ignition with serial=ignition_crypted .

4.12.4. Full installation on a new QCOW2 disk image

Complete the following steps to create the machines in a full installation on a new QEMU copy-on-write (QCOW2) disk image.

Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. An HTTP or HTTPS server is set up.

Procedure Obtain the RHCOS kernel, initramfs, and rootfs files from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Move the downloaded RHCOS live kernel, initramfs, and rootfs as well as the Ignition files to an HTTP or HTTPS server before you launch virt-install . Note The Ignition files are generated by the OpenShift Container Platform installer. Create the new KVM guest nodes using the RHCOS kernel, initramfs, and Ignition files, the new disk image, and adjusted parm line arguments. For --location , specify the location of the kernel/initrd on the HTTP or HTTPS server. For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported.
USD virt-install \ --connect qemu:///system \ --name {vm_name} \ --vcpus {vcpus} \ --memory {memory_mb} \ --disk {vm_name}.qcow2,size={image_size| default(10,true)} \ --network network={virt_network_parm} \ --boot hd \ --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} \ --extra-args "rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vm_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}" \ --noautoconsole \ --wait 4.12.5. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 4.12.5.1. Networking options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking on your RHCOS nodes for ISO installations. The examples describe how to use the ip= and nameserver= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= and nameserver= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. 
ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 4.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... 
INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 4.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Configure the Operators that are not available. 4.16.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 4.16.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. 
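Leaving the claim field blank, as shown in the next step, lets the Operator create the image-registry-storage PVC automatically. If you want to provision the claim yourself instead, the following sketch shows the shape it can take, using the access mode and capacity requirements listed above; the storage class value is an assumption and must match your provisioner.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-registry-storage
  namespace: openshift-image-registry
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: <your_storage_class>   # assumption: set to your RWX-capable storage class
  resources:
    requests:
      storage: 100Gi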
Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed

4.16.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again.

4.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Alternatively, the following command notifies you when all of the cluster Operators are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server.
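While you wait, you can also watch the Cluster Version Operator directly; the timing and status values in the output below are illustrative.

USD oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.15.0    True        False         5m      Cluster version is 4.15.0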
Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information.

4.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service How to generate SOSREPORT within OpenShift4 nodes without SSH .

4.19. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "prot_virt: Reserving <amount>MB as ultravisor base storage.", "cat /sys/firmware/uv/prot_virt_host", "1", "{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```", "base64 <your-hostkey>.crt", "gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign", "[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.", "Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key", "variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2", "coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img", "rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 1 coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000", "qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}", "virt-install --noautoconsole --connect qemu:///system --name {vm_name} --memory {memory} --vcpus {vcpus} --disk {disk} --launchSecurity type=\"s390-pv\" \\ 1 --import --network network={network},mac={mac} --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 2", "virt-install --connect qemu:///system --name {vm_name} --vcpus {vcpus} --memory {memory_mb} --disk {vm_name}.qcow2,size={image_size| 
default(10,true)} --network network={virt_network_parm} --boot hd --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} --extra-args \"rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vm_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}\" --noautoconsole --wait", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 
4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_z_and_ibm_linuxone/installing-ibm-z-kvm
Getting started
Getting started OpenShift Container Platform 4.18 Getting started in OpenShift Container Platform Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/getting_started/index
Chapter 6. Migration from previous versions of .NET
Chapter 6. Migration from previous versions of .NET

6.1. Migration from previous versions of .NET

Microsoft provides instructions for migrating from most versions of .NET Core. If you are using a version of .NET that is no longer supported or want to migrate to a newer .NET version to expand functionality, see the following articles: Migrate from ASP.NET Core 8.0 to 9.0 Migrate from ASP.NET Core 7.0 to 8.0 Migrate from ASP.NET Core 6.0 to 7.0 Migrate from ASP.NET Core 5.0 to 6.0 Migrate from ASP.NET Core 3.1 to 5.0 Migrate from ASP.NET Core 3.0 to 3.1 Migrate from ASP.NET Core 2.2 to 3.0 Migrate from ASP.NET Core 2.1 to 2.2 Migrate from .NET Core 2.0 to 2.1 Migrate from ASP.NET to ASP.NET Core Migrating .NET Core projects from project.json Migrate from project.json to .csproj format

Note If migrating from .NET Core 1.x to 2.0, see the first few related sections in Migrate from ASP.NET Core 1.x to 2.0 . These sections provide guidance that is appropriate for a .NET Core 1.x to 2.0 migration path.

6.2. Porting from .NET Framework

Refer to the following Microsoft articles when migrating from .NET Framework: For general guidelines, see Porting to .NET Core from .NET Framework . For porting libraries, see Porting to .NET Core - Libraries . For migrating to ASP.NET Core, see Migrating to ASP.NET Core . Several technologies and APIs present in the .NET Framework are not available in .NET Core and .NET. If your application or library requires these APIs, consider finding alternatives or continue using the .NET Framework. .NET Core and .NET do not support the following technologies and APIs: Desktop applications, for example, Windows Forms and Windows Presentation Foundation (WPF) Windows Communication Foundation (WCF) servers (WCF clients are supported) .NET remoting Additionally, several .NET APIs can only be used in Microsoft Windows environments. The following list shows examples of these Windows-specific APIs: Microsoft.Win32.Registry System.AppDomains System.Security.Principal.Windows Important Several APIs that are not supported in the default version of .NET may be available from the Microsoft.Windows.Compatibility NuGet package. Be careful when using this NuGet package. Some of the APIs provided (such as Microsoft.Win32.Registry ) only work on Windows, making your application incompatible with Red Hat Enterprise Linux.
https://docs.redhat.com/en/documentation/net/9.0/html/getting_started_with_.net_on_rhel_8/assembly_dotnet-migration_assembly_publishing-apps-using-dotnet
Chapter 16. Change resources on virtual machines using director Operator
Chapter 16. Change resources on virtual machines using director Operator

To change the CPU, RAM, and disk resources of an OpenStackVMSet custom resource (CR), use the OpenStackControlPlane CRD.

16.1. Change the CPU or RAM of an OpenStackVMSet CR

You can use the OpenStackControlPlane CRD to change the CPU or RAM of an OpenStackVMSet custom resource (CR).

Procedure

Change the number of Controller virtualMachineRole cores to 8:

oc patch -n openstack osctlplane overcloud --type='json' -p='[{"op": "add", "path": "/spec/virtualMachineRoles/controller/cores", "value": 8 }]'

Change the Controller virtualMachineRole RAM size to 22GB:

oc patch -n openstack osctlplane overcloud --type='json' -p='[{"op": "add", "path": "/spec/virtualMachineRoles/controller/memory", "value": 22 }]'

Validate the virtualMachineRole resource:

oc get osvmset
NAME         CORES   RAM   DESIRED   READY   STATUS        REASON
controller   8       22    1         1       Provisioned   All requested VirtualMachines have been provisioned

From inside the virtual machine, do a graceful shutdown. Shut down each updated virtual machine one by one.

Power on the virtual machine:

virtctl start <VM>

Replace <VM> with the name of your virtual machine.

16.2. Add additional disks to an OpenStackVMSet CR

You can use the OpenStackControlPlane CRD to add additional disks to a virtual machine by editing the additionalDisks property.

Procedure

Add or update the additionalDisks parameter in the OpenStackControlPlane object, for example in a file named controller_add_data_disk1.yaml:

spec:
  virtualMachineRoles:
    Controller:
      additionalDisks:
        - baseImageVolumeName: openstack-base-img
          dedicatedIOThread: false
          diskSize: 10
          name: "data-disk1"
          storageAccessMode: ReadWriteMany
          storageClass: host-nfs-storageclass
          storageVolumeMode: Filesystem

Apply the patch:

oc patch -n openstack osctlplane overcloud --patch-file controller_add_data_disk1.yaml

Validate the virtualMachineRole resource:

oc get osvmset controller -o json | jq .spec.additionalDisks
[
  {
    "baseImageVolumeName": "openstack-base-img",
    "dedicatedIOThread": false,
    "diskSize": 10,
    "name": "data-disk1",
    "storageAccessMode": "ReadWriteMany",
    "storageClass": "host-nfs-storageclass",
    "storageVolumeMode": "Filesystem"
  }
]

From inside the virtual machine, do a graceful shutdown. Shut down each updated virtual machine one by one.

Power on the virtual machine:

virtctl start <VM>

Replace <VM> with the name of your virtual machine.
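Between the shutdown and restart of each virtual machine, you can watch its state through the VirtualMachine resources that the director Operator manages; the resource name and output below are illustrative, not taken from this procedure.

oc get vm -n openstack
NAME           AGE   STATUS    READY
controller-0   25d   Running   True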
[ "oc patch -n openstack osctlplane overcloud --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/virtualMachineRoles/controller/cores\", \"value\": 8 }]'", "oc patch -n openstack osctlplane overcloud --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/virtualMachineRoles/controller/memory\", \"value\": 22 }]'", "oc get osvmset NAME CORES RAM DESIRED READY STATUS REASON controller 8 22 1 1 Provisioned All requested VirtualMachines have been provisioned", "`virtctl start <VM>` to power on the virtual machine.", "spec: virtualMachineRoles: Controller: additionalDisks: - baseImageVolumeName: openstack-base-img dedicatedIOThread: false diskSize: 10 name: \"data-disk1\" storageAccessMode: ReadWriteMany storageClass: host-nfs-storageclass storageVolumeMode: Filesystem", "oc patch -n openstack osctlplane overcloud --patch-file controller_add_data_disk1.yaml", "oc get osvmset controller -o json | jq .spec.additionalDisks [ { \"baseImageVolumeName\": \"openstack-base-img\", \"dedicatedIOThread\": false, \"diskSize\": 10, \"name\": \"data-disk1\", \"storageAccessMode\": \"ReadWriteMany\", \"storageClass\": \"host-nfs-storageclass\", \"storageVolumeMode\": \"Filesystem\" } ]", "`virtctl start <VM>` to power on the virtual machine." ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/assembly_change-resources-on-virtual-machines-using-director-operator
25.4. Methods
25.4. Methods 25.4.1. Searching Events The events collection provides search queries similar to other resource collections. An additional feature when searching the events collection is the ability to search from a certain event. This queries all events since a specified event. Querying from an event requires an additional from parameter added before the search query. This from argument references an event id code. Example 25.3. Searching from an event This displays all events with type set to 30 since id="1012" . Example 25.4. Searching using a specific event severity This displays all events with severity higher than normal . Severity levels include normal , warning , error and alert . 25.4.2. Paginating Events A virtualization environment generates a large number of events after a period of time. However, the API only displays a default number of events for one search query. To display more than the default, the API separates results into pages with the page command in a search query. The following search query tells the API to paginate results using a page value in combination with the sortby clause: sortby time asc page 1 The sortby clause defines the base element for ordering the results and whether the results are ascending or descending. For search queries of events , set the base element to time and the order to ascending ( asc ) so the API displays all events from the creation of your virtualization environment. The page condition defines the page number. One page equals the default number of events to list. Pagination begins at page 1 . To view more pages, increase the page value: sortby time asc page 2 sortby time asc page 3 sortby time asc page 4 Example 25.5. Paginating events This example paginates event resources. The URL-encoded request is: Increase the page value to view the next page of results. Use an additional from argument to set the starting id . 25.4.3. Adding Events The API can add custom events with a POST request to the events collection. A new event requires the description , severity , origin , and custom_id elements. Custom events can also include flood_rate , user id , and the id codes of any resources relevant to the event. The host and storage_domain elements can contain the external_status element to set an external health status. Example 25.6. Adding a custom event to the event list 25.4.4. Removing Events Removal of an event from the event list requires a DELETE request. Example 25.7. Removing an event
[ "GET /ovirt-engine/api/events;from=1012?search=type%3D30 HTTP/1.1 Accept: application/xml", "HTTP/1.1 200 OK Content-Type: application/xml <events> <event id=\"1018\" href=\"/ovirt-engine/api/events/1018\"> <description>User admin logged in.</description> <code>30</code> <severity>normal</severity> <time>2011-07-11T14:03:22.485+10:00</time> <user id=\"80b71bae-98a1-11e0-8f20-525400866c73\" href=\"/ovirt-engine/api/users/80b71bae-98a1-11e0-8f20-525400866c73\"/> </event> <event id=\"1016\" href=\"/ovirt-engine/api/events/1016\"> <description>User admin logged in.</description> <code>30</code> <severity>normal</severity> <time>2011-07-11T14:03:07.236+10:00</time> <user id=\"80b71bae-98a1-11e0-8f20-525400866c73\" href=\"/ovirt-engine/api/users/80b71bae-98a1-11e0-8f20-525400866c73\"/> </event> <event id=\"1014\" href=\"/ovirt-engine/api/events/1014\"> <description>User admin logged in.</description> <code>30</code> <severity>normal</severity> <time>2011-07-11T14:02:16.009+10:00</time> <user id=\"80b71bae-98a1-11e0-8f20-525400866c73\" href=\"/ovirt-engine/api/users/80b71bae-98a1-11e0-8f20-525400866c73\"/> </event> </events>", "GET /ovirt-engine/api/events?search=severity>normal HTTP/1.1 Accept: application/xml", "HTTP/1.1 200 OK Content-Type: application/xml <events> <event id=\"2823\" href=\"/ovirt-engine/api/events/2823\"> <description>Host Host-05 has time-drift of 36002 seconds while maximum configured value is 300 seconds.</description> <code>604</code> <severity>warning</severity> <time>2015-07-11T14:03:22.485+10:00</time> <host href= \"/ovirt-engine/api/hosts/44e52bb2-27d6-4d35-8038-0c4b4db89789\" id=\"44e52bb2-27d6-4d35-8038-0c4b4db89789\"/> <cluster href= \"/ovirt-engine/api/clusters/00000001-0001-0001-0001-00000000021b\" id=\"00000001-0001-0001-0001-00000000021b\"/> <origin>oVirt</origin> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> </event> </events>", "GET /ovirt-engine/api/events?search=sortby%20time%20asc%20page%201 HTTP/1.1 Accept: application/xml", "GET /ovirt-engine/api/events?search=sortby%20time%20asc%20page%202 HTTP/1.1 Accept: application/xml", "GET /ovirt-engine/api/events?search=sortby%20time%20asc%20page%202&from=30 HTTP/1.1 Accept: application/xml", "POST /ovirt-engine/api/events HTTP/1.1 Accept: application/xml Content-type: application/xml <event> <description>The heat of the host is above 30 Oc</description> <severity>warning</severity> <origin>HP Openview</origin> <custom_id>1</custom_id> <flood_rate>30</flood_rate> <host id=\"f59a29cd-587d-48a3-b72a-db537eb21957\" > <external_status> <state>warning</state> </external_status> </host> </event>", "DELETE /ovirt-engine/api/events/1705 HTTP/1.1 HTTP/1.1 204 No Content" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/methods8
Chapter 5. Clair for Red Hat Quay
Chapter 5. Clair for Red Hat Quay Clair v4 (Clair) is an open source application that leverages static code analysis for parsing image content and reporting vulnerabilities affecting the content. Clair is packaged with Red Hat Quay and can be used in both standalone and Operator deployments. It can be run in highly scalable configurations, where components can be scaled separately as appropriate for enterprise environments. 5.1. Clair vulnerability databases Clair uses the following vulnerability databases to report issues in your images: Ubuntu Oval database Debian Security Tracker Red Hat Enterprise Linux (RHEL) Oval database SUSE Oval database Oracle Oval database Alpine SecDB database VMware Photon OS database Amazon Web Services (AWS) UpdateInfo Open Source Vulnerability (OSV) Database For information about how Clair does security mapping with the different databases, see Claircore Severity Mapping . 5.1.1. Information about Open Source Vulnerability (OSV) database for Clair Open Source Vulnerability (OSV) is a vulnerability database and monitoring service that focuses on tracking and managing security vulnerabilities in open source software. OSV provides a comprehensive and up-to-date database of known security vulnerabilities in open source projects. It covers a wide range of open source software, including libraries, frameworks, and other components that are used in software development. For a full list of included ecosystems, see defined ecosystems . Clair also reports vulnerability and security information for golang , java , and ruby ecosystems through the Open Source Vulnerability (OSV) database. By leveraging OSV, developers and organizations can proactively monitor and address security vulnerabilities in open source components that they use, which helps to reduce the risk of security breaches and data compromises in projects. For more information about OSV, see the OSV website . 5.2. Clair on OpenShift Container Platform To set up Clair v4 (Clair) on a Red Hat Quay deployment on OpenShift Container Platform, it is recommended to use the Red Hat Quay Operator. By default, the Red Hat Quay Operator will install or upgrade a Clair deployment along with your Red Hat Quay deployment and configure Clair automatically. 5.3. Testing Clair Use the following procedure to test Clair on either a standalone Red Hat Quay deployment, or on an OpenShift Container Platform Operator-based deployment. Prerequisites You have deployed the Clair container image. Procedure Pull a sample image by entering the following command: USD podman pull ubuntu:20.04 Tag the image to your registry by entering the following command: USD sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04 Push the image to your Red Hat Quay registry by entering the following command: USD sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04 Log in to your Red Hat Quay deployment through the UI. Click the repository name, for example, quayadmin/ubuntu . In the navigation pane, click Tags . Report summary Click the image report, for example, 45 medium , to show a more detailed report: Report details Note In some cases, Clair shows duplicate reports on images, for example, ubi8/nodejs-12 or ubi8/nodejs-16 . This occurs because vulnerabilities with the same name are for different packages. This behavior is expected with Clair vulnerability reporting and will not be addressed as a bug.
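In addition to the UI report above, the same vulnerability data can be queried through the Quay API. A hedged sketch (the host name, repository path, manifest digest, and bearer token are illustrative assumptions; the token needs read access to the repository):

USD curl -s -H 'Authorization: Bearer <token>' 'https://quay-server.example.com/api/v1/repository/quayadmin/ubuntu/manifest/sha256:<digest>/security?vulnerabilities=true'

The response is JSON, so piping it through jq makes the per-package findings easier to read.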
5.4. Advanced Clair configuration Use the procedures in the following sections to configure advanced Clair settings. 5.4.1. Unmanaged Clair configuration Red Hat Quay users can run an unmanaged Clair configuration with the Red Hat Quay OpenShift Container Platform Operator. This feature allows users to create an unmanaged Clair database, or run their custom Clair configuration without an unmanaged database. An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Operator must communicate with the same database. An unmanaged Clair database can also be used when a user requires a highly-available (HA) Clair database that exists outside of a cluster. 5.4.1.1. Running a custom Clair configuration with an unmanaged Clair database Use the following procedure to set your Clair database to unmanaged. Procedure In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: false : apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: false 5.4.1.2. Configuring a custom Clair database with an unmanaged Clair database The Red Hat Quay Operator for OpenShift Container Platform allows users to provide their own Clair database. Use the following procedure to create a custom Clair database. Note The following procedure sets up Clair with SSL/TLS certificates. To view a similar procedure that does not set up Clair with SSL/TLS certificates, see "Configuring a custom Clair database with a managed Clair configuration". Procedure Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command: USD oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret Example Clair config.yaml file indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true Note The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml . It must be specified when configuring your clair-config.yaml . An example clair-config.yaml can be found at Clair on OpenShift config . Add the clair-config.yaml file to your bundle secret, for example: apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> extra_ca_cert_<name>: <base64 encoded ca cert> ssl.crt: <base64 encoded SSL certificate> ssl.key: <base64 encoded SSL private key> Note When updated, the provided clair-config.yaml file is mounted into the Clair pod.
Any fields not provided are automatically populated with defaults using the Clair configuration module. You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace> . For example: Example output 5.4.2. Running a custom Clair configuration with a managed Clair database In some cases, users might want to run a custom Clair configuration with a managed Clair database. This is useful in the following scenarios: When a user wants to disable specific updater resources. When a user is running Red Hat Quay in a disconnected environment. For more information about running Clair in a disconnected environment, see Configuring access to the Clair database in the air-gapped OpenShift cluster . Note If you are running Red Hat Quay in a disconnected environment, the airgap parameter of your clair-config.yaml must be set to true . If you are running Red Hat Quay in a disconnected environment, you should disable all updater components. 5.4.2.1. Setting a Clair database to managed Use the following procedure to set your Clair database to managed. Procedure In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: true : apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: true 5.4.2.2. Configuring a custom Clair database with a managed Clair configuration The Red Hat Quay Operator for OpenShift Container Platform allows users to provide their own Clair database. Use the following procedure to create a custom Clair database. Procedure Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command: USD oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret Example Clair config.yaml file indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true Note The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml . It must be specified when configuring your clair-config.yaml . An example clair-config.yaml can be found at Clair on OpenShift config . Add the clair-config.yaml file to your bundle secret, for example: apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> Note When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module. You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace> . For example: Example output
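A hedged complement to the pod status check above is inspecting the Clair application logs to confirm that the custom configuration was loaded (the deployment name and quay-enterprise namespace follow the examples in this chapter and are assumptions for your environment):

USD oc -n quay-enterprise logs deployment/example-registry-clair-app --tail=50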
5.4.3. Clair in disconnected environments Clair uses a set of components called updaters to handle the fetching and parsing of data from various vulnerability databases. Updaters are set up by default to pull vulnerability data directly from the internet and work for immediate use. However, some users might require Red Hat Quay to run in a disconnected environment, or an environment without direct access to the internet. Clair supports disconnected environments by working with different types of update workflows that take network isolation into consideration. This works by using the clairctl command line interface tool, which obtains updater data from the internet by using an open host, securely transferring the data to an isolated host, and then importing the updater data into Clair on the isolated host. Use this guide to deploy Clair in a disconnected environment. Note Currently, Clair enrichment data is CVSS data. Enrichment data is currently unsupported in disconnected environments. For more information about Clair updaters, see "Clair updaters". 5.4.3.1. Setting up Clair in a disconnected OpenShift Container Platform cluster Use the following procedures to set up an OpenShift Container Platform provisioned Clair pod in a disconnected OpenShift Container Platform cluster. 5.4.3.1.1. Installing the clairctl command line utility tool for OpenShift Container Platform deployments Use the following procedure to install the clairctl CLI tool for OpenShift Container Platform deployments. Procedure Install the clairctl program for a Clair deployment in an OpenShift Container Platform cluster by entering the following command: USD oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl Note Unofficially, the clairctl tool can be downloaded Set the permissions of the clairctl file so that it can be executed and run by the user, for example: USD chmod u+x ./clairctl 5.4.3.1.2. Retrieving and decoding the Clair configuration secret for Clair deployments on OpenShift Container Platform Use the following procedure to retrieve and decode the configuration secret for an OpenShift Container Platform provisioned Clair instance on OpenShift Container Platform. Prerequisites You have installed the clairctl command line utility tool. Procedure Enter the following command to retrieve and decode the configuration secret, and then save it to a Clair configuration YAML: USD oc get secret -n quay-enterprise example-registry-clair-config-secret -o "jsonpath={USD.data['config\.yaml']}" | base64 -d > clair-config.yaml Update the clair-config.yaml file so that the disable_updaters and airgap parameters are set to true , for example: --- indexer: airgap: true --- matcher: disable_updaters: true --- 5.4.3.1.3. Exporting the updaters bundle from a connected Clair instance Use the following procedure to export the updaters bundle from a Clair instance that has access to the internet. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. Procedure From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example: USD ./clairctl --config ./config.yaml export-updaters updates.gz 5.4.3.1.4.
Configuring access to the Clair database in the disconnected OpenShift Container Platform cluster Use the following procedure to configure access to the Clair database in your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. Procedure Determine your Clair database service by using the oc CLI tool, for example: USD oc get svc -n quay-enterprise Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h ... Forward the Clair database port so that it is accessible from the local machine. For example: USD oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432 Update your Clair config.yaml file, for example: indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json 1 Replace the value of the host in the multiple connstring fields with localhost . 2 For more information about the rhel-repository-scanner parameter, see "Mapping repositories to Common Product Enumeration information". 3 For more information about the rhel_containerscanner parameter, see "Mapping repositories to Common Product Enumeration information". 5.4.3.1.5. Importing the updaters bundle into the disconnected OpenShift Container Platform cluster Use the following procedure to import the updaters bundle into your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. You have transferred the updaters bundle into your disconnected environment. Procedure Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform. For example: USD ./clairctl --config ./clair-config.yaml import-updaters updates.gz 5.4.3.2. Setting up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster Use the following procedures to set up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster. 5.4.3.2.1. Installing the clairctl command line utility tool for a self-managed Clair deployment on OpenShift Container Platform Use the following procedure to install the clairctl CLI tool for self-managed Clair deployments on OpenShift Container Platform. Procedure Install the clairctl program for a self-managed Clair deployment by using the podman cp command, for example: USD sudo podman cp clairv4:/usr/bin/clairctl ./clairctl Set the permissions of the clairctl file so that it can be executed and run by the user, for example: USD chmod u+x ./clairctl
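Taken together, the export, transfer, and import steps described above form the following hedged end-to-end cycle, which applies equally to the self-managed path that follows (the disconnected-host.example.com name and /tmp path are illustrative; scp is an assumption, any file transfer method works):

USD ./clairctl --config ./config.yaml export-updaters updates.gz
USD scp updates.gz user@disconnected-host.example.com:/tmp/updates.gz
USD ./clairctl --config ./clair-config.yaml import-updaters /tmp/updates.gz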
5.4.3.2.2. Deploying a self-managed Clair container for disconnected OpenShift Container Platform clusters Use the following procedure to deploy a self-managed Clair container for disconnected OpenShift Container Platform clusters. Prerequisites You have installed the clairctl command line utility tool. Procedure Create a folder for your Clair configuration file, for example: USD mkdir /etc/clairv4/config/ Create a Clair configuration file with the disable_updaters parameter set to true , for example: --- indexer: airgap: true --- matcher: disable_updaters: true --- Start Clair by using the container image, mounting in the configuration from the file you created: 5.4.3.2.3. Exporting the updaters bundle from a connected Clair instance Use the following procedure to export the updaters bundle from a Clair instance that has access to the internet. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. Procedure From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example: USD ./clairctl --config ./config.yaml export-updaters updates.gz 5.4.3.2.4. Configuring access to the Clair database in the disconnected OpenShift Container Platform cluster Use the following procedure to configure access to the Clair database in your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. Procedure Determine your Clair database service by using the oc CLI tool, for example: USD oc get svc -n quay-enterprise Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h ... Forward the Clair database port so that it is accessible from the local machine. For example: USD oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432 Update your Clair config.yaml file, for example: indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json 1 Replace the value of the host in the multiple connstring fields with localhost . 2 For more information about the rhel-repository-scanner parameter, see "Mapping repositories to Common Product Enumeration information". 3 For more information about the rhel_containerscanner parameter, see "Mapping repositories to Common Product Enumeration information". 5.4.3.2.5. Importing the updaters bundle into the disconnected OpenShift Container Platform cluster Use the following procedure to import the updaters bundle into your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet.
You have transferred the updaters bundle into your disconnected environment. Procedure Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform: USD ./clairctl --config ./clair-config.yaml import-updaters updates.gz 5.4.4. Mapping repositories to Common Product Enumeration information Clair's Red Hat Enterprise Linux (RHEL) scanner relies on a Common Product Enumeration (CPE) file to map RPM packages to the corresponding security data to produce matching results. These files are owned by product security and updated daily. The CPE file must be present, or access to the file must be allowed, for the scanner to properly process RPM packages. If the file is not present, RPM packages installed in the container image will not be scanned. Table 5.1. Clair CPE mapping files CPE Link to JSON mapping file repos2cpe Red Hat Repository-to-CPE JSON names2repos Red Hat Name-to-Repos JSON . In addition to uploading CVE information to the database for disconnected Clair installations, you must also make the mapping file available locally: For standalone Red Hat Quay and Clair deployments, the mapping file must be loaded into the Clair pod. For Red Hat Quay Operator deployments on OpenShift Container Platform and Clair deployments, you must set the Clair component to unmanaged . Then, Clair must be deployed manually, setting the configuration to load a local copy of the mapping file. 5.4.4.1. Mapping repositories to Common Product Enumeration example configuration Use the repo2cpe_mapping_file and name2repos_mapping_file fields in your Clair configuration to include the CPE JSON mapping files. For example: indexer: scanner: repo: rhel-repository-scanner: repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: name2repos_mapping_file: /data/repo-map.json For more information, see How to accurately match OVAL security data to installed RPMs . 5.5. Deploying Red Hat Quay on infrastructure nodes By default, Quay related pods are placed on arbitrary worker nodes when using the Red Hat Quay Operator to deploy the registry. For more information about how to use machine sets to configure nodes to only host infrastructure components, see Creating infrastructure machine sets . If you are not using OpenShift Container Platform machine set resources to deploy infra nodes, the following section shows you how to manually label and taint nodes for infrastructure purposes. After you have configured your infrastructure nodes either manually or by using machine sets, you can control the placement of Quay pods on these nodes using node selectors and tolerations. 5.5.1. Labeling and tainting nodes for infrastructure use Use the following procedure to label and taint nodes for infrastructure use. Enter the following command to reveal the master and worker nodes. In this example, there are three master nodes and six worker nodes.
USD oc get nodes Example output NAME STATUS ROLES AGE VERSION user1-jcnp6-master-0.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-master-1.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-master-2.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready worker 3h22m v1.20.0+ba45583 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 Enter the following commands to label the three worker nodes for infrastructure use: USD oc label node --overwrite user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra= USD oc label node --overwrite user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra= USD oc label node --overwrite user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra= Now, when listing the nodes in the cluster, the last three worker nodes have the infra role. For example: USD oc get nodes Example NAME STATUS ROLES AGE VERSION user1-jcnp6-master-0.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583 user1-jcnp6-master-1.c.quay-devel.internal Ready master 4h15m v1.20.0+ba45583 user1-jcnp6-master-2.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583 user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583 user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba4558 When a worker node is assigned the infra role, there is a chance that user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node, and then add tolerations for the pods that you want to control. For example: USD oc adm taint nodes user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule USD oc adm taint nodes user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule USD oc adm taint nodes user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule 5.5.2. Creating a project with node selector and tolerations Use the following procedure to create a project with node selector and tolerations. Note If you have already deployed Red Hat Quay using the Operator, remove the installed Operator and any specific namespaces that you created for the deployment. Procedure Create a project resource, specifying a node selector and toleration. 
For example: quay-registry.yaml kind: Project apiVersion: project.openshift.io/v1 metadata: name: quay-registry annotations: openshift.io/node-selector: 'node-role.kubernetes.io/infra=' scheduler.alpha.kubernetes.io/defaultTolerations: >- [{"operator": "Exists", "effect": "NoSchedule", "key": "node-role.kubernetes.io/infra"}] Enter the following command to create the project: USD oc apply -f quay-registry.yaml Example output project.project.openshift.io/quay-registry created Subsequent resources created in the quay-registry namespace should now be scheduled on the dedicated infrastructure nodes. 5.5.3. Installing the Red Hat Quay Operator in the namespace Use the following procedure to install the Red Hat Quay Operator in the namespace. To install the Red Hat Quay Operator in a specific namespace, you must explicitly specify the appropriate project namespace, as in the following command. In this example, we are using quay-registry . This results in the Operator pod landing on one of the three infrastructure nodes. For example: USD oc get pods -n quay-registry -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 30s 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal 5.5.4. Creating the Red Hat Quay registry Use the following procedure to create the Red Hat Quay registry. Enter the following command to create the Red Hat Quay registry; a minimal hedged manifest is sketched at the end of this section. Then, wait for the deployment to be marked as ready . In the following example, you should see that the pods have only been scheduled on the three nodes that you have labelled for infrastructure purposes. USD oc get pods -n quay-registry -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE example-registry-clair-app-789d6d984d-gpbwd 1/1 Running 1 5m57s 10.130.2.80 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal example-registry-clair-postgres-7c8697f5-zkzht 1/1 Running 0 4m53s 10.129.2.19 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-app-56dd755b6d-glbf7 1/1 Running 1 5m57s 10.129.2.17 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-config-editor-7bf9bccc7b-dpc6d 1/1 Running 0 5m57s 10.131.0.23 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal example-registry-quay-database-8dc7cfd69-dr2cc 1/1 Running 0 5m43s 10.129.2.18 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-mirror-78df886bcc-v75p9 1/1 Running 0 5m16s 10.131.0.24 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal example-registry-quay-postgres-init-8s8g9 0/1 Completed 0 5m54s 10.130.2.79 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal example-registry-quay-redis-5688ddcdb6-ndp4t 1/1 Running 0 5m56s 10.130.2.78 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 22m 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal
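For the registry-creation step above, a minimal hedged QuayRegistry manifest (the example-registry name and quay-registry namespace follow the examples in this section; omitting the components list is an assumption that leaves every component managed by the Operator):

USD cat <<EOF | oc apply -n quay-registry -f -
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
EOF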
5.6. Enabling monitoring when the Red Hat Quay Operator is installed in a single namespace When the Red Hat Quay Operator is installed in a single namespace, the monitoring component is set to unmanaged . To configure monitoring, you must enable it for user-defined namespaces in OpenShift Container Platform. For more information, see the OpenShift Container Platform documentation for Configuring the monitoring stack and Enabling monitoring for user-defined projects . The following sections show you how to enable monitoring for Red Hat Quay based on the OpenShift Container Platform documentation. 5.6.1. Creating a cluster monitoring config map Use the following procedure to check if the cluster-monitoring-config ConfigMap object exists. Procedure Enter the following command to check whether the cluster-monitoring-config ConfigMap object exists: USD oc -n openshift-monitoring get configmap cluster-monitoring-config Example output Error from server (NotFound): configmaps "cluster-monitoring-config" not found Optional: If the ConfigMap object does not exist, create a YAML manifest. In the following example, the file is called cluster-monitoring-config.yaml . apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | Optional: If the ConfigMap object does not exist, create the ConfigMap object: USD oc apply -f cluster-monitoring-config.yaml Example output configmap/cluster-monitoring-config created Ensure that the ConfigMap object exists by running the following command: USD oc -n openshift-monitoring get configmap cluster-monitoring-config Example output NAME DATA AGE cluster-monitoring-config 1 12s 5.6.2. Creating a user-defined workload monitoring ConfigMap object Use the following procedure to check if the user-workload-monitoring-config ConfigMap object exists. Procedure Enter the following command to check whether the user-workload-monitoring-config ConfigMap object exists: Example output Error from server (NotFound): configmaps "user-workload-monitoring-config" not found If the ConfigMap object does not exist, create a YAML manifest. In the following example, the file is called user-workload-monitoring-config.yaml . apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | Optional: Create the ConfigMap object by entering the following command: USD oc apply -f user-workload-monitoring-config.yaml Example output configmap/user-workload-monitoring-config created 5.6.3. Enabling monitoring for user-defined projects Use the following procedure to enable monitoring for user-defined projects. Procedure Enter the following command to check if monitoring for user-defined projects is running: USD oc get pods -n openshift-user-workload-monitoring Example output No resources found in openshift-user-workload-monitoring namespace. Edit the cluster-monitoring-config ConfigMap by entering the following command: Set enableUserWorkload: true in your config.yaml file to enable monitoring for user-defined projects on the cluster: apiVersion: v1 data: config.yaml: | enableUserWorkload: true kind: ConfigMap metadata: annotations: Enter the following command to save the file, apply the changes, and ensure that the appropriate pods are running: Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6f96b4b8f8-gq6rl 2/2 Running 0 15s prometheus-user-workload-0 5/5 Running 1 12s prometheus-user-workload-1 5/5 Running 1 12s thanos-ruler-user-workload-0 3/3 Running 0 8s thanos-ruler-user-workload-1 3/3 Running 0 8s 5.6.4. Creating a Service object to expose Red Hat Quay metrics Use the following procedure to create a Service object to expose Red Hat Quay metrics. Procedure Create a YAML file for the Service object: Create the Service object by entering the following command: USD oc apply -f quay-service.yaml Example output service/example-registry-quay-metrics created 5.6.5. Creating a ServiceMonitor object Use the following procedure to configure OpenShift Monitoring to scrape the metrics by creating a ServiceMonitor resource.
Procedure Create a YAML file for the ServiceMonitor resource: Create the ServiceMonitor resource by entering the following command: Example output servicemonitor.monitoring.coreos.com/example-registry-quay-metrics-monitor created 5.6.6. Viewing metrics in OpenShift Container Platform You can access the metrics in the OpenShift Container Platform console under Monitoring Metrics . In the Expression field, enter quay_ to see the list of metrics available: For example, if you have added users to your registry, select the quay-users_rows metric: 5.7. Resizing Managed Storage When deploying the Red Hat Quay Operator, three distinct persistent volume claims (PVCs) are deployed: One for the Red Hat Quay PostgreSQL 13 database. One for the Clair PostgreSQL 13 database. One that uses NooBaa as backend storage. Note The connection between Red Hat Quay and NooBaa is done through the S3 API and ObjectBucketClaim API in OpenShift Container Platform. Red Hat Quay leverages that API group to create a bucket in NooBaa, obtain access keys, and automatically set everything up. On the backend, or NooBaa, side, that bucket is created inside the backing store. As a result, NooBaa PVCs are not mounted or connected to Red Hat Quay pods. The default size for the PostgreSQL 13 and Clair PostgreSQL 13 PVCs is set to 50 GiB. You can expand storage for these PVCs on the OpenShift Container Platform console by using the following procedure. Note The following procedure shares commonality with Expanding Persistent Volume Claims on Red Hat OpenShift Data Foundation. 5.7.1. Resizing PostgreSQL 13 PVCs on Red Hat Quay Use the following procedure to resize the PostgreSQL 13 and Clair PostgreSQL 13 PVCs. Prerequisites You have cluster admin privileges on OpenShift Container Platform. Procedure Log in to the OpenShift Container Platform console and select Storage Persistent Volume Claims . Select the desired PersistentVolumeClaim for either PostgreSQL 13 or Clair PostgreSQL 13, for example, example-registry-quay-postgres-13 . From the Action menu, select Expand PVC . Enter the new size of the Persistent Volume Claim and select Expand . After a few minutes, the expanded size should be reflected in the PVC's Capacity field. A hedged CLI alternative is sketched at the end of this chapter. 5.8. Customizing Default Operator Images In certain circumstances, it might be useful to override the default images used by the Red Hat Quay Operator. This can be done by setting one or more environment variables in the Red Hat Quay Operator ClusterServiceVersion . Important Using this mechanism is not supported for production Red Hat Quay environments and is strongly encouraged only for development or testing purposes. There is no guarantee your deployment will work correctly when using non-default images with the Red Hat Quay Operator. 5.8.1. Environment Variables The following environment variables are used in the Red Hat Quay Operator to override component images: Environment Variable Component RELATED_IMAGE_COMPONENT_QUAY base RELATED_IMAGE_COMPONENT_CLAIR clair RELATED_IMAGE_COMPONENT_POSTGRES postgres and clair databases RELATED_IMAGE_COMPONENT_REDIS redis Note Overridden images must be referenced by manifest (@sha256:) and not by tag (:latest). 5.8.2. Applying overrides to a running Operator When the Red Hat Quay Operator is installed in a cluster through the Operator Lifecycle Manager (OLM) , the managed component container images can be easily overridden by modifying the ClusterServiceVersion object. Use the following procedure to apply overrides to a running Red Hat Quay Operator.
Procedure The ClusterServiceVersion object is Operator Lifecycle Manager's representation of a running Operator in the cluster. Find the Red Hat Quay Operator's ClusterServiceVersion by using a Kubernetes UI or the kubectl / oc CLI tool. For example: USD oc get clusterserviceversions -n <your-namespace> Using the UI, oc edit , or another method, modify the Red Hat Quay ClusterServiceVersion to include the environment variables outlined above to point to the override images: JSONPath : spec.install.spec.deployments[0].spec.template.spec.containers[0].env - name: RELATED_IMAGE_COMPONENT_QUAY value: quay.io/projectquay/quay@sha256:c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d - name: RELATED_IMAGE_COMPONENT_CLAIR value: quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6 - name: RELATED_IMAGE_COMPONENT_POSTGRES value: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33 - name: RELATED_IMAGE_COMPONENT_REDIS value: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542 Note This is done at the Operator level, so every QuayRegistry will be deployed using these same overrides. 5.9. AWS S3 CloudFront Use the following procedure if you are using AWS S3 CloudFront for your backend registry storage. Procedure Enter the following command to specify the registry key: USD oc create secret generic --from-file config.yaml=./config_awss3cloudfront.yaml --from-file default-cloudfront-signing-key.pem=./default-cloudfront-signing-key.pem test-config-bundle
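As the hedged CLI alternative referenced in section 5.7.1, you can patch a PVC directly instead of using the console, assuming the storage class allows volume expansion (the namespace, claim name, and 100Gi target size are illustrative assumptions):

USD oc -n quay-enterprise patch pvc example-registry-quay-postgres-13 -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'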
[ "podman pull ubuntu:20.04", "sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04", "sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: false", "oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret", "indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true", "apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> extra_ca_cert_<name>: <base64 encoded ca cert> ssl.crt: <base64 encoded SSL certificate> ssl.key: <base64 encoded SSL private key>", "oc get pods -n <namespace>", "NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: true", "oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret", "indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true", "apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config>", "oc get pods -n <namespace>", "NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s", "oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl", "chmod u+x ./clairctl", "oc get secret -n quay-enterprise example-registry-clair-config-secret -o \"jsonpath={USD.data['config\\.yaml']}\" | base64 -d > clair-config.yaml", "--- indexer: airgap: true --- matcher: disable_updaters: true ---", "./clairctl --config ./config.yaml 
export-updaters updates.gz", "oc get svc -n quay-enterprise", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h", "oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432", "indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json", "./clairctl --config ./clair-config.yaml import-updaters updates.gz", "sudo podman cp clairv4:/usr/bin/clairctl ./clairctl", "chmod u+x ./clairctl", "mkdir /etc/clairv4/config/", "--- indexer: airgap: true --- matcher: disable_updaters: true ---", "sudo podman run -it --rm --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/clairv4/config:/clair:Z registry.redhat.io/quay/clair-rhel8:v3.9.10", "./clairctl --config ./config.yaml export-updaters updates.gz", "oc get svc -n quay-enterprise", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h", "oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432", "indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json", "./clairctl --config ./clair-config.yaml import-updaters updates.gz", "indexer: scanner: repo: rhel-repository-scanner: repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: name2repos_mapping_file: /data/repo-map.json", "oc get nodes", "NAME STATUS ROLES AGE VERSION user1-jcnp6-master-0.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-master-1.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-master-2.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583 user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready worker 3h22m v1.20.0+ba45583 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583", "oc label node --overwrite user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra=", "oc label node --overwrite user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra=", "oc label node --overwrite user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra=", "oc get nodes", "NAME STATUS ROLES AGE VERSION user1-jcnp6-master-0.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583 user1-jcnp6-master-1.c.quay-devel.internal Ready master 4h15m v1.20.0+ba45583 user1-jcnp6-master-2.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583 
user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583 user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba4558", "oc adm taint nodes user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule", "kind: Project apiVersion: project.openshift.io/v1 metadata: name: quay-registry annotations: openshift.io/node-selector: 'node-role.kubernetes.io/infra=' scheduler.alpha.kubernetes.io/defaultTolerations: >- [{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"node-role.kubernetes.io/infra\"}]", "oc apply -f quay-registry.yaml", "project.project.openshift.io/quay-registry created", "oc get pods -n quay-registry -o wide", "NAME READY STATUS RESTARTS AGE IP NODE quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 30s 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal", "oc get pods -n quay-registry -o wide", "NAME READY STATUS RESTARTS AGE IP NODE example-registry-clair-app-789d6d984d-gpbwd 1/1 Running 1 5m57s 10.130.2.80 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal example-registry-clair-postgres-7c8697f5-zkzht 1/1 Running 0 4m53s 10.129.2.19 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-app-56dd755b6d-glbf7 1/1 Running 1 5m57s 10.129.2.17 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-config-editor-7bf9bccc7b-dpc6d 1/1 Running 0 5m57s 10.131.0.23 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal example-registry-quay-database-8dc7cfd69-dr2cc 1/1 Running 0 5m43s 10.129.2.18 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal example-registry-quay-mirror-78df886bcc-v75p9 1/1 Running 0 5m16s 10.131.0.24 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal example-registry-quay-postgres-init-8s8g9 0/1 Completed 0 5m54s 10.130.2.79 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal example-registry-quay-redis-5688ddcdb6-ndp4t 1/1 Running 0 5m56s 10.130.2.78 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 22m 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal", "oc -n openshift-monitoring get configmap cluster-monitoring-config", "Error from server (NotFound): configmaps \"cluster-monitoring-config\" not found", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |", "oc apply -f cluster-monitoring-config.yaml", "configmap/cluster-monitoring-config created", "oc -n openshift-monitoring get configmap cluster-monitoring-config", "NAME DATA AGE cluster-monitoring-config 1 12s", "oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config", "Error from server (NotFound): configmaps \"user-workload-monitoring-config\" not found", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: |", "oc apply -f user-workload-monitoring-config.yaml", 
"configmap/user-workload-monitoring-config created", "oc get pods -n openshift-user-workload-monitoring", "No resources found in openshift-user-workload-monitoring namespace.", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 data: config.yaml: | enableUserWorkload: true kind: ConfigMap metadata: annotations:", "oc get pods -n openshift-user-workload-monitoring", "NAME READY STATUS RESTARTS AGE prometheus-operator-6f96b4b8f8-gq6rl 2/2 Running 0 15s prometheus-user-workload-0 5/5 Running 1 12s prometheus-user-workload-1 5/5 Running 1 12s thanos-ruler-user-workload-0 3/3 Running 0 8s thanos-ruler-user-workload-1 3/3 Running 0 8s", "cat <<EOF > quay-service.yaml apiVersion: v1 kind: Service metadata: annotations: labels: quay-component: monitoring quay-operator/quayregistry: example-registry name: example-registry-quay-metrics namespace: quay-enterprise spec: ports: - name: quay-metrics port: 9091 protocol: TCP targetPort: 9091 selector: quay-component: quay-app quay-operator/quayregistry: example-registry type: ClusterIP EOF", "oc apply -f quay-service.yaml", "service/example-registry-quay-metrics created", "cat <<EOF > quay-service-monitor.yaml apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: quay-operator/quayregistry: example-registry name: example-registry-quay-metrics-monitor namespace: quay-enterprise spec: endpoints: - port: quay-metrics namespaceSelector: any: true selector: matchLabels: quay-component: monitoring EOF", "oc apply -f quay-service-monitor.yaml", "servicemonitor.monitoring.coreos.com/example-registry-quay-metrics-monitor created", "oc get clusterserviceversions -n <your-namespace>", "- name: RELATED_IMAGE_COMPONENT_QUAY value: quay.io/projectquay/quay@sha256:c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d - name: RELATED_IMAGE_COMPONENT_CLAIR value: quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6 - name: RELATED_IMAGE_COMPONENT_POSTGRES value: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33 - name: RELATED_IMAGE_COMPONENT_REDIS value: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542", "oc create secret generic --from-file config.yaml=./config_awss3cloudfront.yaml --from-file default-cloudfront-signing-key.pem=./default-cloudfront-signing-key.pem test-config-bundle" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_operator_features/clair-vulnerability-scanner
Chapter 1. Overview
Chapter 1. Overview 1.1. Major changes in RHEL 9.1 Installer and image creation Following are image builder key highlights in RHEL 9.1 GA: Image builder on-premise now supports: Uploading images to GCP Customizing the /boot partition Pushing a container image directly to a registry Users can now customize their blueprints during the image creation process. For more information, see Section 4.1, "Installer and image creation" . RHEL for Edge Following are RHEL for Edge key highlights in RHEL 9.1 GA: RHEL for Edge now supports installing the services and having them run with the default configuration by using the fdo-admin CLI utility. For more information, see Section 4.2, "RHEL for Edge" . Security RHEL 9.1 introduces Keylime , a remote machine attestation tool using the trusted platform module (TPM) technology. With Keylime, you can verify and continuously monitor the integrity of remote machines. SELinux user-space packages have been upgraded to version 3.4. The most notable changes include: Improved relabeling performance through parallel relabeling Support for SHA-256 in the semodule tool New policy utilities in the libsepol-utils package Changes in the system configuration and the clevis-luks-systemd subpackage enable the Clevis encryption client to also unlock LUKS-encrypted volumes that mount late in the boot process, without using the systemctl enable clevis-luks-askpass.path command during the deployment process. See New features - Security for more information. Shells and command-line tools RHEL 9.1 introduces a new package xmlstarlet . With XMLStarlet , you can parse, transform, query, validate, and edit XML files. The following command-line tools have been updated in RHEL 9.1: opencryptoki to version 3.18.0 powerpc-utils to version 1.3.10 libvpd to version 2.2.9 lsvpd to version 1.7.14 ppc64-diag to version 2.7.8 For more information, see New Features - Shells and command-line tools . Infrastructure services The following infrastructure services tools have been updated in RHEL 9.1: chrony to version 4.2 unbound to version 1.16.2 frr to version 8.2.2 For more information, see New Features - Infrastructure services . Networking NetworkManager supports migrating connection profiles from the deprecated ifcfg format to the keyfile format. NetworkManager now clearly indicates that WEP support is not available in RHEL 9. The MultiPath TCP (MPTCP) code in the kernel has been updated from upstream Linux 5.19. For further details, see New features - Networking . Dynamic programming languages, web and database servers Later versions of the following components are now available as new module streams: PHP 8.1 Ruby 3.1 Node.js 18 In addition, the Apache HTTP Server has been updated to version 2.4.53. See New features - Dynamic programming languages, web and database servers for more information.
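As a brief illustration of the new xmlstarlet package described above, the following sketch queries and validates an XML file; the file name and element path are hypothetical examples, not part of the release notes:
# Print the text of every <title> element in a hypothetical books.xml
xmlstarlet sel -t -v "//title" -n books.xml
# Check that the same file is well-formed XML
xmlstarlet val books.xml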
Compilers and development tools Updated system toolchain The following system toolchain components have been updated in RHEL 9.1: GCC 11.2.1 glibc 2.34 binutils 2.35.2 Updated performance tools and debuggers The following performance tools and debuggers have been updated in RHEL 9.1: GDB 10.2 Valgrind 3.19 SystemTap 4.7 Dyninst 12.1.0 elfutils 0.187 Updated performance monitoring tools The following performance monitoring tools have been updated in RHEL 9.1: PCP 5.3.7 Grafana 7.5.13 Updated compiler toolsets The following compiler toolsets have been updated in RHEL 9.1: GCC Toolset 12 LLVM Toolset 14.0.6 Rust Toolset 1.62 Go Toolset 1.18 For detailed changes, see Section 4.14, "Compilers and development tools" . Java implementations in RHEL 9 The RHEL 9 AppStream repository includes: The java-17-openjdk packages, which provide the OpenJDK 17 Java Runtime Environment and the OpenJDK 17 Java Software Development Kit. The java-11-openjdk packages, which provide the OpenJDK 11 Java Runtime Environment and the OpenJDK 11 Java Software Development Kit. The java-1.8.0-openjdk packages, which provide the OpenJDK 8 Java Runtime Environment and the OpenJDK 8 Java Software Development Kit. For more information, see OpenJDK documentation . Java tools RHEL 9.1 introduces Maven 3.8 as a new module stream. See Section 4.14, "Compilers and development tools" for more information. Identity Management Identity Management (IdM) in RHEL 9.1 introduces a Technology Preview where you can delegate user authentication to external identity providers (IdPs) that support the OAuth 2 Device Authorization Grant flow. When these users authenticate with SSSD, and after they complete authentication and authorization at the external IdP, they receive RHEL IdM single sign-on capabilities with Kerberos tickets. For more information, see Technology Previews - Identity Management Red Hat Enterprise Linux system roles Notable new features in 9.1 RHEL system roles: RHEL system roles are now available also in playbooks with fact gathering disabled. The ha_cluster role now supports SBD fencing, configuration of Corosync settings, and configuration of bundle resources. The network role now configures network settings for routing rules, supports network configuration using the nmstate API , and users can create connections with IPoIB capability. The microsoft.sql.server role has new variables, such as variables to control configuring a high availability cluster, to manage firewall ports automatically, or variables to search for mssql_tls_cert and mssql_tls_private_key values on managed nodes. The logging role supports various new options, for example startmsg.regex and endmsg.regex in files inputs, or template , severity and facility options. The storage role now includes support for thinly provisioned volumes, and the role now also has less verbosity by default. The sshd role verifies the include directive for the drop-in directory, and the role can now be managed through /etc/ssh/sshd_config. The metrics role can now export postfix performance data. The postfix role now has a new option for overwriting configuration. The firewall role does not require the state parameter when configuring masquerade or icmp_block_inversion. In the firewall role, you can now add, update, or remove services using absent and present states. The role can also provide Ansible facts, and add or remove an interface to the zone using PCI device ID. The firewall role has a new option for overwriting configuration. 
The selinux role now includes setting of seuser and selevel parameters. 1.2. In-place upgrade In-place upgrade from RHEL 8 to RHEL 9 The supported in-place upgrade paths currently are: From RHEL 8.6 to RHEL 9.0 on the following architectures: 64-bit Intel 64-bit AMD 64-bit ARM IBM POWER 9 (little endian) IBM Z architectures, excluding z13 From RHEL 8.6 to RHEL 9.0 on systems with SAP HANA To ensure your system remains supported after upgrading to RHEL 9.0, either update to the latest RHEL 9.1 version or enable the RHEL 9.0 Extended Update Support (EUS) repositories. For instructions on performing an in-place upgrade, see Upgrading from RHEL 8 to RHEL 9 . For instructions on performing an in-place upgrade on systems with SAP environments, see How to in-place upgrade SAP environments from RHEL 8 to RHEL 9 . Notable enhancements include: In-place upgrades on Microsoft Azure and Google Cloud Platform with Red Hat Update Infrastructure (RHUI) are now possible. The OpenSSH and OpenSSL configurations are now migrated during the in-place upgrade. In-place upgrade from RHEL 7 to RHEL 9 It is not possible to perform an in-place upgrade directly from RHEL 7 to RHEL 9. However, you can perform an in-place upgrade from RHEL 7 to RHEL 8 and then perform a second in-place upgrade to RHEL 9. For more information, see Upgrading from RHEL 7 to RHEL 8 . 1.3. Red Hat Customer Portal Labs Red Hat Customer Portal Labs is a set of tools in a section of the Customer Portal available at https://access.redhat.com/labs/ . The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and quickly deploy and configure complex applications. Some of the most popular applications are: Registration Assistant Kickstart Generator Red Hat Product Certificates Red Hat CVE Checker Kernel Oops Analyzer Red Hat Code Browser VNC Configurator Red Hat OpenShift Container Platform Update Graph Red Hat Satellite Upgrade Helper JVM Options Configuration Tool Load Balancer Configuration Tool Red Hat OpenShift Data Foundation Supportability and Interoperability Checker Ansible Automation Platform Upgrade Assistant Ceph Placement Groups (PGs) per Pool Calculator Red Hat Out of Memory Analyzer 1.4. Additional resources Capabilities and limits of Red Hat Enterprise Linux 9 as compared to other versions of the system are available in the Knowledgebase article Red Hat Enterprise Linux technology capabilities and limits . Information regarding the Red Hat Enterprise Linux life cycle is provided in the Red Hat Enterprise Linux Life Cycle document. The Package manifest document provides a package listing for RHEL 9, including licenses and application compatibility levels. Application compatibility levels are explained in the Red Hat Enterprise Linux 9: Application Compatibility Guide document. Major differences between RHEL 8 and RHEL 9 , including removed functionality, are documented in Considerations in adopting RHEL 9 . Instructions on how to perform an in-place upgrade from RHEL 8 to RHEL 9 are provided by the document Upgrading from RHEL 8 to RHEL 9 . The Red Hat Insights service, which enables you to proactively identify, examine, and resolve known technical issues, is available with all RHEL subscriptions. For instructions on how to install the Red Hat Insights client and register your system to the service, see the Red Hat Insights Get Started page.
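As a practical complement to the in-place upgrade paths described above, the upgrade itself is driven by the Leapp utility covered in Upgrading from RHEL 8 to RHEL 9. The following is a minimal sketch of the flow on a registered RHEL 8.6 system; repository setup and pre-upgrade remediation steps are omitted:
# Install the upgrade tooling, analyze the system, then upgrade
dnf install leapp-upgrade
leapp preupgrade   # review /var/log/leapp/leapp-report.txt and resolve inhibitors
leapp upgrade
reboot             # the system reboots into the upgrade environment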
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.1_release_notes/overview
Chapter 10. IdM integration with Red Hat products
Chapter 10. IdM integration with Red Hat products Find documentation for other Red Hat products that integrate with IdM. You can configure these products to allow your IdM users to access their services. Ansible Automation Platform OpenShift Container Platform Red Hat OpenStack Platform Red Hat Satellite Red Hat Single Sign-On Red Hat Virtualization
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/planning_identity_management/ref_idm-integration-with-other-red-hat-products_planning-identity-management
Chapter 11. Networking
Chapter 11. Networking i40e and i40evf now fully supported The i40e and i40evf kernel drivers have been updated to versions 1.3.21-k and 1.3.13. These updated drivers, which were previously included as a Technology Preview, are now fully supported. Note that you need to apply the i40e Driver Update Program (DUP) for Red Hat Enterprise Linux 7.2 available at https://rhn.redhat.com/errata/RHEA-2016-0464.html . For more information, see the Knowledgebase article available at https://access.redhat.com/articles/1400943 . Previously, an attempt to run iSCSI-related commands on i40e ports led to loss of network connectivity on those ports. This update fixes the bug, and the system now allows iSCSI commands to proceed. SNMP now correctly obeys the clientaddr directive over IPv6 Previously, the clientaddr option in snmp.conf only affected outgoing messages sent over IPv4. With this release, outgoing IPv6 messages are correctly sent from the interface specified by clientaddr . tcpdump supports -J, -j, and --time-stamp-precision options As kernel , glibc , and libpcap now provide APIs to obtain nanosecond-resolution time stamps, tcpdump has been updated to leverage this functionality. Users can now query which time stamp sources are available (-J), set a specific time stamp source (-j), and request time stamps with a specified resolution (--time-stamp-precision). TCP/IP rebase to version 3.18 The TCP/IP stack has been upgraded to upstream version 3.18, which provides a number of bug fixes and enhancements over the previous version. Notably, this update fixes the TCP Fast Open extension, which now works as expected when using IPv6. In addition, this update provides support for optional TCP autocorking and implements Data Center TCP (DCTCP). NetworkManager libreswan rebase to version 1.0.6 A number of bug fixes and enhancements have been incorporated from upstream, for example: * Password handling is now more robust * Connection start and stop is now more robust * Default routing is now autodetected from pushed routes * Added support for interactive password requests * Fixed erroneous import and export capability advertisement. NetworkManager now supports setting the MTU of a bonded interface Both 'nmcli' and the GUI interface now allow the setting of MTU on a bonded interface. NetworkManager now validates IPv6 Router Advertisement MTU options before applying them Malicious or misconfigured nodes could send an IPv6 MTU that would make further network communication problematic or impossible if applied. NetworkManager now gracefully handles these events and maintains IPv6 connectivity. IPv6 Privacy extensions now enabled by default To determine and set IPv6 privacy settings at device activation, NetworkManager now checks its network configuration in NetworkManager.conf by default, and falls back to /proc/sys/net/ipv6/conf/default/use_tempaddr if necessary. The control-center Network Panel now displays WiFi device capabilities Supported operating frequencies of WiFi devices are now displayed in the control-center network panel. NetworkManager now gracefully handles route conflicts when multiple interfaces point to the same gateway NetworkManager now keeps track of configured routes and avoids attempts to set conflicting routes. When a conflicting route is no longer active, it is removed. Fix for network blackout with multihomed connections NetworkManager now avoids a network blackout when activating the second device in a multihomed connection.
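As a short sketch of the tcpdump time-stamping options described above (the interface name is a hypothetical example):
# List the time stamp sources available for the interface
tcpdump -i eth0 -J
# Capture using a specific source and request nanosecond resolution
tcpdump -i eth0 -j adapter_unsynced --time-stamp-precision=nano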
New option to prevent NetworkManager from overriding ip route add The new 'never-default' option has been added to the connection IP configuration. This option prevents NetworkManager from setting the default route itself, allowing the administrator to set different default routes as required. Fix for legacy network.service errors when Carrier Down is detected on some hardware When a device has no carrier during boot, NetworkManager will wait for the carrier to be detected instead of causing activation to fail immediately. NetworkManager now supports Wake On Lan The nmcli utility now allows Wake on Lan to be set on a per-device basis. Improved support for firewalld zones with VPN connections When a firewall zone is configured for a device-based VPN connection, the zone is now correctly configured in firewalld. Fair Queue packet scheduler now supported The Fair Queue packet scheduler, known as fq , has been added to Red Hat Enterprise Linux 7.2 and can be selected using the tc (traffic control) utility. Added support for transmit coalescing The xmit_more extension has been implemented, improving transmit performance of virtio-net and other drivers, especially when TSO (TCP Segmentation Offload) is disabled. Improved network frame receiving performance By refactoring the code to eliminate IRQ save and restore operations in NAPI memory allocation, latency when receiving network frames has been reduced. Significantly improved performance of route lookups The IPv4 FIB (Forward Information Base) code has been updated from upstream to improve performance. Network Namespace support for Virtual Interfaces The netns id is now supported on virtual interfaces, allowing reliable tracking of linked network interfaces across network namespace boundaries. Docker and LXC containers can now read net.ipv4.ip_local_port_range Network namespace support for the net.ipv4.ip_local_port_range sysctl has been added, improving container support for software that requires access to this information. Improved reporting of autoconfigured IPv6 routes by the 'ip' tool Previously, the ip tool could not get the mtu or hoplimit information from a Router Advertisement; this has been fixed. Dual-stack socket options are now correctly exported AF_INET6 sockets are only exclusive to IPv6 when IPV6_V6ONLY is set. In all other cases the socket is also IPv4 capable. This information is now properly exported and can be interrogated using iproute2. Data Center TCP Now Supported This release includes an implementation of DCTCP to improve network performance in Data Center environments. The dctcp algorithm can be set either in sysctl or on a per-route basis with ip route . Per Route Congestion Control To enable different congestion control algorithms on a per-route basis, the congctl parameter has been added to ip route . Improved Congestion Window handling for TCP Cubic and Reno when using GRO The method to determine bandwidth and congestion window sizing has been improved, reducing the number of ACK packets required for transmission of large volumes of data. TCP Pacing is now supported The parameter SO_MAX_PACING_RATE has been added. This enables greater control of the throughput rate for environments where this is a consideration. Support for both client and server TFO The TCP Fast Open feature has been added, using the RFC 7413 assigned option number. Mitigation of TCP ACK loops Handling of duplicated TCP ACKs has been improved, preventing some problems with buggy or potentially malicious middleboxes.
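The following sketch shows the Fair Queue scheduler and per-route congestion control features described above in use; the device name and addresses are hypothetical examples:
# Attach the fq packet scheduler to a network device
tc qdisc add dev eth0 root fq
# Use DCTCP only for traffic to a specific destination network
ip route add 192.0.2.0/24 via 198.51.100.1 congctl dctcp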
Minimal support for secondary endpoints with nf_conntrack_proto_sctp Basic multihoming support has been added to SCTP. AF_UNIX implementation rebased The AF_UNIX (sometimes called AF_LOCAL) code has been updated to include many fixes and enhancements. In particular, sendpage and splice (also known as zerocopy) are now supported. Kernel tunneling support rebased to upstream The kernel tunneling drivers have been updated from kernel 4, bringing in many fixes and enhancements, especially for VXLAN. Added support for crossing network namespaces to GRE Both gre and ip6gre now have support for x-netns. Improved performance when running Virtual Machine Traffic over VXLAN The transmit flow hashing code has been updated, resulting in improved performance when traffic originating from a virtual machine is directed into a tunnel. Improved offloading for VLAN frames received in a VXLAN or from GRE tunnels A number of changes have been introduced to enable GRO support and improve performance under VXLAN and NVGRE tunneling. Improved performance of Open vSwitch tunneling The tx-nocache-copy device feature is now disabled by default. The default created a significant overhead for many workloads and particularly for OVS tunnels running over a VXLAN. Improved IPsec Handling IPsec has been updated to provide many fixes and some enhancements. Of particular note is that this release now provides the ability to match on outgoing interfaces. Inclusion of VTI6 support including netns capabilities Virtual Tunnel Interfaces for IPv6, including netns capabilities, have been added to the kernel. Default value of nf_conntrack_buckets increased If not specified as parameter during module loading, the default number of buckets is calculated through dividing total memory by 16384 to determine the number of buckets. The hash table will never have fewer than 32 and is limited to 16384 buckets. For systems with more than 4GB of memory however, this limit will be 65536 buckets. Improvements in memory usage for iptables on large SMP machines Previously, large iptables rulesets could use significant amounts of memory unnecessarily, this was due to storing the ruleset on a per (possible) CPU basis. The memory overhead has been reduced by changing the way rulesets are stored. Network bonding driver updated To improve maintainability, the kernel network bonding driver has been updated to bring it in line with upstream source. Kernel netlink interfaces for bonding and 802.3ad (LACP) Additional netlink interfaces for reading and setting bonding parameters on LACP devices have been added to the kernel. Improvements in performance for mactap and macvtap with VLANs Several low throughput issues involving segmentation problems have been addressed: * Communicating with e1000 devices to virtio devices over mactap. * Communicating with an external host when using VLANs in the guest. * Communicating with the KVM host over a VLAN in both the guest and host. Improved ethtool network querying The network-querying capabilities of the ethtool utility were enhanced in a Technology Preview for Red Hat Enterprise Linux 7.1 on IBM System z and are fully supported as of Red Hat Enterprise Linux 7.2. As a result, when using hardware compatible with the improved querying, ethtool provides improved monitoring options, and displays network card settings and values more accurately.
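As a worked example of the nf_conntrack_buckets default described above: a system with 2 GB of memory yields 2147483648 / 16384 = 131072, which is then clamped to the 16384-bucket limit, while a system with more than 4 GB is clamped to 65536 buckets instead. The value chosen by the kernel can be inspected at run time:
# Show the effective conntrack hash table size
cat /sys/module/nf_conntrack/parameters/hashsize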
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/networking
22.2. Types
22.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. The following types are used with rsync . The different types allow you to configure flexible access: public_content_t This is a generic type used for the location of files (and the actual files) to be shared using rsync . If a special directory is created to house files to be shared with rsync , the directory and its contents need to have this label applied to them. rsync_exec_t This type is used for the /usr/bin/rsync system binary. rsync_log_t This type is used for the rsync log file, located at /var/log/rsync.log by default. To change the location of the file rsync logs to, use the --log-file=FILE option to the rsync command at run-time. rsync_var_run_t This type is used for the rsyncd lock file, located at /var/run/rsyncd.lock . This lock file is used by the rsync server to manage connection limits. rsync_data_t This type is used for files and directories which you want to use as rsync domains and isolate them from the access scope of other services. In contrast, public_content_t is a general SELinux context type that can be used when a file or directory interacts with multiple services (for example, a directory that serves as an rsync domain and is also shared over FTP and NFS). rsync_etc_t This type is used for rsync-related files in the /etc directory.
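For example, to share a dedicated directory through rsync , apply the public_content_t type described above. This is a minimal sketch assuming a hypothetical /var/rsync directory; the semanage utility is provided by the policycoreutils-python package:
# Persistently map the directory (and its contents) to public_content_t
semanage fcontext -a -t public_content_t "/var/rsync(/.*)?"
# Apply the new label to the existing files
restorecon -R -v /var/rsync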
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-rsync-types
Server Administration Guide
Server Administration Guide Red Hat build of Keycloak 22.0 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_administration_guide/index
Chapter 7. Expanding the cluster
Chapter 7. Expanding the cluster After deploying an installer-provisioned OpenShift Container Platform cluster, you can use the following procedures to expand the number of worker nodes. Ensure that each prospective worker node meets the prerequisites. Note Expanding the cluster using RedFish Virtual Media involves meeting minimum firmware requirements. See Firmware requirements for installing with virtual media in the Prerequisites section for additional details when expanding the cluster using RedFish Virtual Media. 7.1. Preparing the bare metal node To expand your cluster, you must provide the node with the relevant IP address. This can be done with a static configuration, or with a DHCP (Dynamic Host Configuration Protocol) server. When expanding the cluster using a DHCP server, each node must have a DHCP reservation. Reserving IP addresses so they become static IP addresses Some administrators prefer to use static IP addresses so that each node's IP address remains constant in the absence of a DHCP server. To configure static IP addresses with NMState, see "Optional: Configuring host network interfaces in the install-config.yaml file" in the "Setting up the environment for an OpenShift installation" section for additional details. Preparing the bare metal node requires executing the following procedure from the provisioner node. Procedure Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux-USDVERSION.tar.gz | tar zxvf - oc USD sudo cp oc /usr/local/bin Power off the bare metal node by using the baseboard management controller (BMC), and ensure it is off. Retrieve the user name and password of the bare metal node's baseboard management controller. Then, create base64 strings from the user name and password: USD echo -ne "root" | base64 USD echo -ne "password" | base64 Create a configuration file for the bare metal node. Depending on whether you are using a static configuration or a DHCP server, use one of the following example bmh.yaml files, replacing values in the YAML to match your environment: USD vim bmh.yaml Static configuration bmh.yaml : --- apiVersion: v1 1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 2 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 3 interfaces: 4 - name: <nic1_name> 5 type: ethernet state: up ipv4: address: - ip: <ip_address> 6 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 7 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 8 next-hop-interface: <next_hop_nic1_name> 9 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 10 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 11 password: <base64_of_pwd> 12 --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> 13 namespace: openshift-machine-api spec: online: True bootMACAddress: <nic1_mac_address> 14 bmc: address: <protocol>://<bmc_url> 15 credentialsName: openshift-worker-<num>-bmc-secret 16 disableCertificateVerification: True 17 username: <bmc_username> 18 password: <bmc_password> 19 rootDeviceHints: deviceName: <root_device_hint> 20 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 21 1 To configure the network interface for a newly created node, specify the name of the secret that contains the network configuration. Follow the nmstate syntax to define the network configuration for your node.
See "Optional: Configuring host network interfaces in the install-config.yaml file" for details on configuring NMState syntax. 2 10 13 16 Replace <num> for the worker number of the bare metal node in the name fields, the credentialsName field, and the preprovisioningNetworkDataName field. 3 Add the NMState YAML syntax to configure the host interfaces. 4 Optional: If you have configured the network interface with nmstate , and you want to disable an interface, set state: up with the IP addresses set to enabled: false as shown: --- interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false 5 6 7 8 9 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. 11 12 Replace <base64_of_uid> and <base64_of_pwd> with the base64 string of the user name and password. 14 Replace <nic1_mac_address> with the MAC address of the bare metal node's first NIC. See the "BMC addressing" section for additional BMC configuration options. 15 Replace <protocol> with the BMC protocol, such as IPMI, RedFish, or others. Replace <bmc_url> with the URL of the bare metal node's baseboard management controller. 17 To skip certificate validation, set disableCertificateVerification to true. 18 19 Replace <bmc_username> and <bmc_password> with the string of the BMC user name and password. 20 Optional: Replace <root_device_hint> with a device path if you specify a root device hint. 21 Optional: If you have configured the network interface for the newly created node, provide the network configuration secret name in the preprovisioningNetworkDataName of the BareMetalHost CR. DHCP configuration bmh.yaml : --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> 4 namespace: openshift-machine-api spec: online: True bootMACAddress: <nic1_mac_address> 5 bmc: address: <protocol>://<bmc_url> 6 credentialsName: openshift-worker-<num>-bmc-secret 7 disableCertificateVerification: True 8 username: <bmc_username> 9 password: <bmc_password> 10 rootDeviceHints: deviceName: <root_device_hint> 11 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 12 1 4 7 Replace <num> for the worker number of the bare metal node in the name fields, the credentialsName field, and the preprovisioningNetworkDataName field. 2 3 Replace <base64_of_uid> and <base64_of_pwd> with the base64 string of the user name and password. 5 Replace <nic1_mac_address> with the MAC address of the bare metal node's first NIC. See the "BMC addressing" section for additional BMC configuration options. 6 Replace <protocol> with the BMC protocol, such as IPMI, RedFish, or others. Replace <bmc_url> with the URL of the bare metal node's baseboard management controller. 8 To skip certificate validation, set disableCertificateVerification to true. 9 10 Replace <bmc_username> and <bmc_password> with the string of the BMC user name and password. 11 Optional: Replace <root_device_hint> with a device path if you specify a root device hint. 12 Optional: If you have configured the network interface for the newly created node, provide the network configuration secret name in the preprovisioningNetworkDataName of the BareMetalHost CR. 
Note If the MAC address of an existing bare metal node matches the MAC address of a bare metal host that you are attempting to provision, then the Ironic installation will fail. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. See "Diagnosing a host duplicate MAC address" for more information. Create the bare metal node: USD oc -n openshift-machine-api create -f bmh.yaml Example output secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created Where <num> will be the worker number. Power up and inspect the bare metal node: USD oc -n openshift-machine-api get bmh openshift-worker-<num> Where <num> is the worker node number. Example output NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> available true Note To allow the worker node to join the cluster, scale the machineset object to the number of the BareMetalHost objects. You can scale nodes either manually or automatically. To scale nodes automatically, use the metal3.io/autoscale-to-hosts annotation for machineset . Additional resources See Optional: Configuring host network interfaces in the install-config.yaml file for details on configuring the NMState syntax. See Automatically scaling machines to the number of available bare metal hosts for details on automatically scaling machines. 7.2. Replacing a bare-metal control plane node Use the following procedure to replace an installer-provisioned OpenShift Container Platform control plane node. Important If you reuse the BareMetalHost object definition from an existing control plane host, do not leave the externallyProvisioned field set to true . Existing control plane BareMetalHost objects may have the externallyProvisioned flag set to true if they were provisioned by the OpenShift Container Platform installation program. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important Take an etcd backup before performing this procedure so that you can restore your cluster if you encounter any issues. For more information about taking an etcd backup, see the Additional resources section. Procedure Ensure that the Bare Metal Operator is available: USD oc get clusteroperator baremetal Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.16 True False False 3d15h Remove the old BareMetalHost and Machine objects: USD oc delete bmh -n openshift-machine-api <host_name> USD oc delete machine -n openshift-machine-api <machine_name> Replace <host_name> with the name of the host and <machine_name> with the name of the machine. The machine name appears under the CONSUMER field. After you remove the BareMetalHost and Machine objects, then the machine controller automatically deletes the Node object. 
Create the new BareMetalHost object and the secret to store the BMC credentials: USD cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: control-plane-<num>-bmc-secret 1 namespace: openshift-machine-api data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: control-plane-<num> 4 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: <protocol>://<bmc_ip> 5 credentialsName: control-plane-<num>-bmc-secret 6 bootMACAddress: <NIC1_mac_address> 7 bootMode: UEFI externallyProvisioned: false online: true EOF 1 4 6 Replace <num> for the control plane number of the bare metal node in the name fields and the credentialsName field. 2 Replace <base64_of_uid> with the base64 string of the user name. 3 Replace <base64_of_pwd> with the base64 string of the password. 5 Replace <protocol> with the BMC protocol, such as redfish , redfish-virtualmedia , idrac-virtualmedia , or others. Replace <bmc_ip> with the IP address of the bare metal node's baseboard management controller. For additional BMC configuration options, see "BMC addressing" in the Additional resources section. 7 Replace <NIC1_mac_address> with the MAC address of the bare metal node's first NIC. After the inspection is complete, the BareMetalHost object is created and available to be provisioned. View available BareMetalHost objects: USD oc get bmh -n openshift-machine-api Example output NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com available control-plane-1 true 1h10m control-plane-2.example.com externally provisioned control-plane-2 true 4h53m control-plane-3.example.com externally provisioned control-plane-3 true 4h53m compute-1.example.com provisioned compute-1-ktmmx true 4h53m compute-1.example.com provisioned compute-2-l2zmb true 4h53m There are no MachineSet objects for control plane nodes, so you must create a Machine object instead. You can copy the providerSpec from another control plane Machine object. Create a Machine object: USD cat <<EOF | oc apply -f - apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: metal3.io/BareMetalHost: openshift-machine-api/control-plane-<num> 1 labels: machine.openshift.io/cluster-api-cluster: control-plane-<num> 2 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: control-plane-<num> 3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: "" url: "" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed EOF 1 2 3 Replace <num> for the control plane number of the bare metal node in the name , labels and annotations fields. 
To view the BareMetalHost objects, run the following command: USD oc get bmh -A Example output NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com provisioned control-plane-1 true 2h53m control-plane-2.example.com externally provisioned control-plane-2 true 5h53m control-plane-3.example.com externally provisioned control-plane-3 true 5h53m compute-1.example.com provisioned compute-1-ktmmx true 5h53m compute-2.example.com provisioned compute-2-l2zmb true 5h53m After the RHCOS installation, verify that the BareMetalHost is added to the cluster: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com available master 4m2s v1.29.4 control-plane-2.example.com available master 141m v1.29.4 control-plane-3.example.com available master 141m v1.29.4 compute-1.example.com available worker 87m v1.29.4 compute-2.example.com available worker 87m v1.29.4 Note After replacement of the new control plane node, the etcd pod running in the new node is in crashloopback status. See "Replacing an unhealthy etcd member" in the Additional resources section for more information. Additional resources Replacing an unhealthy etcd member Backing up etcd Bare metal configuration BMC addressing 7.3. Preparing to deploy with Virtual Media on the baremetal network If the provisioning network is enabled and you want to expand the cluster using Virtual Media on the baremetal network, use the following procedure. Prerequisites There is an existing cluster with a baremetal network and a provisioning network. Procedure Edit the provisioning custom resource (CR) to enable deploying with Virtual Media on the baremetal network: oc edit provisioning apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: creationTimestamp: "2021-08-05T18:51:50Z" finalizers: - provisioning.metal3.io generation: 8 name: provisioning-configuration resourceVersion: "551591" uid: f76e956f-24c6-4361-aa5b-feaf72c5b526 spec: provisioningDHCPRange: 172.22.0.10,172.22.0.254 provisioningIP: 172.22.0.3 provisioningInterface: enp1s0 provisioningNetwork: Managed provisioningNetworkCIDR: 172.22.0.0/24 virtualMediaViaExternalNetwork: true 1 status: generations: - group: apps hash: "" lastGeneration: 7 name: metal3 namespace: openshift-machine-api resource: deployments - group: apps hash: "" lastGeneration: 1 name: metal3-image-cache namespace: openshift-machine-api resource: daemonsets observedGeneration: 8 readyReplicas: 0 1 Add virtualMediaViaExternalNetwork: true to the provisioning CR. If the image URL exists, edit the machineset to use the API VIP address. This step only applies to clusters installed in versions 4.9 or earlier. 
oc edit machineset apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: "2021-08-05T18:51:52Z" generation: 11 labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: ostest-hwmdt-worker-0 namespace: openshift-machine-api resourceVersion: "551513" uid: fad1c6e0-b9da-4d4a-8d73-286f78788931 spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 1 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 2 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data status: availableReplicas: 2 fullyLabeledReplicas: 2 observedGeneration: 11 readyReplicas: 2 replicas: 2 1 Edit the checksum URL to use the API VIP address. 2 Edit the url URL to use the API VIP address. 7.4. Diagnosing a duplicate MAC address when provisioning a new host in the cluster If the MAC address of an existing bare-metal node in the cluster matches the MAC address of a bare-metal host you are attempting to add to the cluster, the Bare Metal Operator associates the host with the existing node. If the host enrollment, inspection, cleaning, or other Ironic steps fail, the Bare Metal Operator retries the installation continuously. A registration error is displayed for the failed bare-metal host. You can diagnose a duplicate MAC address by examining the bare-metal hosts that are running in the openshift-machine-api namespace. Prerequisites Install an OpenShift Container Platform cluster on bare metal. Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Procedure To determine whether a bare-metal host that fails provisioning has the same MAC address as an existing node, do the following: Get the bare-metal hosts running in the openshift-machine-api namespace: USD oc get bmh -n openshift-machine-api Example output NAME STATUS PROVISIONING STATUS CONSUMER openshift-master-0 OK externally provisioned openshift-zpwpq-master-0 openshift-master-1 OK externally provisioned openshift-zpwpq-master-1 openshift-master-2 OK externally provisioned openshift-zpwpq-master-2 openshift-worker-0 OK provisioned openshift-zpwpq-worker-0-lv84n openshift-worker-1 OK provisioned openshift-zpwpq-worker-0-zd8lm openshift-worker-2 error registering To see more detailed information about the status of the failing host, run the following command, replacing <bare_metal_host_name> with the name of the host: USD oc get -n openshift-machine-api bmh <bare_metal_host_name> -o yaml Example output ... status: errorCount: 12 errorMessage: MAC address b4:96:91:1d:7c:20 conflicts with existing node openshift-worker-1 errorType: registration error ... 7.5. Provisioning the bare metal node Provisioning the bare metal node requires executing the following procedure from the provisioner node.
Procedure Ensure the STATE is available before provisioning the bare metal node. USD oc -n openshift-machine-api get bmh openshift-worker-<num> Where <num> is the worker node number. NAME STATE ONLINE ERROR AGE openshift-worker available true 34h Get a count of the number of worker nodes. USD oc get nodes NAME STATUS ROLES AGE VERSION openshift-master-1.openshift.example.com Ready master 30h v1.29.4 openshift-master-2.openshift.example.com Ready master 30h v1.29.4 openshift-master-3.openshift.example.com Ready master 30h v1.29.4 openshift-worker-0.openshift.example.com Ready worker 30h v1.29.4 openshift-worker-1.openshift.example.com Ready worker 30h v1.29.4 Get the compute machine set. USD oc get machinesets -n openshift-machine-api NAME DESIRED CURRENT READY AVAILABLE AGE ... openshift-worker-0.example.com 1 1 1 1 55m openshift-worker-1.example.com 1 1 1 1 55m Increase the number of worker nodes by one. USD oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api Replace <num> with the new number of worker nodes. Replace <machineset> with the name of the compute machine set from the step. Check the status of the bare metal node. USD oc -n openshift-machine-api get bmh openshift-worker-<num> Where <num> is the worker node number. The STATE changes from ready to provisioning . NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioning openshift-worker-<num>-65tjz true The provisioning status remains until the OpenShift Container Platform cluster provisions the node. This can take 30 minutes or more. After the node is provisioned, the state will change to provisioned . NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned openshift-worker-<num>-65tjz true After provisioning completes, ensure the bare metal node is ready. USD oc get nodes NAME STATUS ROLES AGE VERSION openshift-master-1.openshift.example.com Ready master 30h v1.29.4 openshift-master-2.openshift.example.com Ready master 30h v1.29.4 openshift-master-3.openshift.example.com Ready master 30h v1.29.4 openshift-worker-0.openshift.example.com Ready worker 30h v1.29.4 openshift-worker-1.openshift.example.com Ready worker 30h v1.29.4 openshift-worker-<num>.openshift.example.com Ready worker 3m27s v1.29.4 You can also check the kubelet. USD ssh openshift-worker-<num> [kni@openshift-worker-<num>]USD journalctl -fu kubelet
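As noted in the section "Preparing the bare metal node", scaling can also be automated with the metal3.io/autoscale-to-hosts annotation instead of running oc scale manually. A minimal sketch, with the machine set name as a placeholder:
# Scale the machine set automatically to the number of available BareMetalHost objects
oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>'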
[ "curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux-USDVERSION.tar.gz | tar zxvf - oc", "sudo cp oc /usr/local/bin", "echo -ne \"root\" | base64", "echo -ne \"password\" | base64", "vim bmh.yaml", "--- apiVersion: v1 1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 2 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 3 interfaces: 4 - name: <nic1_name> 5 type: ethernet state: up ipv4: address: - ip: <ip_address> 6 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 7 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 8 next-hop-interface: <next_hop_nic1_name> 9 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 10 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 11 password: <base64_of_pwd> 12 --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> 13 namespace: openshift-machine-api spec: online: True bootMACAddress: <nic1_mac_address> 14 bmc: address: <protocol>://<bmc_url> 15 credentialsName: openshift-worker-<num>-bmc-secret 16 disableCertificateVerification: True 17 username: <bmc_username> 18 password: <bmc_password> 19 rootDeviceHints: deviceName: <root_device_hint> 20 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 21", "--- interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false", "--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> 4 namespace: openshift-machine-api spec: online: True bootMACAddress: <nic1_mac_address> 5 bmc: address: <protocol>://<bmc_url> 6 credentialsName: openshift-worker-<num>-bmc-secret 7 disableCertificateVerification: True 8 username: <bmc_username> 9 password: <bmc_password> 10 rootDeviceHints: deviceName: <root_device_hint> 11 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret 12", "oc -n openshift-machine-api create -f bmh.yaml", "secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created", "oc -n openshift-machine-api get bmh openshift-worker-<num>", "NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> available true", "oc get clusteroperator baremetal", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.16 True False False 3d15h", "oc delete bmh -n openshift-machine-api <host_name> oc delete machine -n openshift-machine-api <machine_name>", "cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: control-plane-<num>-bmc-secret 1 namespace: openshift-machine-api data: username: <base64_of_uid> 2 password: <base64_of_pwd> 3 type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: control-plane-<num> 4 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: <protocol>://<bmc_ip> 5 credentialsName: control-plane-<num>-bmc-secret 6 bootMACAddress: <NIC1_mac_address> 7 bootMode: UEFI externallyProvisioned: false online: true EOF", "oc get bmh -n openshift-machine-api", "NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com available control-plane-1 true 1h10m 
control-plane-2.example.com externally provisioned control-plane-2 true 4h53m control-plane-3.example.com externally provisioned control-plane-3 true 4h53m compute-1.example.com provisioned compute-1-ktmmx true 4h53m compute-1.example.com provisioned compute-2-l2zmb true 4h53m", "cat <<EOF | oc apply -f - apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: metal3.io/BareMetalHost: openshift-machine-api/control-plane-<num> 1 labels: machine.openshift.io/cluster-api-cluster: control-plane-<num> 2 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: control-plane-<num> 3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: \"\" url: \"\" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed EOF", "oc get bmh -A", "NAME STATE CONSUMER ONLINE ERROR AGE control-plane-1.example.com provisioned control-plane-1 true 2h53m control-plane-2.example.com externally provisioned control-plane-2 true 5h53m control-plane-3.example.com externally provisioned control-plane-3 true 5h53m compute-1.example.com provisioned compute-1-ktmmx true 5h53m compute-2.example.com provisioned compute-2-l2zmb true 5h53m", "oc get nodes", "NAME STATUS ROLES AGE VERSION control-plane-1.example.com available master 4m2s v1.29.4 control-plane-2.example.com available master 141m v1.29.4 control-plane-3.example.com available master 141m v1.29.4 compute-1.example.com available worker 87m v1.29.4 compute-2.example.com available worker 87m v1.29.4", "edit provisioning", "apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: creationTimestamp: \"2021-08-05T18:51:50Z\" finalizers: - provisioning.metal3.io generation: 8 name: provisioning-configuration resourceVersion: \"551591\" uid: f76e956f-24c6-4361-aa5b-feaf72c5b526 spec: provisioningDHCPRange: 172.22.0.10,172.22.0.254 provisioningIP: 172.22.0.3 provisioningInterface: enp1s0 provisioningNetwork: Managed provisioningNetworkCIDR: 172.22.0.0/24 virtualMediaViaExternalNetwork: true 1 status: generations: - group: apps hash: \"\" lastGeneration: 7 name: metal3 namespace: openshift-machine-api resource: deployments - group: apps hash: \"\" lastGeneration: 1 name: metal3-image-cache namespace: openshift-machine-api resource: daemonsets observedGeneration: 8 readyReplicas: 0", "edit machineset", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: \"2021-08-05T18:51:52Z\" generation: 11 labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: ostest-hwmdt-worker-0 namespace: openshift-machine-api resourceVersion: \"551513\" uid: fad1c6e0-b9da-4d4a-8d73-286f78788931 spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: ostest-hwmdt machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: ostest-hwmdt-worker-0 spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: 
http:/172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 1 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 2 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data status: availableReplicas: 2 fullyLabeledReplicas: 2 observedGeneration: 11 readyReplicas: 2 replicas: 2", "oc get bmh -n openshift-machine-api", "NAME STATUS PROVISIONING STATUS CONSUMER openshift-master-0 OK externally provisioned openshift-zpwpq-master-0 openshift-master-1 OK externally provisioned openshift-zpwpq-master-1 openshift-master-2 OK externally provisioned openshift-zpwpq-master-2 openshift-worker-0 OK provisioned openshift-zpwpq-worker-0-lv84n openshift-worker-1 OK provisioned openshift-zpwpq-worker-0-zd8lm openshift-worker-2 error registering", "oc get -n openshift-machine-api bmh <bare_metal_host_name> -o yaml", "status: errorCount: 12 errorMessage: MAC address b4:96:91:1d:7c:20 conflicts with existing node openshift-worker-1 errorType: registration error", "oc -n openshift-machine-api get bmh openshift-worker-<num>", "NAME STATE ONLINE ERROR AGE openshift-worker available true 34h", "oc get nodes", "NAME STATUS ROLES AGE VERSION openshift-master-1.openshift.example.com Ready master 30h v1.29.4 openshift-master-2.openshift.example.com Ready master 30h v1.29.4 openshift-master-3.openshift.example.com Ready master 30h v1.29.4 openshift-worker-0.openshift.example.com Ready worker 30h v1.29.4 openshift-worker-1.openshift.example.com Ready worker 30h v1.29.4", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE openshift-worker-0.example.com 1 1 1 1 55m openshift-worker-1.example.com 1 1 1 1 55m", "oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api", "oc -n openshift-machine-api get bmh openshift-worker-<num>", "NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioning openshift-worker-<num>-65tjz true", "NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned openshift-worker-<num>-65tjz true", "oc get nodes", "NAME STATUS ROLES AGE VERSION openshift-master-1.openshift.example.com Ready master 30h v1.29.4 openshift-master-2.openshift.example.com Ready master 30h v1.29.4 openshift-master-3.openshift.example.com Ready master 30h v1.29.4 openshift-worker-0.openshift.example.com Ready worker 30h v1.29.4 openshift-worker-1.openshift.example.com Ready worker 30h v1.29.4 openshift-worker-<num>.openshift.example.com Ready worker 3m27s v1.29.4", "ssh openshift-worker-<num>", "[kni@openshift-worker-<num>]USD journalctl -fu kubelet" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-expanding-the-cluster
7.316. thunderbird
7.316. thunderbird 7.316.1. RHSA-2013:1480 - Important: thunderbird security update An updated thunderbird package that fixes several security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2013-5590 , CVE-2013-5597 , CVE-2013-5599 , CVE-2013-5600 , CVE-2013-5601 , CVE-2013-5602 Several flaws were found in the processing of malformed content. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2013-5595 It was found that the Thunderbird JavaScript engine incorrectly allocated memory for certain functions. An attacker could combine this flaw with other vulnerabilities to execute arbitrary code with the privileges of the user running Thunderbird. CVE-2013-5604 A flaw was found in the way Thunderbird handled certain Extensible Stylesheet Language Transformations (XSLT) files. An attacker could combine this flaw with other vulnerabilities to execute arbitrary code with the privileges of the user running Thunderbird. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Jesse Ruderman, Christoph Diehl, Dan Gohman, Byoungyoung Lee, Nils, and Abhishek Arya as the original reporters of these issues. Note: All of the above issues cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. They could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. For technical details regarding these flaws, refer to the Mozilla security advisories for Thunderbird 17.0.10 ESR. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 17.0.10 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. 7.316.2. RHSA-2013:0697 - Important: thunderbird security update An updated thunderbird package that fixes several security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated for each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2013-0788 Several flaws were found in the processing of malformed content. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2013-0795 A flaw was found in the way Same Origin Wrappers were implemented in Thunderbird. Malicious content could use this flaw to bypass the same-origin policy and execute arbitrary code with the privileges of the user running Thunderbird. CVE-2013-0796 A flaw was found in the embedded WebGL library in Thunderbird. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. 
Note: This issue only affected systems using the Intel Mesa graphics drivers. CVE-2013-0800 An out-of-bounds write flaw was found in the embedded Cairo library in Thunderbird. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2013-0793 A flaw was found in the way Thunderbird handled the JavaScript history functions. Malicious content could cause a page to be displayed that has a baseURI pointing to a different site, allowing cross-site scripting (XSS) and phishing attacks. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Olli Pettay, Jesse Ruderman, Boris Zbarsky, Christian Holler, Milan Sreckovic, Joe Drew, Cody Crews, miaubiz, Abhishek Arya, and Mariusz Mlynski as the original reporters of these issues. Note: All issues except CVE-2013-0800 cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. They could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 17.0.5 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. 7.316.3. RHSA-2013:1269 - Important: thunderbird security update An updated thunderbird package that fixes several security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2013-1718 , CVE-2013-1722 , CVE-2013-1725 , CVE-2013-1730 , CVE-2013-1732 , CVE-2013-1735 , CVE-2013-1736 Several flaws were found in the processing of malformed content. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2013-1737 A flaw was found in the way Thunderbird handled certain DOM JavaScript objects. An attacker could use this flaw to make JavaScript client or add-on code make incorrect, security sensitive decisions. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Andre Bargull, Scoobidiver, Bobby Holley, Reuben Morais, Abhishek Arya, Ms2ger, Sachin Shinde, Aki Helin, Nils, and Boris Zbarsky as the original reporters of these issues. Note: All of the above issues cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. They could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 17.0.9 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. 7.316.4. RHSA-2013:1142 - Important: thunderbird security update An updated thunderbird package that fixes several security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. 
Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2013-1701 Several flaws were found in the processing of malformed content. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2013-1710 A flaw was found in the way Thunderbird generated Certificate Request Message Format (CRMF) requests. An attacker could use this flaw to perform cross-site scripting (XSS) attacks or execute arbitrary code with the privileges of the user running Thunderbird. CVE-2013-1709 A flaw was found in the way Thunderbird handled the interaction between frames and browser history. An attacker could use this flaw to trick Thunderbird into treating malicious content as if it came from the browser history, allowing for XSS attacks. CVE-2013-1713 It was found that the same-origin policy could be bypassed due to the way Uniform Resource Identifiers (URI) were checked in JavaScript. An attacker could use this flaw to perform XSS attacks, or install malicious add-ons from third-party pages. CVE-2013-1714 It was found that web workers could bypass the same-origin policy. An attacker could use this flaw to perform XSS attacks. CVE-2013-1717 It was found that, in certain circumstances, Thunderbird incorrectly handled Java applets. If a user launched an untrusted Java applet via Thunderbird, the applet could use this flaw to obtain read-only access to files on the user's local system. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Jeff Gilbert, Henrik Skupin, moz_bug_r_a4, Cody Crews, Federico Lanusse, and Georgi Guninski as the original reporters of these issues. Note: All of the above issues cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. They could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 17.0.8 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. 7.316.5. RHSA-2013:0982 - Important: thunderbird security update An updated thunderbird package that fixes several security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2013-1682 , CVE-2013-1684 , CVE-2013-1685 , CVE-2013-1686 , CVE-2013-1687 , CVE-2013-1690 Several flaws were found in the processing of malformed content. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2013-1692 It was found that Thunderbird allowed data to be sent in the body of XMLHttpRequest (XHR) HEAD requests. In some cases this could allow attackers to conduct Cross-Site Request Forgery (CSRF) attacks. 
CVE-2013-1693 Timing differences in the way Thunderbird processed SVG image files could allow an attacker to read data across domains, potentially leading to information disclosure. CVE-2013-1694 , CVE-2013-1697 Two flaws were found in the way Thunderbird implemented some of its internal structures (called wrappers). An attacker could use these flaws to bypass some restrictions placed on them. This could lead to unexpected behavior or a potentially exploitable crash. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Gary Kwong, Jesse Ruderman, Andrew McCreight, Abhishek Arya, Mariusz Mlynski, Nils, Johnathan Kuskos, Paul Stone, Boris Zbarsky, and moz_bug_r_a4 as the original reporters of these issues. Note: All of the above issues cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. They could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 17.0.7 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. 7.316.6. RHSA-2013:0627 - Important: thunderbird security update An updated thunderbird package that fixes one security issue is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fix CVE-2013-0787 A flaw was found in the processing of malformed content. Malicious content could cause Thunderbird to crash or execute arbitrary code with the privileges of the user running Thunderbird. Red Hat would like to thank the Mozilla project for reporting this issue. Upstream acknowledges VUPEN Security via the TippingPoint Zero Day Initiative project as the original reporter. Note: This issue cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. It could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which corrects this issue. After installing the update, Thunderbird must be restarted for the changes to take effect. 7.316.7. RHSA-2013:0821 - Important: thunderbird security update An updated thunderbird package that fixes several security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2013-0801 , CVE-2013-1674 , CVE-2013-1675 , CVE-2013-1676 , CVE-2013-1677 , CVE-2013-1678 , CVE-2013-1679 , CVE-2013-1680 , CVE-2013-1681 Several flaws were found in the processing of malformed content. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. 
CVE-2013-1670 A flaw was found in the way Thunderbird handled Content Level Constructors. Malicious content could use this flaw to perform cross-site scripting (XSS) attacks. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Christoph Diehl, Christian Holler, Jesse Ruderman, Timothy Nikkel, Jeff Walden, Nils, Ms2ger, Abhishek Arya, and Cody Crews as the original reporters of these issues. Note: All of the above issues cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. They could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 17.0.6 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/thunderbird
Chapter 6. Creating a kernel-based virtual machine and booting the installation ISO in the VM
Chapter 6. Creating a kernel-based virtual machine and booting the installation ISO in the VM You can create a kernel-based virtual machine (KVM) and start the Red Hat Enterprise Linux installation. The following instructions are specific to installation on a VM. If you are installing RHEL on a physical system, you can skip this section. Procedure Create a virtual machine with Red Hat Enterprise Linux as the KVM guest operating system by using the following virt-install command on the KVM host: Additional resources virt-install man page on your system Creating virtual machines by using the command-line interface
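For illustration, a concrete invocation of the virt-install command shown above might look like the following sketch; the guest name, disk size, memory, and ISO path are placeholder values that you would replace with your own:

# Create a KVM guest with a 20 GB disk and 4096 MB of memory, booting the
# installer from a local ISO file (the path is illustrative):
virt-install --name=rhel-guest --disk size=20 --memory=4096 --cdrom /var/lib/libvirt/images/rhel.iso --graphics vnc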
[ "virt-install --name=<guest_name> --disk size=<disksize_in_GB> --memory=<memory_size_in_MB> --cdrom <filepath_to_iso> --graphics vnc" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/installing-under-kvm_rhel-installer
Chapter 52. Updated Drivers
Chapter 52. Updated Drivers Storage Driver Updates The aacraid driver has been updated to version 1.2.1[50792]-custom. The lpfc driver has been updated to version 0:11.2.0.6. The vmw_pvscsi driver has been updated to version 1.0.7.0-k. The megaraid_sas driver has been updated to version 07.701.17.00-rh1. The bfa driver has been updated to version 3.2.25.1. The hpsa driver has been updated to version 3.4.18-0-RH1. The be2iscsi driver has been updated to version 11.2.1.0. The qla2xxx driver has been updated to version 8.07.00.38.07.4-k1. The mpt2sas driver has been updated to version 20.103.00.00. The mpt3sas driver has been updated to version 15.100.00.00. Network Driver Updates The ntb driver has been updated to version 1.0. The igbvf driver has been updated to version 2.4.0-k. The igb driver has been updated to version 5.4.0-k. The ixgbevf driver has been updated to version 3.2.2-k-rh7.4. The i40e driver has been updated to version 1.6.27-k. The fm10k driver has been updated to version 0.21.2-k. The i40evf driver has been updated to version 1.6.27-k. The ixgbe driver has been updated to version 4.4.0-k-rh7.4. The be2net driver has been updated to version 11.1.0.0r. The qede driver has been updated to version 8.10.10.21. The qlge driver has been updated to version 1.00.00.35. The qed driver has been updated to version 8.10.10.21. The bna driver has been updated to version 3.2.25.1r. The bnxt driver has been updated to version 1.7.0. The enic driver has been updated to version 2.3.0.31. The fjes driver has been updated to version 1.2. The hpwdt driver has been updated to version 1.4.02. Graphics Driver and Miscellaneous Driver Updates The vmwgfx driver has been updated to version 2.12.0.0. The hpilo driver has been updated to version 1.5.0.
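If you want to confirm which driver version is active on a running system, modinfo reports the version string for an installed module; the driver name below is just one example taken from the lists above:

# Query the version of the ixgbe network driver module:
modinfo ixgbe | grep -i '^version:'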
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/updated_drivers
Chapter 4. Deploying Red Hat Quay
Chapter 4. Deploying Red Hat Quay After you have configured your Red Hat Quay deployment, you can deploy it using the following procedures. Prerequisites The Red Hat Quay database is running. The Redis server is running. 4.1. Creating the YAML configuration file Use the following procedure to deploy Red Hat Quay locally. Procedure Enter the following command to create a minimal config.yaml file that is used to deploy the Red Hat Quay container: USD touch config.yaml Copy and paste the following YAML configuration into the config.yaml file: BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 CREATE_NAMESPACE_ON_PUSH: true DATABASE_SECRET_KEY: a8c2744b-7004-4af2-bcee-e417e7bdd235 DB_URI: postgresql://quayuser:[email protected]:5432/quay DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default FEATURE_MAILING: false SECRET_KEY: e9bd34f4-900c-436a-979e-7530e5d74ac8 SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true USER_EVENTS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 Create a directory to copy the Red Hat Quay configuration bundle to: USD mkdir USDQUAY/config Copy the Red Hat Quay configuration file to the directory: USD cp -v config.yaml USDQUAY/config 4.1.1. Configuring a Red Hat Quay superuser You can optionally add a superuser by editing the config.yaml file to add the necessary configuration fields. The list of superuser accounts is stored as an array in the field SUPER_USERS . Superusers have the following capabilities: User management Organization management Service key management Change log transparency Usage log management Globally-visible user message creation Procedure Add the SUPER_USERS array to the config.yaml file: SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true SUPER_USERS: - quayadmin 1 ... 1 If following this guide, use quayadmin . 4.2. Prepare local storage for image data Use the following procedure to set your local file system to store registry images. Procedure Create a local directory that will store registry images by entering the following command: USD mkdir USDQUAY/storage Set the directory to store registry images: USD setfacl -m u:1001:-wx USDQUAY/storage 4.3. Deploy the Red Hat Quay registry Use the following procedure to deploy the Quay registry container. Procedure Enter the following command to start the Quay registry container, specifying the appropriate volumes for configuration data and local storage for image data:
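The full startup command, reproduced from the command listing below with annotations, looks like this; the image tag corresponds to the release documented in this guide:

# Run the registry detached, publishing the HTTP and HTTPS ports and
# mounting the configuration bundle and image storage created earlier:
sudo podman run -d --rm -p 80:8080 -p 443:8443 \
  --name=quay \
  -v $QUAY/config:/conf/stack:Z \
  -v $QUAY/storage:/datastorage:Z \
  registry.redhat.io/quay/quay-rhel8:v3.12.8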
[ "touch config.yaml", "BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 CREATE_NAMESPACE_ON_PUSH: true DATABASE_SECRET_KEY: a8c2744b-7004-4af2-bcee-e417e7bdd235 DB_URI: postgresql://quayuser:[email protected]:5432/quay DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default FEATURE_MAILING: false SECRET_KEY: e9bd34f4-900c-436a-979e-7530e5d74ac8 SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true USER_EVENTS_REDIS: host: quay-server.example.com password: strongpassword port: 6379", "mkdir USDQUAY/config", "cp -v config.yaml USDQUAY/config", "SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true SUPER_USERS: - quayadmin 1", "mkdir USDQUAY/storage", "setfacl -m u:1001:-wx USDQUAY/storage", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.12.8" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/proof_of_concept_-_deploying_red_hat_quay/poc-deploying-quay
Chapter 7. Dynamic provisioning
Chapter 7. Dynamic provisioning 7.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any detailed knowledge about the underlying storage volume sources. The OpenShift Dedicated persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Dedicated. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plugin APIs. 7.2. Available dynamic provisioning plugins OpenShift Dedicated provides the following provisioner plugins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources: Storage type Provisioner plugin name Notes Amazon Elastic Block Store (Amazon EBS) kubernetes.io/aws-ebs For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. GCE Persistent Disk (gcePD) kubernetes.io/gce-pd In multi-zone configurations, it is advisable to run one OpenShift Dedicated cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. IBM Power(R) Virtual Server Block powervs.csi.ibm.com After installation, the IBM Power(R) Virtual Server Block CSI Driver Operator and IBM Power(R) Virtual Server Block CSI Driver automatically create the required storage classes for dynamic provisioning. Important Any chosen provisioner plugin also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. 7.3. Defining a storage class StorageClass objects are currently a globally scoped object and must be created by cluster-admin or storage-admin users. Important The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the Operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. The following sections describe the basic definition for a StorageClass object and specific examples for each of the supported plugin types. 7.3.1. Basic StorageClass object definition The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS ElasticBlockStore (EBS) object definition. Sample StorageClass definition kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' ... provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3 ... 1 (required) The API object type. 2 (required) The current apiVersion. 
3 (required) The name of the storage class. 4 (optional) Annotations for the storage class. 5 (required) The type of provisioner associated with this storage class. 6 (optional) The parameters required for the specific provisioner; these will change from plug-in to plug-in. 7.3.2. Storage class annotations To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata: storageclass.kubernetes.io/is-default-class: "true" For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: "true" ... This enables any persistent volume claim (PVC) that does not specify a specific storage class to automatically be provisioned through the default storage class. However, your cluster can have more than one storage class, but only one of them can be the default storage class. Note The beta annotation storageclass.beta.kubernetes.io/is-default-class is still working; however, it will be removed in a future release. To set a storage class description, add the following annotation to your storage class metadata: kubernetes.io/description: My Storage Class Description For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description ... 7.3.3. AWS Elastic Block Store (EBS) object definition aws-ebs-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: "10" 3 encrypted: "true" 4 kmsKeyId: keyvalue 5 fsType: ext4 6 1 (required) Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 (required) Select from io1 , gp3 , sc1 , st1 . The default is gp3 . See the AWS documentation for valid Amazon Resource Name (ARN) values. 3 Optional: Only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further details. 4 Optional: Denotes whether to encrypt the EBS volume. Valid values are true or false . 5 Optional: The full ARN of the key to use when encrypting the volume. If none is supplied, but encrypted is set to true , then AWS generates a key. See the AWS documentation for a valid ARN value. 6 Optional: File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 7.3.4. GCE PersistentDisk (gcePD) object definition gce-pd-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-ssd 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Select pd-ssd , pd-standard , or hyperdisk-balanced . The default is pd-ssd . 7.4. Changing the default storage class Use the following procedure to change the default storage class. 
For example, suppose you have two defined storage classes, gp3 and standard , and you want to change the default storage class from gp3 to standard . Prerequisites Access to the cluster with cluster-admin privileges. Procedure To change the default storage class: List the storage classes: USD oc get storageclass Example output NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs 1 (default) indicates the default storage class. Make the desired storage class the default. For the desired storage class, set the storageclass.kubernetes.io/is-default-class annotation to true by running the following command: USD oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Note You can have multiple default storage classes for a short time. However, you should ensure that only one default storage class exists eventually. With multiple default storage classes present, any persistent volume claim (PVC) requesting the default storage class ( pvc.spec.storageClassName = nil) gets the most recently created default storage class, regardless of the default status of that storage class, and the administrator receives an alert in the alerts dashboard that there are multiple default storage classes, MultipleDefaultStorageClasses . Remove the default storage class setting from the old default storage class. For the old default storage class, change the value of the storageclass.kubernetes.io/is-default-class annotation to false by running the following command: USD oc patch storageclass gp3 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Verify the changes: USD oc get storageclass Example output NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs
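To see the default storage class in action, here is a minimal persistent volume claim sketch; because storageClassName is omitted, the claim is provisioned through whichever storage class currently carries the default annotation (the claim name and requested size are illustrative):

# Create a PVC that relies on the cluster default storage class:
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF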
[ "kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3", "storageclass.kubernetes.io/is-default-class: \"true\"", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"", "kubernetes.io/description: My Storage Class Description", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-ssd 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete", "oc get storageclass", "NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc get storageclass", "NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/storage/dynamic-provisioning
Chapter 64. Condition schema reference
Chapter 64. Condition schema reference Used in: KafkaBridgeStatus , KafkaConnectorStatus , KafkaConnectStatus , KafkaMirrorMaker2Status , KafkaMirrorMakerStatus , KafkaNodePoolStatus , KafkaRebalanceStatus , KafkaStatus , KafkaTopicStatus , KafkaUserStatus , StrimziPodSetStatus Property Description type The unique identifier of a condition, used to distinguish between other conditions in the resource. string status The status of the condition, either True, False or Unknown. string lastTransitionTime Last time the condition of a type changed from one status to another. The required format is 'yyyy-MM-ddTHH:mm:ssZ', in the UTC time zone. string reason The reason for the condition's last transition (a single word in CamelCase). string message Human-readable message indicating details about the condition's last transition. string
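As a quick way to inspect these properties on a live resource, the following sketch prints the type, status, and reason of every condition; the resource kind and name are assumptions for illustration:

# List the conditions reported in the status of a Kafka custom resource:
kubectl get kafka my-cluster -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'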
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-Condition-reference
Chapter 5. Optional: Enabling disk encryption
Chapter 5. Optional: Enabling disk encryption You can enable encryption of installation disks using either the TPM v2 or Tang encryption modes. Note In some situations, when you enable TPM disk encryption in the firmware for a bare-metal host and then boot it from an ISO that you generate with the Assisted Installer, the cluster deployment can get stuck. This can happen if there are left-over TPM encryption keys from a previous installation on the host. For more information, see BZ#2011634 . If you experience this problem, contact Red Hat support. 5.1. Enabling TPM v2 encryption Prerequisites Check to see if TPM v2 encryption is enabled in the BIOS on each host. Most Dell systems require this. Check the manual for your computer. The Assisted Installer will also validate that TPM is enabled in the firmware. See the disk-encryption model in the Assisted Installer API for additional details. Important Verify that a TPM v2 encryption chip is installed on each node and enabled in the firmware. Procedure Optional: Using the UI, in the Cluster details step of the user interface wizard, choose to enable TPM v2 encryption on either the control plane nodes, workers, or both. Optional: Using the API, follow the "Modifying hosts" procedure. Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tpmv2 . Refresh the API token: USD source refresh-token Enable TPM v2 encryption: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "disk_encryption": { "enable_on": "none", "mode": "tpmv2" } } ' | jq Valid settings for enable_on are all , master , worker , or none . 5.2. Enabling Tang encryption Prerequisites You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. Procedure Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. You can set up multiple Tang servers, but the Assisted Installer must be able to connect to all of them during installation. On the Tang server, retrieve the thumbprint for the Tang server using tang-show-keys : USD tang-show-keys <port> Optional: Replace <port> with the port number. The default port number is 80 . Example thumbprint 1gYTN_LpU9ZMB35yn5IbADY5OQ0 Optional: Retrieve the thumbprint for the Tang server using jose . Ensure jose is installed on the Tang server: USD sudo dnf install jose On the Tang server, retrieve the thumbprint using jose : USD sudo jose jwk thp -i /var/db/tang/<public_key>.jwk Replace <public_key> with the public exchange key for the Tang server. Example thumbprint 1gYTN_LpU9ZMB35yn5IbADY5OQ0 Optional: In the Cluster details step of the user interface wizard, choose to enable Tang encryption on either the control plane nodes, workers, or both. You will be required to enter URLs and thumbprints for the Tang servers. Optional: Using the API, follow the "Modifying hosts" procedure. Refresh the API token: USD source refresh-token Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tang . 
Set disk_encryption.tang_servers to provide the URL and thumbprint details about one or more Tang servers: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "disk_encryption": { "enable_on": "all", "mode": "tang", "tang_servers": "[{\"url\":\"http://tang.example.com:7500\",\"thumbprint\":\"PLjNyRdGw03zlRoGjQYMahSZGu9\"},{\"url\":\"http://tang2.example.com:7500\",\"thumbprint\":\"XYjNyRdGw03zlRoGjQYMahSZGu3\"}]" } } ' | jq Valid settings for enable_on are all , master , worker , or none . Within the tang_servers value, escape the quotes within the object(s), as shown in the example. 5.3. Additional resources Modifying hosts
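After patching, you can confirm that the settings were stored as intended. The following check is a sketch that reuses the same endpoint and token as the examples above:

# Fetch the cluster definition and show only the disk_encryption block:
curl -s https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID} \
  -H "Authorization: Bearer ${API_TOKEN}" \
  | jq .disk_encryption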
[ "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"disk_encryption\": { \"enable_on\": \"none\", \"mode\": \"tpmv2\" } } ' | jq", "tang-show-keys <port>", "1gYTN_LpU9ZMB35yn5IbADY5OQ0", "sudo dnf install jose", "sudo jose jwk thp -i /var/db/tang/<public_key>.jwk", "1gYTN_LpU9ZMB35yn5IbADY5OQ0", "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"disk_encryption\": { \"enable_on\": \"all\", \"mode\": \"tang\", \"tang_servers\": \"[{\\\"url\\\":\\\"http://tang.example.com:7500\\\",\\\"thumbprint\\\":\\\"PLjNyRdGw03zlRoGjQYMahSZGu9\\\"},{\\\"url\\\":\\\"http://tang2.example.com:7500\\\",\\\"thumbprint\\\":\\\"XYjNyRdGw03zlRoGjQYMahSZGu3\\\"}]\" } } ' | jq" ]
https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2023/html/assisted_installer_for_openshift_container_platform/assembly_enabling-disk-encryption
Chapter 49. Platform HTTP
Chapter 49. Platform HTTP Since Camel 3.0 Only consumer is supported The Platform HTTP component is used to allow Camel to use the existing HTTP server from the runtime. For example, when running Camel on Spring Boot, Quarkus, or other runtimes. Add the following dependency to your pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-platform-http</artifactId> <version>3.20.1.redhat-00056</version> <!-- use the same version as your Camel core version --> </dependency> 49.1. Platform HTTP Provider To use Platform HTTP, a provider (engine) must be available on the classpath. The purpose is to have drivers for different runtimes such as Quarkus, Vert.x, or Spring Boot. At this time, only Quarkus and Vert.x are supported, through camel-platform-http-vertx . This JAR must be on the classpath; otherwise, the Platform HTTP component cannot be used and an exception will be thrown on startup. <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-platform-http-vertx</artifactId> <version>3.20.1.redhat-00056</version> <!-- use the same version as your Camel core version --> </dependency> 49.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 49.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 49.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allow you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, and give you more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 49.2.3. Component Options The Platform HTTP component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean engine (advanced) An HTTP Server engine implementation to serve the requests. PlatformHttpEngine 49.2.4. Endpoint Options The Platform HTTP endpoint is configured using URI syntax: with the following path and query parameters: 49.2.4.1. Path Parameters (1 parameter) Name Description Default Type path (consumer) Required The path under which this endpoint serves the HTTP requests, for proxy use 'proxy'. String 49.2.4.2. Query Parameters (11 parameters) Name Description Default Type consumes (consumer) The content type this endpoint accepts as an input, such as application/xml or application/json. null or */* mean no restriction. String httpMethodRestrict (consumer) A comma separated list of HTTP methods to serve, e.g. GET,POST . If no methods are specified, all methods will be served. String matchOnUriPrefix (consumer) Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found. false boolean muteException (consumer) If enabled and an Exchange fails processing on the consumer side, the response's body won't contain the exception's stack trace. true boolean produces (consumer) The content type this endpoint produces, such as application/xml or application/json. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern fileNameExtWhitelist (consumer (advanced)) A comma or whitespace separated list of file extensions. Uploads having these extensions will be stored locally. A null value or asterisk (*) will allow all files. String headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter headers to and from Camel message. HeaderFilterStrategy platformHttpEngine (advanced) An HTTP Server engine implementation to serve the requests of this endpoint. PlatformHttpEngine 49.3. Spring Boot Auto-Configuration When using platform-http with Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-platform-http-starter</artifactId> </dependency> The component supports 4 options, which are listed below. Name Description Default Type camel.component.platform-http.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.platform-http.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false Boolean camel.component.platform-http.enabled Whether to enable auto configuration of the platform-http component. This is enabled by default. Boolean camel.component.platform-http.engine An HTTP Server engine implementation to serve the requests. The option is a org.apache.camel.component.platform.http.spi.PlatformHttpEngine type. PlatformHttpEngine 49.3.1. Implementing a reverse proxy The Platform HTTP component can act as a reverse proxy; in that case, some headers are populated from the absolute URL received on the request line of the HTTP request. Those headers are specific to the underlying platform. At this moment, this feature is only supported for Vert.x in the camel-platform-http-vertx component.
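As a small illustration of the Spring Boot auto-configuration options in the table above, the following sketch appends two of the listed properties to a Spring Boot application's configuration; the values chosen are examples only:

# Add platform-http component settings to application.properties:
cat >> src/main/resources/application.properties <<'EOF'
camel.component.platform-http.enabled=true
camel.component.platform-http.bridge-error-handler=false
EOF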
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-platform-http</artifactId> <version>3.20.1.redhat-00056</version> <!-- use the same version as your Camel core version --> </dependency>", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-platform-http-vertx</artifactId> <version>3.20.1.redhat-00056</version> <!-- use the same version as your Camel core version --> </dependency>", "platform-http:path", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-platform-http-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-platform-http-component-starter
3.3. Editing an Image Builder blueprint with command-line interface
3.3. Editing an Image Builder blueprint with the command-line interface This procedure describes how to edit an existing Image Builder blueprint in the command-line interface. Procedure 1. Save (export) the blueprint to a local text file: 2. Edit the BLUEPRINT-NAME.toml file with a text editor of your choice and make your changes. 3. Before finishing the edits, make sure the file is a valid blueprint: Remove this line, if present: Increase the version number. Remember that Image Builder blueprint versions must use the Semantic Versioning scheme. Note also that if you do not change the version, the patch component of the version is increased automatically. Check if the contents are valid TOML specifications. See the TOML documentation for more information. Note TOML documentation is a community product and is not supported by Red Hat. You can report any issues with the tool at https://github.com/toml-lang/toml/issues 4. Save the file and close the editor. 5. Push (import) the blueprint back into Image Builder: Note that you must supply the file name including the .toml extension, while in other commands you use only the name of the blueprint. 6. To verify that the contents uploaded to Image Builder match your edits, list the contents of the blueprint: 7. Check whether the components and versions listed in the blueprint and their dependencies are valid:
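Put together, the full edit cycle looks like the following sketch; BLUEPRINT-NAME is a placeholder for the name of your blueprint:

# Export, edit, validate, and re-import a blueprint, then check it:
composer-cli blueprints save BLUEPRINT-NAME
vi BLUEPRINT-NAME.toml    # bump the version; remove "packages = []" if present
composer-cli blueprints push BLUEPRINT-NAME.toml
composer-cli blueprints show BLUEPRINT-NAME
composer-cli blueprints depsolve BLUEPRINT-NAME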
[ "composer-cli blueprints save BLUEPRINT-NAME", "packages = []", "composer-cli blueprints push BLUEPRINT-NAME.toml", "composer-cli blueprints show BLUEPRINT-NAME", "composer-cli blueprints depsolve BLUEPRINT-NAME" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/image_builder_guide/sect-Documentation-Image_Builder-Test_Chapter3-Test_Section_3
Chapter 7. Uninstalling IdM Servers and Replicas
Chapter 7. Uninstalling IdM Servers and Replicas To uninstall either an IdM server or an IdM replica, pass the --uninstall option to the ipa-server-install command: [root@ipareplica ~]# ipa-server-install --uninstall
[ "ipa-server-install --uninstall" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/Uninstalling_IPA_Servers
Chapter 1. Building applications overview
Chapter 1. Building applications overview Using OpenShift Container Platform, you can create, edit, delete, and manage applications using the web console or command line interface (CLI). 1.1. Working on a project Using projects, you can organize and manage applications in isolation. You can manage the entire project lifecycle, including creating, viewing, and deleting a project in OpenShift Container Platform. After you create the project, you can grant or revoke access to a project and manage cluster roles for the users using the Developer perspective. You can also edit the project configuration resource while creating a project template that is used for automatic provisioning of new projects. Using the CLI, you can create a project as a different user by impersonating a request to the OpenShift Container Platform API. When you make a request to create a new project, the OpenShift Container Platform uses an endpoint to provision the project according to a customizable template. As a cluster administrator, you can choose to prevent an authenticated user group from self-provisioning new projects . 1.2. Working on an application 1.2.1. Creating an application To create applications, you must have created a project or have access to a project with the appropriate roles and permissions. You can create an application by using either the Developer perspective in the web console , installed Operators , or the OpenShift CLI ( oc ) . You can source the applications to be added to the project from Git, JAR files, devfiles, or the developer catalog. You can also use components that include source or binary code, images, and templates to create an application by using the OpenShift CLI ( oc ). With the OpenShift Container Platform web console, you can create an application from an Operator installed by a cluster administrator. 1.2.2. Maintaining an application After you create the application, you can use the web console to monitor your project or application metrics . You can also edit or delete the application using the web console. When the application is running, not all application resources are used. As a cluster administrator, you can choose to idle these scalable resources to reduce resource consumption. 1.2.3. Connecting an application to services An application uses backing services to build and connect workloads, which vary according to the service provider. Using the Service Binding Operator , as a developer, you can bind workloads together with Operator-managed backing services, without any manual procedures to configure the binding connection. You can also apply service binding on IBM Power, IBM Z, and IBM(R) LinuxONE environments . 1.2.4. Deploying an application You can deploy your application using Deployment or DeploymentConfig objects and manage them from the web console. You can create deployment strategies that help reduce downtime during a change or an upgrade to the application. You can also use Helm , a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. 1.3. Using the Red Hat Marketplace The Red Hat Marketplace is an open cloud marketplace where you can discover and access certified software for container-based environments that run on public clouds and on-premises.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/building_applications/building-applications-overview
Appendix D. Producer configuration parameters
Appendix D. Producer configuration parameters key.serializer Type: class Importance: high Serializer class for key that implements the org.apache.kafka.common.serialization.Serializer interface. value.serializer Type: class Importance: high Serializer class for value that implements the org.apache.kafka.common.serialization.Serializer interface. acks Type: string Default: 1 Valid Values: [all, -1, 0, 1] Importance: high The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed: acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1 . acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost. acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting. bootstrap.servers Type: list Default: "" Valid Values: non-null string Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). buffer.memory Type: long Default: 33554432 Valid Values: [0,... ] Importance: high The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server, the producer will block for max.block.ms after which it will throw an exception. This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests. compression.type Type: string Default: none Importance: high The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid values are none , gzip , snappy , lz4 , or zstd . Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression). retries Type: int Default: 2147483647 Valid Values: [0,... 
,2147483647] Importance: high Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries without setting max.in.flight.requests.per.connection to 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first. Note additionally that produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires first before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior. ssl.key.password Type: password Default: null Importance: high The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. This is required for clients only if two-way authentication is configured. ssl.keystore.certificate.chain Type: password Default: null Importance: high Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates. ssl.keystore.key Type: password Default: null Importance: high Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: high The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: high The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.truststore.certificates Type: password Default: null Importance: high Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: high The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: high The password for the trust store file. If a password is not set, the configured trust store file will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. batch.size Type: int Default: 16384 Valid Values: [0,... ] Importance: medium The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size. Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records. 
client.dns.lookup Type: string Default: use_all_dns_ips Valid Values: [default, use_all_dns_ips, resolve_canonical_bootstrap_servers_only] Importance: medium Controls how the client uses DNS lookups. If set to use_all_dns_ips , connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only , resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips . If set to default (deprecated), attempt to connect to the first IP address returned by the lookup, even if the lookup returns multiple IP addresses. client.id Type: string Default: "" Importance: medium An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. connections.max.idle.ms Type: long Default: 540000 (9 minutes) Importance: medium Close idle connections after the number of milliseconds specified by this config. delivery.timeout.ms Type: int Default: 120000 (2 minutes) Valid Values: [0,... ] Importance: medium An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures. The producer may report failure to send a record earlier than this config if either an unrecoverable error is encountered, the retries have been exhausted, or the record is added to a batch which reached an earlier delivery expiration deadline. The value of this config should be greater than or equal to the sum of request.timeout.ms and linger.ms . linger.ms Type: long Default: 0 Valid Values: [0,... ] Importance: medium The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay; that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5 , for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load. max.block.ms Type: long Default: 60000 (1 minute) Valid Values: [0,... 
] Importance: medium The configuration controls how long the KafkaProducer's send() , partitionsFor() , initTransactions() , sendOffsetsToTransaction() , commitTransaction() and abortTransaction() methods will block. For send() this timeout bounds the total time waiting for both metadata fetch and buffer allocation (blocking in the user-supplied serializers or partitioner is not counted against this timeout). For partitionsFor() this timeout bounds the time spent waiting for metadata if it is unavailable. The transaction-related methods always block, but may timeout if the transaction coordinator could not be discovered or did not respond within the timeout. max.request.size Type: int Default: 1048576 Valid Values: [0,... ] Importance: medium The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum uncompressed record batch size. Note that the server has its own cap on the record batch size (after compression if compression is enabled) which may be different from this. partitioner.class Type: class Default: org.apache.kafka.clients.producer.internals.DefaultPartitioner Importance: medium Partitioner class that implements the org.apache.kafka.clients.producer.Partitioner interface. receive.buffer.bytes Type: int Default: 32768 (32 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. request.timeout.ms Type: int Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: medium The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. This should be larger than replica.lag.time.max.ms (a broker configuration) to reduce the possibility of message duplication due to unnecessary producer retries. sasl.client.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.jaas.config Type: password Default: null Importance: medium JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*; . For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.service.name Type: string Default: null Importance: medium The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.login.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. 
sasl.login.class Type: class Default: null Importance: medium The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.mechanism Type: string Default: GSSAPI Importance: medium SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. security.protocol Type: string Default: PLAINTEXT Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. socket.connection.setup.timeout.max.ms Type: long Default: 30000 (30 seconds) Importance: medium The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Importance: medium The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.keystore.type Type: string Default: JKS Importance: medium The file format of the key store file. This is optional for client. ssl.protocol Type: string Default: TLSv1.3 Importance: medium The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.truststore.type Type: string Default: JKS Importance: medium The file format of the trust store file. enable.idempotence Type: boolean Default: false Importance: low When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. 
If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream. Note that enabling idempotence requires max.in.flight.requests.per.connection to be less than or equal to 5, retries to be greater than 0 and acks must be 'all'. If these values are not explicitly set by the user, suitable values will be chosen. If incompatible values are set, a ConfigException will be thrown. interceptor.classes Type: list Default: "" Valid Values: non-null string Importance: low A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.producer.ProducerInterceptor interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default, there are no interceptors. max.in.flight.requests.per.connection Type: int Default: 5 Valid Values: [1,... ] Importance: low The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled). metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metadata.max.idle.ms Type: long Default: 300000 (5 minutes) Valid Values: [5000,... ] Importance: low Controls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced to exceeds the metadata idle duration, then the topic's metadata is forgotten and the access to it will force a metadata fetch request. metric.reporters Type: list Default: "" Valid Values: non-null string Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG, TRACE] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. 
This avoids repeatedly sending requests in a tight loop under some failure scenarios. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: low Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: low Login thread sleep time between refresh attempts. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: low Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: low Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.refresh.buffer.seconds Type: short Default: 300 Valid Values: [0,... ,3600] Importance: low The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Valid Values: [0,... ,900] Importance: low The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Valid Values: [0.5,... ,1.0] Importance: low Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Valid Values: [0.0,... ,0.25] Importance: low The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. security.providers Type: string Default: null Importance: low A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface. ssl.cipher.suites Type: list Default: null Importance: low A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low The endpoint identification algorithm to validate server hostname using server certificate. 
ssl.engine.factory.class Type: class Default: null Importance: low The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: low The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.secure.random.implementation Type: string Default: null Importance: low The SecureRandom PRNG implementation to use for SSL cryptography operations. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: low The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. transaction.timeout.ms Type: int Default: 60000 (1 minute) Importance: low The maximum amount of time in ms that the transaction coordinator will wait for a transaction status update from the producer before proactively aborting the ongoing transaction. If this value is larger than the transaction.max.timeout.ms setting in the broker, the request will fail with an InvalidTxnTimeoutException error. transactional.id Type: string Default: null Valid Values: non-empty string Importance: low The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions. If no TransactionalId is provided, then the producer is limited to idempotent delivery. If a TransactionalId is configured, enable.idempotence is implied. By default the TransactionalId is not configured, which means transactions cannot be used. Note that, by default, transactions require a cluster of at least three brokers which is the recommended setting for production; for development you can change this, by adjusting broker setting transaction.state.log.replication.factor .
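The following minimal Java sketch shows how several of the settings above fit together on a producer; the broker address and topic name are placeholder assumptions, not values from this reference. Leaving retries unset and bounding delivery with delivery.timeout.ms follows the guidance given for the retries property.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all");                    // required when idempotence is enabled
        props.put("enable.idempotence", "true");     // at most one copy of each message
        props.put("delivery.timeout.ms", "120000");  // preferred over tuning retries directly
        props.put("batch.size", "16384");            // default batch size in bytes
        props.put("linger.ms", "5");                 // wait up to 5 ms to fill a batch
        // try-with-resources closes the producer and flushes pending records
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // assumption: topic exists
        }
    }
}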
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_rhel/producer-configuration-parameters-str
Chapter 3. Compatibility Matrix for Red Hat Ceph Storage 5.2
Chapter 3. Compatibility Matrix for Red Hat Ceph Storage 5.2 The following tables list products and their versions compatible with Red Hat Ceph Storage 5.2. Host Operating System Version Notes Red Hat Enterprise Linux 9.1, 9.0, 8.6, 8.5, and EUS 8.4 Standard lifecycle 9.1 is included in the product (recommended). Red Hat Enterprise Linux EUS is optional. Important All nodes in the cluster and their clients must use the supported OS version(s) to ensure that the version of the ceph package is the same on all nodes. Using different versions of the ceph package is not supported. Important Red Hat no longer supports using Ubuntu as a host operating system to deploy Red Hat Ceph Storage. Product Version Notes Ansible Supported in a limited capacity. Supported for upgrade and conversion to Cephadm and for other minimal playbooks. Red Hat OpenShift 3.x RBD, Cinder, and CephFS drivers are supported. Red Hat OpenShift Data Foundation See the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker for detailed external mode version compatibility. Red Hat OpenStack Platform 16.1, 16.2, and 17.0 Red Hat OpenStack Platform 16.1z8 and 16.2z1 are supported only with external Red Hat Ceph Storage 5.2. Red Hat OpenStack Platform 17.0 supports Red Hat Ceph Storage 5.2 through both external and director-deployed configurations. Red Hat Satellite 6.x Only registering with the Content Delivery Network (CDN) is supported. Registering with Red Hat Network (RHN) is deprecated and not supported. Client Connector Version Notes S3A 2.8.x, 3.2.x, and trunk Red Hat Enterprise Linux iSCSI Initiator The latest versions of the iscsi-initiator-utils and device-mapper-multipath packages VMware vSphere versions 6.7 and 7 are supported Red Hat Ceph Storage as a backup target Version Notes CommVault Cloud Data Management v11 IBM Spectrum Protect Plus 10.1.5 IBM Spectrum Protect server 8.1.8 NetApp AltaVault 4.3.2 and 4.4 Rubrik Cloud Data Management (CDM) 3.2 onwards Trilio, TrilioVault 3.0 S3 target Veeam (object storage) Veeam Availability Suite 9.5 Update 4 Supported on Red Hat Ceph Storage object storage with the S3 protocol Veritas NetBackup for Symantec OpenStorage (OST) cloud backup 7.7 and 8.0 Independent Software vendors Version Notes IBM Spectrum Discover 2.0.3 WekaIO 3.12.2
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/compatibility_guide/compatibility-matrix-for-red-hat-ceph-storage-5-2
Chapter 12. Ansible Automation Platform Resource Operator
Chapter 12. Ansible Automation Platform Resource Operator 12.1. Resource Operator overview Resource Operator is a custom resource (CR) that you can deploy after you have created your platform gateway deployment. With Resource Operator you can define projects, job templates, and inventories through the use of YAML files. These YAML files are then used by automation controller to create these resources. You can create the YAML through the Form view that prompts you for keys and values for your YAML code. Alternatively, to work with YAML directly, you can select YAML view . There are currently two custom resources provided by the Resource Operator: AnsibleJob: launches a job in the automation controller instance specified in the Kubernetes secret (automation controller host URL, token). JobTemplate: creates a job template in the automation controller instance specified. 12.2. Using Resource Operator The Resource Operator itself does not do anything until the user creates an object. As soon as the user creates an AnsibleJob or JobTemplate resource, the Resource Operator will start processing that object. Prerequisites Install the Kubernetes-based cluster of your choice. Deploy automation controller using the automation-controller-operator . After installing the automation-controller-resource-operator in your cluster, you must create a Kubernetes (k8s) secret with the connection information for your automation controller instance. Then you can use Resource Operator to create a k8s resource to manage your automation controller instance. 12.3. Connecting Resource Operator to platform gateway To connect Resource Operator with platform gateway, you need to create a k8s secret with the connection information for your automation controller instance. Note You can only create OAuth 2 Tokens for your own user through the API or UI, which means you can only configure or view tokens from your own user profile. Procedure To create an OAuth2 token for your user in the platform gateway UI: Log in to Red Hat OpenShift Container Platform. In the navigation panel, select Access Management Users . Select the username you want to create a token for. Select Tokens Automation Execution . Click Create Token . You can leave Applications empty. Add a description and select Read or Write for the Scope . Note Make sure you provide a valid user when creating tokens. Otherwise, you will get an error message that you tried to issue the command without specifying a user, or supplying a username that does not exist. 12.4. Creating an automation controller connection secret for Resource Operator To make your connection information available to the Resource Operator, create a k8s secret with the token and host value. Procedure The following is an example of the YAML for the connection secret. Save the following example to a file, for example, controller-connection-secret.yml . apiVersion: v1 kind: Secret metadata: name: controller-access type: Opaque stringData: token: <generated-token> host: https://my-controller-host.example.com/ Edit the file with your host and token value. Apply it to your cluster by running the kubectl create command: kubectl create -f controller-connection-secret.yml 12.5. Creating an AnsibleJob Launch an automation job on automation controller by creating an AnsibleJob resource. Procedure Specify the connection secret and job template you want to launch. 
apiVersion: tower.ansible.com/v1alpha1 kind: AnsibleJob metadata: generateName: demo-job-1 # generate a unique suffix per 'kubectl create' spec: connection_secret: controller-access job_template_name: Demo Job Template Configure features such as inventory, extra variables, and time to live for the job. spec: connection_secret: controller-access job_template_name: Demo Job Template inventory: Demo Inventory # Inventory prompt on launch needs to be enabled runner_image: quay.io/ansible/controller-resource-runner runner_version: latest job_ttl: 100 extra_vars: # Extra variables prompt on launch needs to be enabled test_var: test job_tags: "provision,install,configuration" # Specify tags to run skip_tags: "configuration,restart" # Skip tasks with a given tag Note You must enable prompt on launch for inventories and extra variables if you are configuring those. To enable Prompt on launch , within the automation controller UI: From the Resources Templates page, select your template and select the Prompt on launch checkbox in the Inventory and Variables sections. Launch a workflow job template with an AnsibleJob object by specifying the workflow_template_name instead of job_template_name : apiVersion: tower.ansible.com/v1alpha1 kind: AnsibleJob metadata: generateName: demo-job-1 # generate a unique suffix per 'kubectl create' spec: connection_secret: controller-access workflow_template_name: Demo Workflow Template 12.6. Creating a JobTemplate Create a job template on automation controller by creating a JobTemplate resource: apiVersion: tower.ansible.com/v1alpha1 kind: JobTemplate metadata: name: jobtemplate-4 spec: connection_secret: controller-access job_template_name: ExampleJobTemplate4 job_template_project: Demo Project job_template_playbook: hello_world.yml job_template_inventory: Demo Inventory
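As a usage sketch (the local file names ansiblejob.yml and jobtemplate.yml, and the plural resource names, are illustrative assumptions), you can apply either resource with kubectl and then inspect what the Resource Operator created:
# Apply the manifests you saved locally
kubectl create -f ansiblejob.yml
kubectl create -f jobtemplate.yml
# List the custom resources and check their status
kubectl get ansiblejobs
kubectl get jobtemplates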
[ "apiVersion: v1 kind: Secret metadata: name: controller-access type: Opaque stringData: token: <generated-token> host: https://my-controller-host.example.com/", "create -f controller-connection-secret.yml", "apiVersion: tower.ansible.com/v1alpha1 kind: AnsibleJob metadata: generateName: demo-job-1 # generate a unique suffix per 'kubectl create' spec: connection_secret: controller-access job_template_name: Demo Job Template", "spec: connection_secret: controller-access job_template_name: Demo Job Template inventory: Demo Inventory # Inventory prompt on launch needs to be enabled runner_image: quay.io/ansible/controller-resource-runner runner_version: latest job_ttl: 100 extra_vars: # Extra variables prompt on launch needs to be enabled test_var: test job_tags: \"provision,install,configuration\" # Specify tags to run skip_tags: \"configuration,restart\" # Skip tasks with a given tag", "apiVersion: tower.ansible.com/v1alpha1 kind: AnsibleJob metadata: generateName: demo-job-1 # generate a unique suffix per 'kubectl create' spec: connection_secret: controller-access workflow_template_name: Demo Workflow Template", "apiVersion: tower.ansible.com/v1alpha1 kind: JobTemplate metadata: name: jobtemplate-4 spec: connection_secret: controller-access job_template_name: ExampleJobTemplate4 job_template_project: Demo Project job_template_playbook: hello_world.yml job_template_inventory: Demo Inventory" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_on_openshift_container_platform/assembly-controller-resource-operator
Chapter 57. Protobuf Serialize Action
Chapter 57. Protobuf Serialize Action Serialize payload to Protobuf 57.1. Configuration Options The following table summarizes the configuration options available for the protobuf-serialize-action Kamelet: Property Name Description Type Default Example schema * Schema The Protobuf schema to use during serialization (as single-line) string "message Person { required string first = 1; required string last = 2; }" Note Fields marked with an asterisk (*) are mandatory. 57.2. Dependencies At runtime, the protobuf-serialize-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:kamelet camel:core camel:jackson-protobuf 57.3. Usage This section describes how you can use the protobuf-serialize-action . 57.3.1. Knative Action You can use the protobuf-serialize-action Kamelet as an intermediate step in a Knative binding. protobuf-serialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: protobuf-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{"first": "John", "last":"Doe"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: protobuf-serialize-action properties: schema: "message Person { required string first = 1; required string last = 2; }" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 57.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 57.3.1.2. Procedure for using the cluster CLI Save the protobuf-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f protobuf-serialize-action-binding.yaml 57.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name protobuf-serialize-action-binding timer-source?message='{"first":"John","last":"Doe"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 57.3.2. Kafka Action You can use the protobuf-serialize-action Kamelet as an intermediate step in a Kafka binding. protobuf-serialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: protobuf-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{"first": "John", "last":"Doe"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: protobuf-serialize-action properties: schema: "message Person { required string first = 1; required string last = 2; }" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 57.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 57.3.2.2. 
Procedure for using the cluster CLI Save the protobuf-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f protobuf-serialize-action-binding.yaml 57.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name protobuf-serialize-action-binding timer-source?message='{"first":"John","last":"Doe"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 57.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/protobuf-serialize-action.kamelet.yaml
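After either procedure, you can confirm that the binding was created; this is a quick sketch that assumes the Camel K operator registers the kameletbindings resource in your cluster:
oc get kameletbindings
# Inspect the binding created above
oc describe kameletbinding protobuf-serialize-action-binding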
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: protobuf-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{\"first\": \"John\", \"last\":\"Doe\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: protobuf-serialize-action properties: schema: \"message Person { required string first = 1; required string last = 2; }\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f protobuf-serialize-action-binding.yaml", "kamel bind --name protobuf-serialize-action-binding timer-source?message='{\"first\":\"John\",\"last\":\"Doe\"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: protobuf-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{\"first\": \"John\", \"last\":\"Doe\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: protobuf-serialize-action properties: schema: \"message Person { required string first = 1; required string last = 2; }\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f protobuf-serialize-action-binding.yaml", "kamel bind --name protobuf-serialize-action-binding timer-source?message='{\"first\":\"John\",\"last\":\"Doe\"}' --step json-deserialize-action --step protobuf-serialize-action -p step-1.schema='message Person { required string first = 1; required string last = 2; }' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/protobuf-serialize-action
20.16.2. Filesystems
20.16.2. Filesystems A filesystems directory on the host physical machine that can be accessed directly from the guest virtual machine ... <devices> <filesystem type='template'> <source name='my-vm-template'/> <target dir='/'/> </filesystem> <filesystem type='mount' accessmode='passthrough'> <driver type='path' wrpolicy='immediate'/> <source dir='/export/to/guest'/> <target dir='/import/from/host'/> <readonly/> </filesystem> ... </devices> ... Figure 20.24. Devices - filesystems The filesystem attribute has the following possible values: type ='mount' - Specifies the host physical machine directory to mount in the guest virtual machine. This is the default type if one is not specified. This mode also has an optional sub-element driver , with an attribute type ='path' or type ='handle' . The driver block has an optional attribute wrpolicy that further controls interaction with the host physical machine page cache; omitting the attribute reverts to the default setting, while specifying a value immediate means that a host physical machine writeback is immediately triggered for all pages touched during a guest virtual machine file write operation. type ='template' - Specifies the OpenVZ filesystem template and is only used by OpenVZ driver. type ='file' - Specifies that a host physical machine file will be treated as an image and mounted in the guest virtual machine. This filesystem format will be autodetected and is only used by LXC driver. type ='block' - Specifies the host physical machine block device to mount in the guest virtual machine. The filesystem format will be autodetected and is only used by LXC driver. type ='ram' - Specifies that an in-memory filesystem, using memory from the host physical machine OS will be used. The source element has a single attribute usage which gives the memory usage limit in kibibytes and is only used by LXC driver. type ='bind' - Specifies a directory inside the guest virtual machine which will be bound to another directory inside the guest virtual machine. This element is only used by LXC driver. accessmode - Specifies the security mode for accessing the source. Currently this only works with type='mount' for the QEMU/KVM driver. The possible values are: passthrough - Specifies that the source is accessed with the user's permission settings that are set from inside the guest virtual machine. This is the default access mode if one is not specified. mapped - Specifies that the source is accessed with the permission settings of the hypervisor. squash - Similar to 'passthrough' , the exception is that failure of privileged operations like chown is ignored. This makes a passthrough-like mode usable for people who run the hypervisor as non-root. <source> - Specifies the resource on the host physical machine that is being accessed in the guest virtual machine. The name attribute must be used with <type='template'> , and the dir attribute must be used with <type='mount'> . The usage attribute is used with <type='ram'> to set the memory limit in KB. target - Dictates where the source drivers can be accessed in the guest virtual machine. For most drivers this is an automatic mount point, but for QEMU-KVM this is merely an arbitrary string tag that is exported to the guest virtual machine as a hint for where to mount. readonly - Enables exporting the file system as a readonly mount for the guest virtual machine; by default, read-write access is given. 
space_hard_limit - Specifies the maximum space available to this guest virtual machine's filesystem. space_soft_limit - Specifies the maximum space available to this guest virtual machine's filesystem. The container is permitted to exceed its soft limits for a grace period of time. Afterwards, the hard limit is enforced.
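Building on the type ='ram' description above, the following is a minimal sketch of an in-memory filesystem for an LXC guest; the mount point and size are illustrative assumptions, not values from this guide.
...
<devices>
  <filesystem type='ram'>
    <!-- usage gives the memory limit in kibibytes; 524288 KiB is 512 MiB -->
    <source usage='524288'/>
    <target dir='/mnt/ramfs'/>
  </filesystem>
</devices>
...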
[ "<devices> <filesystem type='template'> <source name='my-vm-template'/> <target dir='/'/> </filesystem> <filesystem type='mount' accessmode='passthrough'> <driver type='path' wrpolicy='immediate'/> <source dir='/export/to/guest'/> <target dir='/import/from/host'/> <readonly/> </filesystem> </devices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-section-libvirt-dom-xml-devices-filesystems
A. Revision History
A. Revision History Revision History Revision 1-1 2015-02-25 Laura Bailey Rebuild with sort_order Revision 0-40.2.400 2013-10-31 Rudiger Landmann Rebuild with publican 4.0.0 Revision 0-40.2 Thu Dec 13 2012 Martin Prpic Added a note about removal of multilib Python packages. Revision 0-39 Fri May 20 2011 Ryan Lerch Copyedit in the Installation section Revision 1-0 Tue Mar 22 2011 Ryan Lerch Initial Version of the Red Hat Enterprise Linux 6.1 Release Notes
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_release_notes/appe-publican-revision_history
function::assert
function::assert Name function::assert - evaluate assertion Synopsis Arguments expression The expression to evaluate msg The formatted message string Description This function checks the expression and aborts the current running probe if expression evaluates to zero. Uses error and may be caught by try{} catch{}.
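A minimal usage sketch (the probe and values are illustrative assumptions): the assertion aborts the running probe and reports the message when the expression evaluates to zero.
probe begin {
    x = 2 + 2
    # aborts the running probe and prints the message if x != 4
    assert(x == 4, sprintf("unexpected value: %d", x))
    exit()
}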
[ "assert(expression:,msg:)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-assert
5.11.4. Setting Up Polyinstantiated Directories
5.11.4. Setting Up Polyinstantiated Directories The /tmp/ and /var/tmp/ directories are normally used for temporary storage by all programs, services, and users. Such a setup, however, makes these directories vulnerable to race condition attacks or an information leak based on file names. SELinux offers a solution in the form of polyinstantiated directories. This effectively means that both /tmp/ and /var/tmp/ are instantiated, making them appear private for each user. When instantiation of directories is enabled, each user's /tmp/ and /var/tmp/ directory is automatically mounted under /tmp-inst and /var/tmp/tmp-inst . Follow these steps to enable polyinstantiation of directories: Uncomment the last three lines in the /etc/security/namespace.conf file to enable instantiation of the /tmp/ , /var/tmp/ , and users' home directories: Ensure that in the /etc/pam.d/login file, the pam_namespace.so module is configured for session: Reboot your system.
[ "~]USD tail -n 3 /etc/security/namespace.conf /tmp /tmp-inst/ level root,adm /var/tmp /var/tmp/tmp-inst/ level root,adm USDHOME USDHOME/USDUSER.inst/ level", "~]USD grep namespace /etc/pam.d/login session required pam_namespace.so" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/polyinstantiated-directories
Chapter 1. Compiling your Red Hat build of Quarkus applications to native executables
Chapter 1. Compiling your Red Hat build of Quarkus applications to native executables As an application developer, you can use Red Hat build of Quarkus 3.8 to create microservices written in Java that run on OpenShift Container Platform and serverless environments. Quarkus applications can run as regular Java applications (on top of a Java Virtual Machine), or be compiled into native executables. Applications compiled to native executables have a smaller memory footprint and faster startup times than their Java counterpart. This guide shows you how to compile the Red Hat build of Quarkus 3.8 Getting Started project into a native executable and how to configure and test the native executable. You will need the application that you created earlier in Getting started with Red Hat build of Quarkus . Building a native executable with Red Hat build of Quarkus covers: Building a native executable with a single command by using a container runtime such as Podman or Docker Creating a custom container image by using the produced native executable Creating a container image by using the OpenShift Container Platform Docker build strategy Deploying the Quarkus native application to OpenShift Container Platform Configuring the native executable Testing the native executable Prerequisites Have the JAVA_HOME environment variable set to specify the location of the Java SDK. Log in to the Red Hat Customer Portal to download Red Hat build of OpenJDK from the Software Downloads page. An Open Container Initiative (OCI) compatible container runtime, such as Podman or Docker. A completed Quarkus Getting Started project. To learn how to build the Quarkus Getting Started project, see Getting started with Red Hat build of Quarkus . Alternatively, you can download the Quarkus Quickstarts archive or clone the Quarkus Quickstarts Git repository. The sample project is in the getting-started directory. 1.1. Producing a native executable A native binary is an executable that is created to run on a specific operating system and CPU architecture. The following list outlines some examples of a native executable: An ELF binary for Linux AMD 64 bits An EXE binary for Windows AMD 64 bits An ELF binary for ARM 64 bits Note Only the ELF binary for Linux AMD 64 bits is supported in Red Hat build of Quarkus. When you build a native executable, one advantage is that your application and dependencies, including the JVM, are packaged into a single file. The native executable for your application contains the following items: The compiled application code The required Java libraries A reduced version of the virtual machine (VM) for improved application startup times and minimal disk and memory footprint, which is also tailored for the application code and its dependencies To produce a native executable from your Quarkus application, you can select either an in-container build or a local-host build. The following table explains the different building options that you can use: Table 1.1. 
Building options for producing a native executable Building option Requires Uses Results in Benefits In-container build - Supported A container runtime, for example, Podman or Docker The default registry.access.redhat.com/quarkus/mandrel-for-jdk-21-rhel8:23.1 builder image A Linux 64-bit executable using the CPU architecture of the host GraalVM does not need to be set up locally, which makes your CI pipelines run more efficiently Local-host build - Only supported upstream A local installation of GraalVM or Mandrel Its local installation as a default for the quarkus.native.builder-image property An executable that has the same operating system and CPU architecture as the machine on which the build is executed An alternative for developers who are not allowed to, or do not want to, use tools such as Docker or Podman. Overall, it is faster than the in-container build approach. Important Red Hat build of Quarkus 3.8 only supports the building of native Linux executables by using a Java 21-based Red Hat build of Quarkus Native builder image, which is a productized distribution of Mandrel . While other images are available in the community, they are not supported in the product, so you should not use them for production builds that you want Red Hat to provide support for. Applications whose source is written based on Java 17, with no Java 18 - 21 features used, can still be compiled to a native executable by using the Java 21-based Mandrel 23.1 base image. Building native executables by using Oracle GraalVM Community Edition (CE), Mandrel community edition, or any other distributions of GraalVM is not supported for Red Hat build of Quarkus. 1.1.1. Producing a native executable by using an in-container build To create a native executable and run the native image tests, use the native profile that is provided by Red Hat build of Quarkus for an in-container build. Prerequisites Podman or Docker is installed. The container has access to at least 8GB of memory. Procedure Open the Getting Started project pom.xml file, and verify that the project includes the native profile: <profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <properties> <skipITs>false</skipITs> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles> Build a native executable by using one of the following methods: Using Maven: For Docker: ./mvnw package -Dnative -Dquarkus.native.container-build=true For Podman: ./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman Using the Quarkus CLI: For Docker: quarkus build --native -Dquarkus.native.container-build=true For Podman: quarkus build --native -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman Step results These commands create a *-runner binary in the target directory, where the following applies: The *-runner file is the built native binary produced by Quarkus. The target directory is a directory that Maven creates when you build a Maven application. Important Compiling a Quarkus application to a native executable consumes a large amount of memory during analysis and optimization. You can limit the amount of memory used during native compilation by setting the quarkus.native.native-image-xmx configuration property. Setting low memory limits might increase the build time. To run the native executable, enter the following command: ./target/*-runner Additional resources Native executable configuration properties 1.1.2. 
Producing a native executable by using a local-host build If you are not using Docker or Podman, use the Quarkus local-host build option to create and run a native executable. Using the local-host build approach is faster than using containers and is suitable for machines that use a Linux operating system. Important Red Hat build of Quarkus does not support using the following procedure in production. Use this method only when testing or as a backup approach when Docker or Podman is not available. Prerequisites A local installation of Mandrel or GraalVm, correctly configured according to the Building a native executable guide. Additionally, for a GraalVM installation, native-image must also be installed. Procedure For GraalVM or Mandrel, build a native executable by using one of the following methods: Using Maven: ./mvnw package -Dnative Using the Quarkus CLI: quarkus build --native Step results These commands create a *-runner binary in the target directory, where the following applies: The *-runner file is the built native binary that Quarkus produces. The target directory is a directory that Maven creates when you build a Maven application. Note When you build the native executable, the prod profile is enabled unless modified in the quarkus.profile property. Run the native executable: ./target/*-runner Additional resources For more information, see the Producing a native executable section of the Quarkus "Building a native executable" guide. 1.2. Creating a custom container image You can create a container image from your Quarkus application by using one of the following methods: Creating a container manually Creating a container by using the OpenShift Container Platform Docker build Important Compiling a Quarkus application to a native executable consumes a large amount of memory during analysis and optimization. You can limit the amount of memory used during native compilation by setting the quarkus.native.native-image-xmx configuration property. Setting low memory limits might increase the build time. 1.2.1. Creating a container manually You can manually create a container image with your application for Linux AMD64. When you produce a native image by using the Quarkus Native container, the native image creates an executable that targets Linux AMD64. If your host operating system is different from Linux AMD64, you cannot run the binary directly and you need to create a container manually. Your Quarkus Getting Started project includes a Dockerfile.native in the src/main/docker directory with the following content: FROM registry.access.redhat.com/ubi8/ubi-minimal:8.9 WORKDIR /work/ RUN chown 1001 /work \ && chmod "g+rwX" /work \ && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 ENTRYPOINT ["./application", "-Dquarkus.http.host=0.0.0.0"] Note Universal Base Image (UBI) The following list displays the suitable images for use with Dockerfiles. Red Hat Universal Base Image 8 (UBI8). This base image is designed and engineered to be the base layer for all of your containerized applications, middleware, and utilities. Red Hat Universal Base Image 8 Minimal (UBI8-minimal). A stripped-down UBI8 image that uses microdnf as a package manager. All Red Hat Base images are available on the Container images catalog site. 
Procedure Build a native Linux executable by using one of the following methods: Docker: ./mvnw package -Dnative -Dquarkus.native.container-build=true Podman: ./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman Build the container image by using one of the following methods: Docker: docker build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started . Podman: podman build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started . Run the container by using one of the following methods: Docker: docker run -i --rm -p 8080:8080 quarkus-quickstart/getting-started Podman: podman run -i --rm -p 8080:8080 quarkus-quickstart/getting-started 1.2.2. Creating a container by using the OpenShift Docker build You can create a container image for your Quarkus application by using the OpenShift Container Platform Docker build strategy. This strategy creates a container image by using a build configuration in the cluster. Prerequisites You have access to an OpenShift Container Platform cluster and the latest version of the oc tool installed. For information about installing oc , see Installing the CLI in the Installing and configuring OpenShift Container Platform clusters guide. A URL for the OpenShift Container Platform API endpoint. Procedure Log in to the OpenShift CLI: oc login -u <username_url> Create a new project in OpenShift: oc new-project <project_name> Create a build config based on the src/main/docker/Dockerfile.native file: cat src/main/docker/Dockerfile.native | oc new-build --name <build_name> --strategy=docker --dockerfile - Build the project: oc start-build <build_name> --from-dir . Deploy the project to OpenShift Container Platform: oc new-app <build_name> Expose the services: oc expose svc/ <build_name> 1.3. Native executable configuration properties Configuration properties define how the native executable is generated. You can configure your Quarkus application by using the application.properties file. Configuration properties The following table lists the configuration properties that you can set to define how the native executable is generated: Property Description Type Default quarkus.native.debug.enabled If debug is enabled and debug symbols are generated, the symbols are generated in a separate .debug file. boolean false quarkus.native.resources.excludes A comma-separated list of globs to match resource paths that should not be added to the native image. list of strings quarkus.native.additional-build-args Additional arguments to pass to the build process. list of strings quarkus.native.enable-http-url-handler Enables HTTP URL handler, with which you can do URL.openConnection() for HTTP URLs. boolean true quarkus.native.enable-https-url-handler Enables HTTPS URL handler, with which you can do URL.openConnection() for HTTPS URLs. boolean false quarkus.native.enable-all-security-services Adds all security services to the native image. boolean false quarkus.native.add-all-charsets Adds all character sets to the native image. This increases the image size. boolean false quarkus.native.graalvm-home Contains the path of the GraalVM distribution. string USD{GRAALVM_HOME:} quarkus.native.java-home Contains the path of the JDK. file USD{java.home} quarkus.native.native-image-xmx The maximum Java heap used to generate the native image. string quarkus.native.debug-build-process Waits for a debugger to attach to the build process before running the native image build. 
This is an advanced option for those familiar with GraalVM internals. boolean false quarkus.native.publish-debug-build-process-port Publishes the debug port when building with docker if debug-build-process is true . boolean true quarkus.native.cleanup-server Restarts the native image server. boolean false quarkus.native.enable-isolates Enables isolates to improve memory management. boolean true quarkus.native.enable-fallback-images Creates a JVM-based fallback image if the native image fails. boolean false quarkus.native.enable-server Uses the native image server. This can speed up compilation but can result in lost changes due to cache invalidation issues. boolean false quarkus.native.auto-service-loader-registration Automatically registers all META-INF/services entries. boolean false quarkus.native.dump-proxies Dumps the bytecode of all proxies for inspection. boolean false quarkus.native.container-build Builds that use a container runtime. Docker is used by default. boolean false quarkus.native.builder-image The docker image to build the image. string registry.access.redhat.com/quarkus/mandrel-for-jdk-21-rhel8:23.1 quarkus.native.container-runtime The container runtime used to build the image. For example, Docker. string quarkus.native.container-runtime-options Options to pass to the container runtime. list of strings quarkus.native.enable-vm-inspection Enables VM introspection in the image. boolean false quarkus.native.full-stack-traces Enables full stack traces in the image. boolean true quarkus.native.enable-reports Generates reports on call paths and included packages, classes, or methods. boolean false quarkus.native.report-exception-stack-traces Reports exceptions with a full stack trace. boolean true quarkus.native.report-errors-at-runtime Reports errors at runtime. This might cause your application to fail at runtime if you use unsupported features. boolean false quarkus.native.resources.includes A comma-separated list of globs to match resource paths that should be added to the native image. Use a slash ( / ) character as a path separator on all platforms. Globs must not start with a slash. For example, if you have src/main/resources/ignored.png and src/main/resources/foo/selected.png in your source tree and one of your dependency JARs contains a bar/some.txt file, with quarkus.native.resources.includes set to foo/**,bar/**/*.txt , the files src/main/resources/foo/selected.png and bar/some.txt will be included in the native image, while src/main/resources/ignored.png will not be included. For more information, see the following table, which lists the supported glob features. list of strings quarkus.native.debug.enabled Enables debugging and generates debug symbols in a separate .debug file. When used with quarkus.native.container-build , Red Hat build of Quarkus only supports Red Hat Enterprise Linux or other Linux distributions as they contain the binutils package that installs the objcopy utility that splits the debug info from the native image. boolean false During build configuration, you can use glob patterns if you want to include a set of files or resources that share a common pattern or location in your project. For example, if you have a directory that contains multiple configuration files, you can use a glob pattern to include all files within that directory. For example: quarkus.native.resources.includes = my/config/files/* The following example shows a comma-separated list of globs to match resource paths to add to the native image. 
These patterns result in adding any .png images found on the classpath to the native image as well as all files that end with .txt under the folder bar even if nested under subdirectories: quarkus.native.resources.includes = **/*.png,bar/**/*.txt Supported glob features The following table lists the supported glob features and descriptions: Character Feature description * Matches a possibly-empty sequence of characters that does not contain slash ( / ). ** Matches a possibly-empty sequence of characters that might contain slash ( / ). ? Matches one character, but not slash. [abc] Matches one character specified in the bracket, but not slash. [a-z] Matches one character from the range specified in the bracket, but not slash. [!abc] Matches one character not specified in the bracket; does not match slash. [!a-z] Matches one character outside the range specified in the bracket; does not match slash. {one,two,three} Matches any of the alternating tokens separated by commas; the tokens can contain wildcards, nested alternations, and ranges. \ The escape character. There are three levels of escaping: application.properties parser, MicroProfile Config list converter, and Glob parser. All three levels use the backslash as the escape character. Additional resources Configuring your Red Hat build of Quarkus applications 1.3.1. Configuring memory consumption for Red Hat build of Quarkus native compilation Compiling a Red Hat build of Quarkus application to a native executable consumes a large amount of memory during analysis and optimization. You can limit the amount of memory used during native compilation by setting the quarkus.native.native-image-xmx configuration property. Setting low memory limits might increase the build time. Procedure Use one of the following methods to set a value for the quarkus.native.native-image-xmx property to limit the memory consumption during the native image build time: Using the application.properties file: quarkus.native.native-image-xmx= <maximum_memory> Setting system properties: mvn package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.native-image-xmx=<maximum_memory> This command builds the native executable with Docker. To use Podman, add the -Dquarkus.native.container-runtime=podman argument. Note For example, to set the memory limit to 8 GB, enter quarkus.native.native-image-xmx=8g . The value must be a multiple of 1024 and greater than 2MB. Append the letter m or M to indicate megabytes, or g or G to indicate gigabytes. 1.4. Testing the native executable Test the application in native mode to test the functionality of the native executable. Use the @QuarkusIntegrationTest annotation to build the native executable and run tests against the HTTP endpoints. Important The following example shows how to test a native executable with a local installation of GraalVM or Mandrel. Before you begin, consider the following points: Red Hat build of Quarkus does not support this scenario, as outlined in Producing a native executable . The native executable you are testing with here must match the operating system and architecture of the host. Therefore, this procedure does not work on a macOS or an in-container build. 
Procedure Open the pom.xml file and verify that the build section has the following elements: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner</native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin> The Maven Failsafe plugin ( maven-failsafe-plugin ) runs the integration test and indicates the location of the native executable that is generated. Open the src/test/java/org/acme/GreetingResourceIT.java file and verify that it includes the following content: package org.acme; import io.quarkus.test.junit.QuarkusIntegrationTest; @QuarkusIntegrationTest 1 public class GreetingResourceIT extends GreetingResourceTest { 2 // Execute the same tests but in native mode. } 1 Use another test runner that starts the application from the native file before the tests. The executable is retrieved by using the native.image.path system property configured in the Maven Failsafe plugin. 2 This example extends the GreetingResourceTest , but you can also create a new test. Run the test: ./mvnw verify -Dnative The following example shows the output of this command: ./mvnw verify -Dnative .... GraalVM Native Image: Generating 'getting-started-1.0.0-SNAPSHOT-runner' (executable)... ======================================================================================================================== [1/8] Initializing... (6.6s @ 0.22GB) Java version: 17.0.7+7, vendor version: Mandrel-23.1.0.0-Final Graal compiler: optimization level: 2, target machine: x86-64-v3 C compiler: gcc (redhat, x86_64, 13.2.1) Garbage collector: Serial GC (max heap size: 80% of RAM) 2 user-specific feature(s) - io.quarkus.runner.Feature: Auto-generated class by Red Hat build of Quarkus from the existing extensions - io.quarkus.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase [2/8] Performing analysis... [******] (40.0s @ 2.05GB) 10,318 (86.40%) of 11,942 types reachable 15,064 (57.36%) of 26,260 fields reachable 52,128 (55.75%) of 93,501 methods reachable 3,298 types, 109 fields, and 2,698 methods registered for reflection 63 types, 68 fields, and 55 methods registered for JNI access 4 native libraries: dl, pthread, rt, z [3/8] Building universe... (5.9s @ 1.31GB) [4/8] Parsing methods... [**] (3.7s @ 2.08GB) [5/8] Inlining methods... [***] (2.0s @ 1.92GB) [6/8] Compiling methods... [******] (34.4s @ 3.25GB) [7/8] Layouting methods... [**] (4.1s @ 1.78GB) [8/8] Creating image... [**] (4.5s @ 2.31GB) 20.93MB (48.43%) for code area: 33,233 compilation units 21.95MB (50.80%) for image heap: 285,664 objects and 8 resources 337.06kB ( 0.76%) for other data 43.20MB in total ....
[INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:integration-test (default) @ getting-started --- [INFO] Using auto detected provider org.apache.maven.surefire.junitplatform.JUnitPlatformProvider [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.acme.GreetingResourceIT __ ____ __ _____ ___ __ ____ ______ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ 2024-06-27 14:04:52,681 INFO [io.quarkus] (main) getting-started 1.0.0-SNAPSHOT native (powered by Quarkus 3.8.6.SP3-redhat-00002) started in 0.038s. Listening on: http://0.0.0.0:8081 2024-06-27 14:04:52,682 INFO [io.quarkus] (main) Profile prod activated. 2024-06-27 14:04:52,682 INFO [io.quarkus] (main) Installed features: [cdi, resteasy-reactive, smallrye-context-propagation, vertx] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.696 s - in org.acme.GreetingResourceIT [INFO] [INFO] Results: [INFO] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:verify (default) @ getting-started --- Note Quarkus waits 60 seconds for the native image to start before automatically failing the native tests. You can change this duration by configuring the quarkus.test.wait-time system property. You can extend the wait time by using the following command, where <duration> is the wait time in seconds: ./mvnw verify -Dnative -Dquarkus.test.wait-time=<duration> Note By default, native tests run by using the prod profile unless modified in the quarkus.test.native-image-profile property. 1.4.1. Excluding tests when running as a native executable When you run tests against your native executable, you can only run black-box testing, for example, interacting with the HTTP endpoints of your application. Note Black box refers to the hidden internal workings of a product or program, such as in black-box testing. Because tests do not run natively, you cannot link against your application's code like you do when running tests on the JVM. Therefore, in your native tests, you cannot inject beans. You can share your test class between your JVM and native executions and exclude certain tests by using the @DisabledOnIntegrationTest annotation to run tests only on the JVM. 1.4.2. Testing an existing native executable By using the Failsafe Maven plugin, you can test against the existing executable build. You can run multiple sets of tests in stages on the binary after it is built. Note To test the native executable that you produced with Quarkus, use the available Maven commands. There are no equivalent Quarkus CLI commands to complete this task by using the command line. Procedure Run a test against a native executable that is already built: ./mvnw test-compile failsafe:integration-test -Dnative This command runs the test against the existing native image by using the Failsafe Maven plugin. Alternatively, you can specify the path to the native executable with the following command where <path> is the native image path: ./mvnw test-compile failsafe:integration-test -Dnative.image.path=<path> 1.5. Additional resources Deploying your Red Hat build of Quarkus applications to OpenShift Container Platform Developing and compiling your Red Hat build of Quarkus applications with Apache Maven Quarkus community: Building a native executable Apache Maven Project Red Hat Universal Base Image 8 Minimal The List of UBI-minimal Tags
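As a supplementary illustration, you can combine the commands above with standard Maven Failsafe test filtering to run a single integration test class against an existing native executable. This sketch reuses the GreetingResourceIT class from the earlier example and relies on the standard Failsafe it.test property rather than a Quarkus-specific option: ./mvnw test-compile failsafe:integration-test -Dnative -Dit.test=GreetingResourceIT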
[ "<profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <properties> <skipITs>false</skipITs> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles>", "./mvnw package -Dnative -Dquarkus.native.container-build=true", "./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman", "quarkus build --native -Dquarkus.native.container-build=true", "quarkus build --native -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman", "./target/*-runner", "./mvnw package -Dnative", "quarkus build --native", "./target/*-runner", "FROM registry.access.redhat.com/ubi8/ubi-minimal:8.9 WORKDIR /work/ RUN chown 1001 /work && chmod \"g+rwX\" /work && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 ENTRYPOINT [\"./application\", \"-Dquarkus.http.host=0.0.0.0\"]", "registry.access.redhat.com/ubi8/ubi:8.9", "registry.access.redhat.com/ubi8/ubi-minimal:8.9", "./mvnw package -Dnative -Dquarkus.native.container-build=true", "./mvnw package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.container-runtime=podman", "docker build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started", "build -f src/main/docker/Dockerfile.native -t quarkus-quickstart/getting-started", "docker run -i --rm -p 8080:8080 quarkus-quickstart/getting-started", "run -i --rm -p 8080:8080 quarkus-quickstart/getting-started", "login -u <username_url>", "new-project <project_name>", "cat src/main/docker/Dockerfile.native | oc new-build --name <build_name> --strategy=docker --dockerfile -", "start-build <build_name> --from-dir .", "new-app <build_name>", "expose svc/ <build_name>", "quarkus.native.resources.includes = my/config/files/*", "quarkus.native.resources.includes = **/*.png,bar/**/*.txt", "quarkus.native.native-image-xmx= <maximum_memory>", "mvn package -Dnative -Dquarkus.native.container-build=true -Dquarkus.native.native-image-xmx=<maximum_memory>", "<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner</native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> </plugin>", "package org.acme; import io.quarkus.test.junit.QuarkusIntegrationTest; @QuarkusIntegrationTest 1 public class GreetingResourceIT extends GreetingResourceTest { 2 // Execute the same tests but in native mode. }", "./mvnw verify -Dnative", "./mvnw verify -Dnative . GraalVM Native Image: Generating 'getting-started-1.0.0-SNAPSHOT-runner' (executable) ======================================================================================================================== [1/8] Initializing... 
(6.6s @ 0.22GB) Java version: 17.0.7+7, vendor version: Mandrel-23.1.0.0-Final Graal compiler: optimization level: 2, target machine: x86-64-v3 C compiler: gcc (redhat, x86_64, 13.2.1) Garbage collector: Serial GC (max heap size: 80% of RAM) 2 user-specific feature(s) - io.quarkus.runner.Feature: Auto-generated class by Red&#160;Hat build of Quarkus from the existing extensions - io.quarkus.runtime.graal.DisableLoggingFeature: Disables INFO logging during the analysis phase [2/8] Performing analysis... [******] (40.0s @ 2.05GB) 10,318 (86.40%) of 11,942 types reachable 15,064 (57.36%) of 26,260 fields reachable 52,128 (55.75%) of 93,501 methods reachable 3,298 types, 109 fields, and 2,698 methods registered for reflection 63 types, 68 fields, and 55 methods registered for JNI access 4 native libraries: dl, pthread, rt, z [3/8] Building universe... (5.9s @ 1.31GB) [4/8] Parsing methods... [**] (3.7s @ 2.08GB) [5/8] Inlining methods... [***] (2.0s @ 1.92GB) [6/8] Compiling methods... [******] (34.4s @ 3.25GB) [7/8] Layouting methods... [[7/8] Layouting methods... [**] (4.1s @ 1.78GB) [8/8] Creating image... [**] (4.5s @ 2.31GB) 20.93MB (48.43%) for code area: 33,233 compilation units 21.95MB (50.80%) for image heap: 285,664 objects and 8 resources 337.06kB ( 0.76%) for other data 43.20MB in total . [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:integration-test (default) @ getting-started --- [INFO] Using auto detected provider org.apache.maven.surefire.junitplatform.JUnitPlatformProvider [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.acme.GreetingResourceIT __ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ 2024-06-27 14:04:52,681 INFO [io.quarkus] (main) getting-started 1.0.0-SNAPSHOT native (powered by Quarkus 3.8.6.SP3-redhat-00002) started in 0.038s. Listening on: http://0.0.0.0:8081 2024-06-27 14:04:52,682 INFO [io.quarkus] (main) Profile prod activated. 2024-06-27 14:04:52,682 INFO [io.quarkus] (main) Installed features: [cdi, resteasy-reactive, smallrye-context-propagation, vertx] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.696 s - in org.acme.GreetingResourceIT [INFO] [INFO] Results: [INFO] [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M7:verify (default) @ getting-started ---", "./mvnw verify -Dnative -Dquarkus.test.wait-time= <duration>", "./mvnw test-compile failsafe:integration-test -Dnative", "./mvnw test-compile failsafe:integration-test -Dnative.image.path=<path>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/compiling_your_red_hat_build_of_quarkus_applications_to_native_executables/assembly_quarkus-building-native-executable_quarkus-building-native-executable
Chapter 11. Completing initial setup
Chapter 11. Completing initial setup The Initial Setup window opens the first time you reboot your system after the installation process is complete, if you have selected the Server with GUI base environment during installation. If you have registered and installed RHEL from the CDN, the Subscription Manager option displays a note that all installed products are covered by valid entitlements. The information displayed in the Initial Setup window might vary depending on what was configured during installation. At a minimum, the Licensing and Subscription Manager options are displayed. Prerequisites You have completed the graphical installation. You have an active, non-evaluation Red Hat Enterprise Linux subscription. Procedure From the Initial Setup window, select Licensing Information . The License Agreement window opens and displays the licensing terms for Red Hat Enterprise Linux. Review the license agreement and select the I accept the license agreement checkbox. You must accept the license agreement to proceed. Exiting Initial Setup without accepting the license agreement causes a system restart. When the restart process is complete, you are prompted to accept the license agreement again. Click Done to apply the settings and return to the Initial Setup window. Optional: If you did not configure network settings earlier, click Finish Configuration , because you cannot register your system immediately. Red Hat Enterprise Linux 8 starts and you can log in, activate access to the network, and register your system. See Subscription manager post installation for more information. If you have configured network settings, as described in Network hostname , you can register your system immediately, as shown in the following steps. From the Initial Setup window, select Subscription Manager . The Subscription Manager graphical interface opens and displays the service you are going to register with: subscription.rhsm.redhat.com . To register with an activation key, select I will use an activation key . For more information about how to view activation keys, see Creating and managing activation keys . Click Next . Do one of the following: If you selected to register by using the activation key, enter the Organization (your Organization ID) and Activation key . To manually attach the subscription, select the Manually attach subscriptions after registration option. If you are not using an activation key, enter your Login and Password details. Enter the System Name . Click Register . Confirm the Subscription details and click Attach . The following confirmation message is displayed: Registration with Red Hat Subscription Management is Done! Click Done . The Initial Setup window opens. Click Finish Configuration . The login window opens. Configure your system. See the Configuring basic system settings document for more information. Methods to register RHEL Depending on your requirements, there are five methods to register your system: Using the Red Hat Content Delivery Network (CDN) to register your system, attach RHEL subscriptions, and install Red Hat Enterprise Linux. During installation by using Initial Setup . After installation by using the command line. After installation by using the Subscription Manager user interface. After installation by using Registration Assistant. Registration Assistant is designed to help you choose the most suitable registration option for your Red Hat Enterprise Linux environment. See https://access.redhat.com/labs/registrationassistant/ for more information.
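If you register from the command line after installation instead, a minimal sketch of the registration command looks like the following; the organization ID and activation key name are placeholder values that you must replace with your own: subscription-manager register --org=<organization_ID> --activationkey=<activation_key_name>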
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/completing-initial-setup_rhel-installer
Chapter 5. Installing RHEL AI on Google Cloud Platform (GCP)
Chapter 5. Installing RHEL AI on Google Cloud Platform (GCP) For installing and deploying Red Hat Enterprise Linux AI on Google Cloud Platform, you must first convert the RHEL AI image into a GCP image. You can then launch an instance using the GCP image and deploy RHEL AI on a Google Cloud Platform machine. 5.1. Converting the RHEL AI image into a Google Cloud Platform image To create a bootable image in Google Cloud Platform you must configure your Google Cloud Platform account, create a Google Cloud Storage bucket, and create a Google Cloud Platform image using the RHEL AI raw image. Prerequisites You installed the Google Cloud Platform CLI on your specific machine. For more information about installing the GCP CLI, see Install the Google Cloud Platform CLI on Linux . You must be on a Red Hat Enterprise Linux version 9.2 - 9.4 system. Your machine must have an additional 100 GB of disk space. Procedure Log in to Google Cloud Platform with the following command: USD gcloud auth login Example output of the login: USD gcloud auth login Your browser has been opened to visit: https://accounts.google.com/o/oauth2/auth?XXXXXXXXXXXXXXXXXXXX You are now logged in as [[email protected]]. Your current project is [your-project]. You can change this setting by running: USD gcloud config set project PROJECT_ID You need to set up some Google Cloud Platform configurations and create your GCP Storage Container before creating the GCP image. Configure the Google Cloud Platform CLI to use your project. USD gcloud_project=your-gcloud-project USD gcloud config set project USDgcloud_project Create an environment variable defining the region where you want to operate. USD gcloud_region=us-central1 Create a Google Cloud Platform Storage Container. USD gcloud_bucket=name-for-your-bucket USD gsutil mb -l USDgcloud_region gs://USDgcloud_bucket Now that your GCP Storage Container is set up, you need to download the GCP tar.gz image from the Red Hat Enterprise Linux AI download page . Set the name you want to use as the RHEL AI Google Cloud Platform image. USD image_name=rhel-ai-1-3 Upload the tar.gz file to the Google Cloud Platform Storage Container by running the following command: USD gsutil cp rhelai_gcp.tar.gz "gs://USD{gcloud_bucket}/USDimage_name.tar.gz" Create a Google Cloud Platform image from the tar.gz file you just uploaded with the following command: USD gcloud compute images create \ "USDimage_name" \ --source-uri="gs://USD{gcloud_bucket}/USDimage_name.tar.gz" \ --family "rhel-ai" \ --guest-os-features=GVNIC 5.2. Deploying your instance on Google Cloud Platform using the CLI You can launch an instance with your new RHEL AI Google Cloud Platform image from the Google Cloud Platform web console or the CLI. You can use whichever method of deployment you want to launch your instance. The following procedure displays how you can use the CLI to launch a Google Cloud Platform instance with the custom Google Cloud Platform image. If you choose to use the CLI as a deployment option, there are several configurations you have to create, as shown in "Prerequisites". Prerequisites You created your RHEL AI Google Cloud Platform image. For more information, see "Converting the RHEL AI image into a Google Cloud Platform image". You installed the Google Cloud Platform CLI on your specific machine, see Install the Google Cloud Platform CLI on Linux .
Procedure Log in to your Google Cloud Platform account by running the following command: USD gcloud auth login Before launching your Google Cloud Platform instance on the CLI, you need to create several configuration variables for your instance. You need to select the machine type that you want to use for the deployment. List all the machine types in the desired zone by running the following command: USD gcloud compute machine-types list --zones=<zone> Make a note of your preferred machine type; you will need it for your instance deployment. You can now start creating your Google Cloud Platform instance. Populate environment variables for when you create the instance. name=my-rhelai-instance zone=us-central1-a machine_type=a3-highgpu-8g accelerator="type=nvidia-h100-80gb,count=8" image=my-custom-rhelai-image disk_size=1024 subnet=default Configure the zone to be used. USD gcloud config set compute/zone USDzone You can now launch your instance by running the following command: USD gcloud compute instances create \ USD{name} \ --machine-type USD{machine_type} \ --image USDimage \ --zone USDzone \ --subnet USDsubnet \ --boot-disk-size USD{disk_size} \ --boot-disk-device-name USD{name} \ --accelerator=USDaccelerator Verification To verify that your Red Hat Enterprise Linux AI tools are installed correctly, run the ilab command: USD ilab Example output USD ilab Usage: ilab [OPTIONS] COMMAND [ARGS]... CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/<user>/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by... model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat generate data generate serve model serve train model train
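As an additional verification sketch, assuming the environment variables defined earlier in this chapter are still set in your shell, you can confirm that the image and the instance exist with the following gcloud commands: USD gcloud compute images describe "USDimage_name" USD gcloud compute instances list --filter="name=USD{name}"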
[ "gcloud auth login", "gcloud auth login Your browser has been opened to visit: https://accounts.google.com/o/oauth2/auth?XXXXXXXXXXXXXXXXXXXX You are now logged in as [[email protected]]. Your current project is [your-project]. You can change this setting by running: USD gcloud config set project PROJECT_ID", "gcloud_project=your-gcloud-project gcloud config set project USDgcloud_project", "gcloud_region=us-central1", "gcloud_bucket=name-for-your-bucket gsutil mb -l USDgcloud_region gs://USDgcloud_bucket", "image_name=rhel-ai-1-3", "gsutil cp rhelai_gcp.tar.gz \"gs://USD{gcloud_bucket}/USDimage_name.tar.gz\"", "gcloud compute images create \"USDimage_name\" --source-uri=\"gs://USD{gcloud_bucket}/USDimage_name.tar.gz\" --family \"rhel-ai\" --guest-os-features=GVNIC", "gcloud auth login", "gcloud compute machine-types list --zones=<zone>", "name=my-rhelai-instance zone=us-central1-a machine_type=a3-highgpu-8g accelerator=\"type=nvidia-h100-80gb,count=8\" image=my-custom-rhelai-image disk_size=1024 subnet=default", "gcloud config set compute/zone USDzone", "gcloud compute instances create USD{name} --machine-type USD{machine_type} --image USDimage --zone USDzone --subnet USDsubnet --boot-disk-size USD{disk_size} --boot-disk-device-name USD{name} --accelerator=USDaccelerator", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/<user>/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat generate data generate serve model serve train model train" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/installing/installing_gcp
function::task_uid
function::task_uid Name function::task_uid - The user identifier of the task. Synopsis Arguments task task_struct pointer. General Syntax task_uid:long(task:long) Description This function returns the user id of the given task.
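For example, a minimal SystemTap one-liner that prints the user ID of the current task every five seconds might look like the following; it assumes the task_current tapset function is available to supply the task_struct pointer: stap -e 'probe timer.s(5) { printf("uid: %d\n", task_uid(task_current())); exit() }'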
[ "function task_uid:long(task:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-task-uid
33.8. Managing Print Jobs
33.8. Managing Print Jobs When you send a print job to the printer daemon, such as printing a text file from Emacs or printing an image from The GIMP , the print job is added to the print spool queue. The print spool queue is a list of print jobs that have been sent to the printer and information about each print request, such as the status of the request, the job number, and more. During the printing process, the Printer Status icon appears in the Notification Area on the panel. To check the status of a print job, double-click the Printer Status icon, which displays a window similar to Figure 33.12, "GNOME Print Status" . Figure 33.12. GNOME Print Status To cancel a specific print job listed in the GNOME Print Status , select it from the list and select Edit => Cancel Documents from the pulldown menu. To view the list of print jobs in the print spool from a shell prompt, type the command lpq . The last few lines look similar to the following: Example 33.1. Example of lpq output If you want to cancel a print job, find the job number of the request with the command lpq and then use the command lprm job number . For example, lprm 902 would cancel the print job in Example 33.1, "Example of lpq output" . You must have proper permissions to cancel a print job. You cannot cancel print jobs that were started by other users unless you are logged in as root on the machine to which the printer is attached. You can also print a file directly from a shell prompt. For example, the command lpr sample.txt prints the text file sample.txt . The print filter determines what type of file it is and converts it into a format the printer can understand.
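For example, to print a file to a specific printer and then cancel all of your own pending jobs on that queue, you could use commands similar to the following, where printer_name is a placeholder for a configured printer: lpr -P printer_name sample.txt followed by lprm -P printer_name - . The single dash tells lprm to remove all jobs that you own on that queue.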
[ "Rank Owner/ID Class Job Files Size Time active user@localhost+902 A 902 sample.txt 2050 01:20:46" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-printing-managing
19.2. Problems After Installation
19.2. Problems After Installation 19.2.1. Remote Graphical Desktops and XDMCP If you have installed the X Window System and would like to log in to your Red Hat Enterprise Linux system using a graphical login manager, enable the X Display Manager Control Protocol (XDMCP). This protocol allows users to remotely log in to a desktop environment from any X-compatible client, such as a network-connected workstation or X11 terminal. The procedure below explains how to enable XDMCP. Procedure 19.3. Enabling XDMCP on IBM Z Open the /etc/gdm/custom.conf configuration file in a plain text editor such as vi or nano . In the custom.conf file, locate the section starting with [xdmcp] . In this section, add the following line: Save the file, and exit the text editor. Restart the X Window System . To do this, either reboot the whole system, or restart the GNOME Display Manager using the following command as root : Wait for the login prompt to appear again, and log in using your normal user name and password. The IBM Z server is now configured for XDMCP. You can connect to it from another workstation (client) by starting a remote X session using the X command on the client workstation. For example: Replace address with the host name of the remote X11 server. The command connects to the remote X11 server using XDMCP and displays the remote graphical login screen on display :1 of the X11 server system (usually accessible by pressing Ctrl - Alt - F8 ). You can also access remote desktop sessions using a nested X11 server, which opens the remote desktop as a window in your current X11 session. Xnest allows users to open a remote desktop nested within their local X11 session. For example, run Xnest using the following command, replacing address with the host name of the remote X11 server: For more information about XDMCP, see the X Window System documentation at http://www.x.org/releases/X11R7.6/doc/libXdmcp/xdmcp.html . 19.2.2. Is Your System Displaying Signal 11 Errors? A signal 11 error, commonly known as a segmentation fault , means that a program accessed a memory location that was not assigned to it. A signal 11 error can occur due to a bug in one of the software programs that is installed, or faulty hardware. If you receive a fatal signal 11 error during the installation, first make sure you are using the most recent installation images, and let Anaconda verify them to make sure they are not corrupted. Bad installation media (such as an improperly burned or scratched optical disk) are a common cause of signal 11 errors. Verifying the integrity of the installation media is recommended before every installation. For information about obtaining the most recent installation media, see Chapter 2, Downloading Red Hat Enterprise Linux . To perform a media check before the installation starts, append the rd.live.check boot option at the boot menu. See Section 23.2.2, "Verifying Boot Media" for details. Other possible causes are beyond this document's scope. Consult your hardware manufacturer's documentation for more information.
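For reference, after you complete the edit in the procedure above, the relevant part of /etc/gdm/custom.conf should look similar to this minimal sketch: [xdmcp] Enable=true You can then confirm that the display manager restarted cleanly by running systemctl status gdm.service as root.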
[ "Enable=true", "systemctl restart gdm.service", "X :1 -query address", "Xnest :1 -query address" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-trouble-after-s390
Chapter 4. RHV for IBM Power
Chapter 4. RHV for IBM Power This release supports Red Hat Enterprise Linux 8 hosts on IBM POWER8 little endian hardware and Red Hat Enterprise Linux 7 and 8 virtual machines on emulated IBM POWER8 hardware. From Red Hat Virtualization 4.2.6, Red Hat Enterprise Linux hosts are supported on IBM POWER9 little endian hardware and Red Hat Enterprise Linux 8 virtual machines on emulated IBM POWER9 hardware. Important Previous releases of RHV for IBM Power required Red Hat Enterprise Linux hosts on POWER8 hardware to be installed from an ISO image. These hosts cannot be updated for use with this release. You must reinstall Red Hat Enterprise Linux 8 hosts using the repositories outlined below. The packages provided in the following repositories are required to install and configure aspects of a Red Hat Virtualization environment on POWER8 hardware. Table 4.1. Required Subscriptions and Repositories for IBM POWER8, little endian hardware Component Subscription Pool Repository Name Repository Label Details Red Hat Virtualization Manager Red Hat Virtualization for IBM Power Red Hat Virtualization for IBM Power rhv-4-mgmt-agent-for-rhel-8-ppc64le-rpms Provides the Red Hat Virtualization Manager for use with IBM POWER8 hosts. The Manager itself must be installed on x86_64 architecture. Red Hat Enterprise Linux 8 hosts, little endian Red Hat Enterprise Linux for Power, little endian RHV Management Agent for IBM Power, little endian rhv-4-mgmt-agent-for-rhel-8-ppc64le-rpms Provides the QEMU and KVM packages required for using Red Hat Enterprise Linux 8 servers on IBM Power (little endian) hardware as virtualization hosts. Provides additional packages required for using Red Hat Enterprise Linux 8 servers on IBM Power (little endian) hardware as virtualization hosts. Red Hat Enterprise Linux 8 virtual machines, big endian Red Hat Enterprise Linux for Power, big endian RHV Tools for IBM Power rhv-4-tools-for-rhel-8-ppc64le-rpms Provides the ovirt-guest-agent-common package for Red Hat Enterprise Linux 8 virtual machines on emulated IBM Power (big endian) hardware. The guest agents allow you to monitor virtual machine resources on Red Hat Enterprise Linux 8 clients. Red Hat Enterprise Linux 8 virtual machines, little endian Red Hat Enterprise Linux for Power, little endian RHV Tools for IBM Power, little endian rhv-4-tools-for-rhel-8-ppc64le-rpms Provides the ovirt-guest-agent-common package for Red Hat Enterprise Linux 8 virtual machines on emulated IBM Power (little endian) hardware. The guest agents allow you to monitor virtual machine resources on Red Hat Enterprise Linux 8 clients. Table 4.2. Required Subscriptions and Repositories for IBM POWER9, little endian hardware Component Subscription Pool Repository Name Repository Label Details Red Hat Virtualization Manager Red Hat Virtualization for IBM Power Red Hat Virtualization for IBM Power rhv-4-mgmt-agent-for-rhel-8-ppc64le-rpms Provides the Red Hat Virtualization Manager for use with IBM POWER9 hosts. The Manager itself must be installed on x86_64 architecture. Red Hat Enterprise Linux 8 hosts, little endian Red Hat Enterprise Linux for Power, little endian RHV Management Agent for IBM Power, little endian rhv-4-mgmt-agent-for-rhel-8-ppc64le-rpms Provides the QEMU and KVM packages required for using Red Hat Enterprise Linux 8 servers on IBM Power (little endian) hardware as virtualization hosts.
Red Hat Enterprise Linux 8 hosts, little endian Red Hat Enterprise Linux for Power, little endian Red Hat Enterprise Linux for IBM Power, little endian rhel-8-for-ppc64le-baseos-rpms rhel-8-for-ppc64le-appstream-rpms Provides additional packages required for using Red Hat Enterprise Linux 8 servers on IBM Power (little endian) hardware as virtualization hosts. Red Hat Enterprise Linux 8 virtual machines, big endian Red Hat Enterprise Linux for Power, big endian RHV Tools for IBM Power rhv-4-tools-for-rhel-8-ppc64le-rpms Provides the ovirt-guest-agent-common package for Red Hat Enterprise Linux 8 virtual machines on emulated IBM Power (big endian) hardware. The guest agents allow you to monitor virtual machine resources on Red Hat Enterprise Linux 8 clients. Red Hat Enterprise Linux 8 virtual machines, little endian Red Hat Enterprise Linux for Power, little endian RHV Tools for IBM Power, little endian rhv-4-tools-for-rhel-8-ppc64le-rpms Provides the ovirt-guest-agent-common package for Red Hat Enterprise Linux 8 virtual machines on emulated IBM Power (little endian) hardware. The guest agents allow you to monitor virtual machine resources on Red Hat Enterprise Linux 8 clients. Note If the virtual machine fails to boot on IBM POWER9, it might be because of the risk level setting on your firmware. To resolve this issue, see the Troubleshooting scenarios in Starting a virtual machine . Unsupported Features for IBM POWER The following Red Hat Virtualization features are not supported: SPICE display SmartCard Sound device Guest SSO Integration with OpenStack Networking (Neutron), OpenStack Image (Glance), and OpenStack Volume (Cinder) Self-hosted engine Red Hat Virtualization Host (RHVH) Disk Block Alignment For a full list of bugs that affect the RHV for IBM Power release, see Red Hat Private BZ#1444027.
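For example, on a Red Hat Enterprise Linux 8 host on IBM Power (little endian) hardware, you would typically enable the repositories listed in the tables above with a command similar to the following sketch: subscription-manager repos --enable=rhv-4-mgmt-agent-for-rhel-8-ppc64le-rpms --enable=rhel-8-for-ppc64le-baseos-rpms --enable=rhel-8-for-ppc64le-appstream-rpms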
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/release_notes/chap-rhev_for_ibm_power
Chapter 44. Adding host entries from IdM Web UI
Chapter 44. Adding host entries from IdM Web UI This chapter introduces hosts in Identity Management (IdM) and the operation of adding a host entry in the IdM Web UI. 44.1. Hosts in IdM Identity Management (IdM) manages these identities: Users Services Hosts A host represents a machine. As an IdM identity, a host has an entry in the IdM LDAP, that is the 389 Directory Server instance of the IdM server. The host entry in IdM LDAP is used to establish relationships between other hosts and even services within the domain. These relationships are part of delegating authorization and control to hosts within the domain. Any host can be used in host-based access control (HBAC) rules. IdM domain establishes a commonality between machines, with common identity information, common policies, and shared services. Any machine that belongs to a domain functions as a client of the domain, which means it uses the services that the domain provides. IdM domain provides three main services specifically for machines: DNS Kerberos Certificate management Hosts in IdM are closely connected with the services running on them: Service entries are associated with a host. A host stores both the host and the service Kerberos principals. 44.2. Host enrollment This section describes enrolling hosts as IdM clients and what happens during and after the enrollment. The section compares the enrollment of IdM hosts and IdM users. The section also outlines alternative types of authentication available to hosts. Enrolling a host consists of: Creating a host entry in IdM LDAP: possibly using the ipa host-add command in IdM CLI, or the equivalent IdM Web UI operation . Configuring IdM services on the host, for example the System Security Services Daemon (SSSD), Kerberos, and certmonger, and joining the host to the IdM domain. The two actions can be performed separately or together. If performed separately, they allow for dividing the two tasks between two users with different levels of privilege. This is useful for bulk deployments. The ipa-client-install command can perform the two actions together. The command creates a host entry in IdM LDAP if that entry does not exist yet, and configures both the Kerberos and SSSD services for the host. The command brings the host within the IdM domain and allows it to identify the IdM server it will connect to. If the host belongs to a DNS zone managed by IdM, ipa-client-install adds DNS records for the host too. The command must be run on the client. 44.3. User privileges required for host enrollment The host enrollment operation requires authentication to prevent an unprivileged user from adding unwanted machines to the IdM domain. The privileges required depend on several factors, for example: If a host entry is created separately from running ipa-client-install If a one-time password (OTP) is used for enrollment User privileges for optionally manually creating a host entry in IdM LDAP The user privilege required for creating a host entry in IdM LDAP using the ipa host-add CLI command or the IdM Web UI is Host Administrators . The Host Administrators privilege can be obtained through the IT Specialist role. User privileges for joining the client to the IdM domain Hosts are configured as IdM clients during the execution of the ipa-client-install command. The level of credentials required for executing the ipa-client-install command depends on which of the following enrolling scenarios you find yourself in: The host entry in IdM LDAP does not exist. 
For this scenario, you need a full administrator's credentials or the Host Administrators role. A full administrator is a member of the admins group. The Host Administrators role provides privileges to add hosts and enroll hosts. For details about this scenario, see Installing a client using user credentials: interactive installation . The host entry in IdM LDAP exists. For this scenario, you need a limited administrator's credentials to execute ipa-client-install successfully. The limited administrator in this case has the Enrollment Administrator role, which provides the Host Enrollment privilege. For details, see Installing a client using user credentials: interactive installation . The host entry in IdM LDAP exists, and an OTP has been generated for the host by a full or limited administrator. For this scenario, you can install an IdM client as an ordinary user if you run the ipa-client-install command with the --password option, supplying the correct OTP. For details, see Installing a client by using a one-time password: Interactive installation . After enrollment, IdM hosts authenticate every new session to be able to access IdM resources. Machine authentication is required for the IdM server to trust the machine and to accept IdM connections from the client software installed on that machine. After authenticating the client, the IdM server can respond to its requests. 44.4. Enrollment and authentication of IdM hosts and users: comparison There are many similarities between users and hosts in IdM, some of which can be observed during the enrollment stage as well as those that concern authentication during the deployment stage. The enrollment stage ( User and host enrollment ): An administrator can create an LDAP entry for both a user and a host before the user or host actually join IdM: for the stage user, the command is ipa stageuser-add ; for the host, the command is ipa host-add . A file containing a key table or, abbreviated, keytab, a symmetric key resembling to some extent a user password, is created during the execution of the ipa-client-install command on the host, resulting in the host joining the IdM realm. Analogously, a user is asked to create a password when they activate their account, therefore joining the IdM realm. While the user password is the default authentication method for a user, the keytab is the default authentication method for a host. The keytab is stored in a file on the host. Table 44.1. User and host enrollment Action User Host Pre-enrollment USD ipa stageuser-add user_name [--password] USD ipa host-add host_name [--random] Activating the account USD ipa stageuser-activate user_name USD ipa-client-install [--password] (must be run on the host itself) The deployment stage ( User and host session authentication ): When a user starts a new session, the user authenticates using a password; similarly, every time it is switched on, the host authenticates by presenting its keytab file. The System Security Services Daemon (SSSD) manages this process in the background. If the authentication is successful, the user or host obtains a Kerberos ticket granting ticket (TGT). The TGT is then used to obtain specific tickets for specific services. Table 44.2.
User and host session authentication User Host Default means of authentication Password Keytabs Starting a session (ordinary user) USD kinit user_name [switch on the host] The result of successful authentication TGT to be used to obtain access to specific services TGT to be used to obtain access to specific services TGTs and other Kerberos tickets are generated as part of the Kerberos services and policies defined by the server. The initial granting of a Kerberos ticket, the renewing of the Kerberos credentials, and even the destroying of the Kerberos session are all handled automatically by the IdM services. Alternative authentication options for IdM hosts Apart from keytabs, IdM supports two other types of machine authentication: SSH keys. The SSH public key for the host is created and uploaded to the host entry. From there, the System Security Services Daemon (SSSD) uses IdM as an identity provider and can work in conjunction with OpenSSH and other services to reference the public keys located centrally in IdM. Machine certificates. In this case, the machine uses an SSL certificate that is issued by the IdM server's certificate authority and then stored in IdM's Directory Server. The certificate is then sent to the machine to present when it authenticates to the server. On the client, certificates are managed by a service called certmonger . 44.5. Host entry in IdM LDAP An Identity Management (IdM) host entry contains information about the host and the attributes it can include. An LDAP host entry contains all relevant information about the client within IdM: Service entries associated with the host The host and service principal Access control rules Machine information, such as its physical location and operating system Note Note that the IdM Web UI Identity Hosts tab does not show all the information about a particular host stored in the IdM LDAP. Host entry configuration properties A host entry can contain information about the host that is outside its system configuration, such as its physical location, MAC address, keys, and certificates. This information can be set when the host entry is created if it is created manually. Alternatively, most of this information can be added to the host entry after the host is enrolled in the domain. Table 44.3. Host Configuration Properties UI Field Command-Line Option Description Description --desc = description A description of the host. Locality --locality = locality The geographic location of the host. Location --location = location The physical location of the host, such as its data center rack. Platform --platform = string The host hardware or architecture. Operating system --os = string The operating system and version for the host. MAC address --macaddress = address The MAC address for the host. This is a multi-valued attribute. The MAC address is used by the NIS plug-in to create a NIS ethers map for the host. SSH public keys --sshpubkey = string The full SSH public key for the host. This is a multi-valued attribute, so multiple keys can be set. Principal name (not editable) --principalname = principal The Kerberos principal name for the host. This defaults to the host name during the client installation, unless a different principal is explicitly set in the -p option. This can be changed using the command-line tools, but cannot be changed in the UI. Set One-Time Password --password = string This option sets a password for the host which can be used in bulk enrollment.
- --random This option generates a random password to be used in bulk enrollment. - --certificate = string A certificate blob for the host. - --updatedns This sets whether the host can dynamically update its DNS entries if its IP address changes. 44.6. Adding host entries from the Web UI Open the Identity tab, and select the Hosts subtab. Click Add at the top of the hosts list. Figure 44.1. Adding Host Entries Enter the machine name and select the domain from the configured zones in the drop-down list. If the host has already been assigned a static IP address, then include that with the host entry so that the DNS entry is fully created. The Class field has no specific purpose at the moment. Figure 44.2. Add Host Wizard DNS zones can be created in IdM. If the IdM server does not manage the DNS server, the zone can be entered manually in the menu area, like a regular text field. Note Select the Force check box if you want to skip checking whether the host is resolvable via DNS. Click the Add and Edit button to go directly to the expanded entry page and enter more attribute information. Information about the host hardware and physical location can be included with the host entry. Figure 44.3. Expanded Entry Page
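For example, the command-line equivalent of the Web UI procedure above, using a hypothetical host name and the attributes described in Table 44.3, might look like the following: USD ipa host-add client.example.com --location="datacenter rack 12" --random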
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/adding-hosts-ui_configuring-and-managing-idm
Chapter 5. Downloading PDF policy reports
Chapter 5. Downloading PDF policy reports The Insights for Red Hat Enterprise Linux compliance service enables you to create PDF reports for individual policies to share with stakeholders, such as your compliance team or auditors. Reports include the following information: Policy details: policy type, operating system, compliance threshold, and business objective. Percentage of systems in compliance with the policy. Numbers of non-compliant and compliant systems. Non-compliant system information. You can choose to include compliant systems when creating the report. List of top 10 failed rules: the most severe failed rules, along with the greatest number of failed systems for each rule, are ranked at the top. 5.1. Creating a PDF report for a policy Perform the following steps to download a PDF report for a security policy. Prerequisites You must be logged in to the Red Hat Hybrid Cloud Console. Policy reports provide point-in-time reporting. Red Hat recommends uploading your latest system data to Insights for Red Hat Enterprise Linux before creating a policy report in the compliance service. Procedure Optionally run insights-client --compliance on your systems to scan them and upload current data to the compliance service. Navigate to Security > Compliance > Reports . Locate the policy for which to create the report. Click the download icon on the far right of the same row as the policy name. Note You can also click on the policy name and click Download PDF in the upper right of the page. In the Compliance report modal dialog, make selections for system data to include. Make selections for rule data to include. Optionally, add user notes. Click Export report .
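For example, on each system you want reflected in the report, you can refresh the compliance data from the command line before exporting, as mentioned in the procedure above: insights-client --compliance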
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_compliance_service_reports_with_fedramp/assembly-compl-policy-reports
Chapter 17. IPPool [whereabouts.cni.cncf.io/v1alpha1]
Chapter 17. IPPool [whereabouts.cni.cncf.io/v1alpha1] Description IPPool is the Schema for the ippools API Type object 17.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object IPPoolSpec defines the desired state of IPPool 17.1.1. .spec Description IPPoolSpec defines the desired state of IPPool Type object Required allocations range Property Type Description allocations object Allocations is the set of allocated IPs for the given range. Its indices are a direct mapping to the IP with the same index/offset for the pool's range. allocations{} object IPAllocation represents metadata about the pod/container owner of a specific IP range string Range is an RFC 4632/4291-style string that represents an IP address and prefix length in CIDR notation 17.1.2. .spec.allocations Description Allocations is the set of allocated IPs for the given range. Its indices are a direct mapping to the IP with the same index/offset for the pool's range. Type object 17.1.3. .spec.allocations{} Description IPAllocation represents metadata about the pod/container owner of a specific IP Type object Required id podref Property Type Description id string ifname string podref string 17.2. API endpoints The following API endpoints are available: /apis/whereabouts.cni.cncf.io/v1alpha1/ippools GET : list objects of kind IPPool /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/ippools DELETE : delete collection of IPPool GET : list objects of kind IPPool POST : create an IPPool /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/ippools/{name} DELETE : delete an IPPool GET : read the specified IPPool PATCH : partially update the specified IPPool PUT : replace the specified IPPool 17.2.1. /apis/whereabouts.cni.cncf.io/v1alpha1/ippools HTTP method GET Description list objects of kind IPPool Table 17.1. HTTP responses HTTP code Response body 200 - OK IPPoolList schema 401 - Unauthorized Empty 17.2.2. /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/ippools HTTP method DELETE Description delete collection of IPPool Table 17.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind IPPool Table 17.3. HTTP responses HTTP code Response body 200 - OK IPPoolList schema 401 - Unauthorized Empty HTTP method POST Description create an IPPool Table 17.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.5. Body parameters Parameter Type Description body IPPool schema Table 17.6. HTTP responses HTTP code Response body 200 - OK IPPool schema 201 - Created IPPool schema 202 - Accepted IPPool schema 401 - Unauthorized Empty 17.2.3. /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/ippools/{name} Table 17.7. Global path parameters Parameter Type Description name string name of the IPPool HTTP method DELETE Description delete an IPPool Table 17.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 17.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified IPPool Table 17.10. HTTP responses HTTP code Response body 200 - OK IPPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified IPPool Table 17.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.12.
HTTP responses HTTP code Response body 200 - OK IPPool schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified IPPool Table 17.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.14. Body parameters Parameter Type Description body IPPool schema Table 17.15. HTTP responses HTTP code Response body 200 - OK IPPool schema 201 - Created IPPool schema 401 - Unauthorized Empty
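As a usage sketch, assuming the plural resource name ippools implied by the endpoint paths above, you can list IPPool objects across all namespaces with the OpenShift CLI, which exercises the GET endpoints described in this chapter: USD oc get ippools.whereabouts.cni.cncf.io --all-namespaces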
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_apis/ippool-whereabouts-cni-cncf-io-v1alpha1
14.7.12. Triggering a Reset for a Node
14.7.12. Triggering a Reset for a Node The nodedev-reset nodedev command triggers a device reset for the specified nodedev. Running this command is useful prior to transferring a node device between guest virtual machine passthrough and the host physical machine. libvirt performs this action implicitly when required, but this command allows an explicit reset when needed.
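For example, to explicitly reset a hypothetical PCI device before assigning it to a guest virtual machine (the device name shown is illustrative; use the nodedev-list command to find the device names on your system):
# virsh nodedev-reset pci_0000_01_00_0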
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-numa_node_management-triggering_a_reset_for_a_node
Chapter 80. Openshift Build Config
Chapter 80. Openshift Build Config Since Camel 2.17 Only producer is supported The OpenShift Build Config component is one of the Kubernetes Components which provides a producer to execute Openshift Build Configs operations. 80.1. Dependencies When using openshift-build-configs with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 80.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 80.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you often need to configure only a few options on a component, or none at all. Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code. 80.2.2. Configuring Endpoint Options You do most of your configuration on endpoints, as endpoints often have many options, which allow you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, giving you more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 80.3. Component Options The Openshift Build Config component supports 3 options, which are listed below. Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean
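As a sketch of component-level configuration, the options above can be set in a Spring Boot application.properties file by using the property names listed in Section 80.8; the values shown are illustrative:
# Illustrative component-level settings for the Openshift Build Config component
camel.component.openshift-build-configs.autowired-enabled=true
camel.component.openshift-build-configs.lazy-start-producer=false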
80.4. Endpoint Options The Openshift Build Config endpoint is configured using URI syntax: openshift-build-configs:masterUrl with the following path and query parameters: 80.4.1. Path Parameters (1 parameter) Name Description Default Type masterUrl (producer) Required Kubernetes Master url. String 80.4.2. Query Parameters (21 parameters) Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 80.5. Message Headers The Openshift Build Config component supports 4 message headers, which are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesBuildConfigsLabels (producer) Constant: KUBERNETES_BUILD_CONFIGS_LABELS The Openshift Config Build labels. Map CamelKubernetesBuildConfigName (producer) Constant: KUBERNETES_BUILD_CONFIG_NAME The Openshift Config Build name. String 80.6. Supported producer operations listBuildConfigs listBuildConfigsByLabels getBuildConfig 80.7. Openshift Build Configs Producer Examples listBuildConfigs: this operation lists the build configs on an Openshift cluster. from("direct:list"). toF("openshift-build-configs:///?kubernetesClient=#kubernetesClient&operation=listBuildConfigs"). to("mock:result"); This operation returns a list of build configs from your Openshift cluster. listBuildConfigsByLabels: this operation lists the build configs by labels on an Openshift cluster.
from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_BUILD_CONFIGS_LABELS, labels); } }); toF("openshift-build-configs:///?kubernetesClient=#kubernetesClient&operation=listBuildConfigsByLabels"). to("mock:result"); This operation returns a List of Build configs from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 80.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. 
Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "openshift-build-configs:masterUrl", "from(\"direct:list\"). toF(\"openshift-build-configs:///?kubernetesClient=#kubernetesClient&operation=listBuildConfigs\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_BUILD_CONFIGS_LABELS, labels); } }); toF(\"openshift-build-configs:///?kubernetesClient=#kubernetesClient&operation=listBuildConfigsByLabels\"). to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-openshift-build-config-component-starter
Chapter 1. High level overview of Debezium
Chapter 1. High level overview of Debezium Debezium is a set of distributed services that capture changes in your databases. Your applications can consume and respond to those changes. Debezium captures each row-level change in each database table in a change event record and streams these records to Kafka topics. Applications read these streams, which provide the change event records in the same order in which they were generated. More details are in the following sections: Section 1.1, "Debezium Features" Section 1.2, "Description of Debezium architecture" 1.1. Debezium Features Debezium is a set of source connectors for Apache Kafka Connect. Each connector ingests changes from a different database by using that database's features for change data capture (CDC). Unlike other approaches, such as polling or dual writes, log-based CDC as implemented by Debezium: Ensures that all data changes are captured. Produces change events with a very low delay while avoiding increased CPU usage required for frequent polling. For example, for MySQL or PostgreSQL, the delay is in the millisecond range. Requires no changes to your data model, such as a "Last Updated" column. Can capture deletes. Can capture old record state and additional metadata such as transaction ID and the causing query, depending on the database's capabilities and configuration. Five Advantages of Log-Based Change Data Capture is a blog post that provides more details. Debezium connectors capture data changes with a range of related capabilities and options: Snapshots: optionally, an initial snapshot of a database's current state can be taken if a connector is started and not all logs still exist. Typically, this is the case when the database has been running for some time and has discarded transaction logs that are no longer needed for transaction recovery or replication. There are different modes for performing snapshots, including support for incremental snapshots, which can be triggered at connector runtime. For more details, see the documentation for the connector that you are using. Filters: you can configure the set of captured schemas, tables and columns with include/exclude list filters. Masking: the values from specific columns can be masked, for example, when they contain sensitive data. Monitoring: most connectors can be monitored by using JMX. Ready-to-use single message transformations (SMTs) for message routing, filtering, event flattening, and more. For more information about the SMTs that Debezium provides, see Applying transformations to modify messages exchanged with Apache Kafka. The documentation for each connector provides details about the connector's features and configuration options. 1.2. Description of Debezium architecture You deploy Debezium by means of Apache Kafka Connect. Kafka Connect is a framework and runtime for implementing and operating: Source connectors such as Debezium that send records into Kafka Sink connectors that propagate records from Kafka topics to other systems The following image shows the architecture of a change data capture pipeline based on Debezium: As shown in the image, the Debezium connectors for MySQL and PostgreSQL are deployed to capture changes to these two types of databases. Each Debezium connector establishes a connection to its source database: The MySQL connector uses a client library for accessing the binlog. The PostgreSQL connector reads from a logical replication stream. Kafka Connect operates as a separate service alongside the Kafka broker.
By default, changes from one database table are written to a Kafka topic whose name corresponds to the table name. If needed, you can adjust the destination topic name by configuring Debezium's topic routing transformation. For example, you can: Route records to a topic whose name is different from the table's name Stream change event records for multiple tables into a single topic After change event records are in Apache Kafka, different connectors in the Kafka Connect ecosystem can stream the records to other systems and databases such as Elasticsearch, data warehouses and analytics systems, or caches such as Infinispan. Depending on the chosen sink connector, you might need to configure Debezium's new record state extraction transformation. This Kafka Connect SMT propagates the after structure from Debezium's change event to the sink connector. This is in place of the verbose change event record that is propagated by default.
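As an illustration, the topic routing SMT can be enabled in a connector configuration along these lines; the transform name, regular expression, and replacement are placeholder values to adapt to your own table naming:
# Hypothetical routing of per-shard customer topics into one logical topic
transforms=Reroute
transforms.Reroute.type=io.debezium.transforms.ByLogicalTableRouter
transforms.Reroute.topic.regex=(.*)customers_shard(.*)
transforms.Reroute.topic.replacement=$1customers_all_shards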
null
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/debezium_user_guide/high-level-overview-of-debezium
18.11. Security Policy
18.11. Security Policy The Security Policy spoke allows you to configure the installed system following restrictions and recommendations (compliance policies) defined by the Security Content Automation Protocol (SCAP) standard. This functionality is provided by an add-on which has been enabled by default since Red Hat Enterprise Linux 7.2. When enabled, the packages necessary to provide this functionality will automatically be installed. However, by default, no policies are enforced, meaning that no checks are performed during or after installation unless specifically configured. The Red Hat Enterprise Linux 7 Security Guide provides detailed information about security compliance including background information, practical examples, and additional resources. Important Applying a security policy is not necessary on all systems. This screen should only be used when a specific policy is mandated by your organization rules or government regulations. If you apply a security policy to the system, it will be installed using restrictions and recommendations defined in the selected profile. The openscap-scanner package will also be added to your package selection, providing a preinstalled tool for compliance and vulnerability scanning. After the installation finishes, the system will be automatically scanned to verify compliance. The results of this scan will be saved to the /root/openscap_data directory on the installed system. Pre-defined policies which are available in this screen are provided by SCAP Security Guide. See the OpenSCAP Portal for links to detailed information about each available profile. You can also load additional profiles from an HTTP, HTTPS or FTP server. Figure 18.7. Security policy selection screen To configure the use of security policies on the system, first enable configuration by setting the Apply security policy switch to ON. If the switch is in the OFF position, controls in the rest of this screen have no effect. After enabling security policy configuration using the switch, select one of the profiles listed in the top window of the screen, and click the Select profile button below. When a profile is selected, a green check mark will appear on the right side, and the bottom field will display whether any changes will be made before beginning the installation. Note None of the profiles available by default perform any changes before the installation begins. However, loading a custom profile as described below can require some pre-installation actions. To use a custom profile, click the Change content button in the top left corner. This will open another screen where you can enter the URL of valid security content. To go back to the default security content selection screen, click Use SCAP Security Guide in the top left corner. Custom profiles can be loaded from an HTTP, HTTPS or FTP server. Use the full address of the content, including the protocol (such as http://). A network connection must be active (enabled in Section 18.13, "Network & Hostname") before you can load a custom profile. The content type will be detected automatically by the installer. After you select a profile, or if you want to leave the screen, click Done in the top left corner to return to Section 18.7, "The Installation Summary Screen".
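You can also rerun a compliance scan manually after installation by using the oscap tool from the openscap-scanner package. The following is a minimal sketch, assuming the SCAP Security Guide data stream is installed at its usual location; the profile ID and results path are illustrative and must match a profile available in the content:
# oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_pci-dss --results /root/openscap_data/results.xml /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml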
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-security-policy-s390
Managing configurations using Puppet integration
Managing configurations using Puppet integration Red Hat Satellite 6.15 Configure Puppet integration in Satellite and use Puppet classes to configure your hosts Red Hat Satellite Documentation Team
[ "satellite-installer --enable-foreman-cli-puppet --enable-foreman-plugin-puppet --enable-puppet --foreman-proxy-puppet true --foreman-proxy-puppetca true --puppet-server true", "satellite-installer --enable-puppet --foreman-proxy-puppet true --foreman-proxy-puppetca true --puppet-server true", "<%= snippet 'puppet_setup' %>", "dnf install puppet-agent", "yum install puppet-agent", ". /etc/profile.d/puppet-agent.sh", "puppet config set server satellite.example.com --section agent puppet config set environment My_Puppet_Environment --section agent", "puppet resource service puppet ensure=running enable=true", "puppet ssl bootstrap", "puppet ssl bootstrap", "satellite-maintain plugin purge-puppet --remove-all-data", "satellite-maintain plugin purge-puppet --remove-all-data", "puppet module install puppetlabs-ntp -i /etc/puppetlabs/code/environments/production/modules", "Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules Notice: Created target directory /etc/puppetlabs/code/environments/production/modules Notice: Downloading from https://forgeapi.puppet.com Notice: Installing -- do not interrupt /etc/puppetlabs/code/environments/production/modules |-| puppetlabs-ntp (v8.3.0) |-- puppetlabs-stdlib (v4.25.1) [/etc/puppetlabs/code/environments/production/modules]", "puppet config print modulepath", "/etc/puppetlabs/code/environments/production/modules:/etc/puppetlabs/code/environments/common:/etc/puppetlabs/code/modules:/opt/puppetlabs/puppet/modules:/usr/share/puppet/modules", "puppet module upgrade module name", "Successfully updated environments and Puppet classes from the on-disk Puppet installation", "[\"0.de.pool.ntp.org\",\"1.de.pool.ntp.org\",\"2.de.pool.ntp.org\",\"3.de.pool.ntp.org\"]", "- 0.de.pool.ntp.org - 1.de.pool.ntp.org - 2.de.pool.ntp.org - 3.de.pool.ntp.org", "--- parameters: // shortened YAML output classes: ntp: servers: '[\"0.de.pool.ntp.org\",\"1.de.pool.ntp.org\",\"2.de.pool.ntp.org\",\"3.de.pool.ntp.org\"]' environment: production", "puppet agent -t", "cat /etc/ntp.conf", "ntp.conf: Managed by puppet. server 0.de.pool.ntp.org server 1.de.pool.ntp.org server 2.de.pool.ntp.org server 3.de.pool.ntp.org" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html-single/managing_configurations_using_puppet_integration/index
Chapter 20. consistency
Chapter 20. consistency This chapter describes the commands under the consistency command. 20.1. consistency group add volume Add volume(s) to consistency group Usage: Table 20.1. Positional arguments Value Summary <consistency-group> Consistency group to contain <volume> (name or id) <volume> Volume(s) to add to <consistency-group> (name or id) (repeat option to add multiple volumes) Table 20.2. Command arguments Value Summary -h, --help Show this help message and exit 20.2. consistency group create Create new consistency group. Usage: Table 20.3. Positional arguments Value Summary <name> Name of new consistency group (default to none) Table 20.4. Command arguments Value Summary -h, --help Show this help message and exit --volume-type <volume-type> Volume type of this consistency group (name or id) --consistency-group-source <consistency-group> Existing consistency group (name or id) --consistency-group-snapshot <consistency-group-snapshot> Existing consistency group snapshot (name or id) --description <description> Description of this consistency group --availability-zone <availability-zone> Availability zone for this consistency group (not available if creating consistency group from source) Table 20.5. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 20.6. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 20.7. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 20.8. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 20.3. consistency group delete Delete consistency group(s). Usage: Table 20.9. Positional arguments Value Summary <consistency-group> Consistency group(s) to delete (name or id) Table 20.10. Command arguments Value Summary -h, --help Show this help message and exit --force Allow delete in state other than error or available 20.4. consistency group list List consistency groups. Usage: Table 20.11. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show details for all projects. Admin only. (defaults to False) --long List additional fields in output Table 20.12. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 20.13. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 20.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 20.15.
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 20.5. consistency group remove volume Remove volume(s) from consistency group Usage: Table 20.16. Positional arguments Value Summary <consistency-group> Consistency group containing <volume> (name or id) <volume> Volume(s) to remove from <consistency-group> (name or ID) (repeat option to remove multiple volumes) Table 20.17. Command arguments Value Summary -h, --help Show this help message and exit 20.6. consistency group set Set consistency group properties Usage: Table 20.18. Positional arguments Value Summary <consistency-group> Consistency group to modify (name or id) Table 20.19. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New consistency group name --description <description> New consistency group description 20.7. consistency group show Display consistency group details. Usage: Table 20.20. Positional arguments Value Summary <consistency-group> Consistency group to display (name or id) Table 20.21. Command arguments Value Summary -h, --help Show this help message and exit Table 20.22. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 20.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 20.24. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 20.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 20.8. consistency group snapshot create Create new consistency group snapshot. Usage: Table 20.26. Positional arguments Value Summary <snapshot-name> Name of new consistency group snapshot (default to None) Table 20.27. Command arguments Value Summary -h, --help Show this help message and exit --consistency-group <consistency-group> Consistency group to snapshot (name or id) (default to be the same as <snapshot-name>) --description <description> Description of this consistency group snapshot Table 20.28. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 20.29. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 20.30. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 20.31. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. 
--fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 20.9. consistency group snapshot delete Delete consistency group snapshot(s). Usage: Table 20.32. Positional arguments Value Summary <consistency-group-snapshot> Consistency group snapshot(s) to delete (name or id) Table 20.33. Command arguments Value Summary -h, --help Show this help message and exit 20.10. consistency group snapshot list List consistency group snapshots. Usage: Table 20.34. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show detail for all projects (admin only) (defaults to False) --long List additional fields in output --status <status> Filters results by a status ("available", "error", "creating", "deleting" or "error_deleting") --consistency-group <consistency-group> Filters results by a consistency group (name or id) Table 20.35. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 20.36. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 20.37. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 20.38. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 20.11. consistency group snapshot show Display consistency group snapshot details Usage: Table 20.39. Positional arguments Value Summary <consistency-group-snapshot> Consistency group snapshot to display (name or id) Table 20.40. Command arguments Value Summary -h, --help Show this help message and exit Table 20.41. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 20.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 20.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 20.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
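The reference tables above describe each subcommand in isolation; the following is a minimal sketch of a typical lifecycle that ties them together. All object names (mygroup, vol1, mygroup-snap1) and the volume type lvm-type are hypothetical placeholders — substitute objects that exist in your deployment.

# Create a consistency group from an existing volume type (names are illustrative)
openstack consistency group create --volume-type lvm-type --description "demo group" mygroup
# Add an existing volume to the group (repeat the volume argument to add several)
openstack consistency group add volume mygroup vol1
# Take a snapshot of the whole group, then list snapshots for it
openstack consistency group snapshot create --consistency-group mygroup mygroup-snap1
openstack consistency group snapshot list --consistency-group mygroup
# Remove the volume and delete the group when finished
openstack consistency group remove volume mygroup vol1
openstack consistency group delete mygroup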
[ "openstack consistency group add volume [-h] <consistency-group> <volume> [<volume> ...]", "openstack consistency group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] (--volume-type <volume-type> | --consistency-group-source <consistency-group> | --consistency-group-snapshot <consistency-group-snapshot>) [--description <description>] [--availability-zone <availability-zone>] [<name>]", "openstack consistency group delete [-h] [--force] <consistency-group> [<consistency-group> ...]", "openstack consistency group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects] [--long]", "openstack consistency group remove volume [-h] <consistency-group> <volume> [<volume> ...]", "openstack consistency group set [-h] [--name <name>] [--description <description>] <consistency-group>", "openstack consistency group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <consistency-group>", "openstack consistency group snapshot create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--consistency-group <consistency-group>] [--description <description>] [<snapshot-name>]", "openstack consistency group snapshot delete [-h] <consistency-group-snapshot> [<consistency-group-snapshot> ...]", "openstack consistency group snapshot list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects] [--long] [--status <status>] [--consistency-group <consistency-group>]", "openstack consistency group snapshot show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <consistency-group-snapshot>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/consistency
Chapter 2. Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode
Chapter 2. Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode Deploying OpenShift Data Foundation on OpenShift Container Platform in internal mode using dynamic storage devices provided by Red Hat OpenStack Platform installer-provisioned infrastructure (IPI) enables you to create internal cluster resources. This results in internal provisioning of the base services, which helps to make additional storage classes available to applications. Ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.11 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. 
A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully select a unique path name as the backend path that follows the naming convention, because you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy that restricts users to performing write or delete operations on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Carefully select a unique path name as the backend path that follows the naming convention. You cannot change this path name later. Note Use of Vault namespaces is not supported with the Kubernetes authentication method in OpenShift Data Foundation 4.11. Procedure Create a service account: where <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the previous step to set up the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy that restricts users to performing write or delete operations on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to standard . Click Next . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes.
Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click Next . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select one or both of the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Click Next . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark next to it. To verify that OpenShift Data Foundation is successfully installed, see Verifying your OpenShift Data Foundation installation . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 2.5.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects.
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, "Pods corresponding to OpenShift Data Foundation cluster" . Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Table 2.1. Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . 2.5.4. Verifying that the OpenShift Data Foundation specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io 2.6. Uninstalling OpenShift Data Foundation 2.6.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledge base article on Uninstalling OpenShift Data Foundation .
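The verification steps above use the web console; as a CLI alternative, the following sketch performs roughly equivalent checks. It assumes the default openshift-storage namespace and the default ocs-storagecluster resource name — adjust both if your deployment differs.

# Confirm the operator and storage pods are Running or Completed
oc get pods -n openshift-storage
# Check that the StorageCluster reports the Ready phase
oc get storagecluster ocs-storagecluster -n openshift-storage
# Verify the storage classes created with the cluster
oc get storageclass | grep -E 'ocs-storagecluster-ceph-rbd|ocs-storagecluster-cephfs|openshift-storage.noobaa.io'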
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n openshift-storage create serviceaccount <serviceaccount_name>", "oc -n openshift-storage create serviceaccount odf-vault-auth", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/deploying_openshift_data_foundation_on_red_hat_openstack_platform_in_internal_mode
Chapter 1. Using schema properties to configure custom resources
Chapter 1. Using schema properties to configure custom resources Custom resources offer a flexible way to manage and fine-tune the operation of AMQ Streams components using configuration properties. This reference guide describes common configuration properties that apply to multiple custom resources, as well as the configuration properties available for each custom resource schema available with AMQ Streams. Where appropriate, expanded descriptions of properties and examples of how they are configured are provided. The properties defined for each schema provide a structured and organized way to specify configuration for the custom resources. Whether it's adjusting resource allocation or specifying access controls, the properties in the schemas allow for a granular level of configuration. For example, you can use the properties of the KafkaClusterSpec schema to specify the type of storage for a Kafka cluster or add listeners that provide secure access to Kafka brokers. Some property options within a schema may be constrained, as indicated in the property descriptions. These constraints define specific options or limitations on the values that can be assigned to those properties. Constraints ensure that the custom resources are configured with valid and appropriate values.
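To make this concrete, the following is a minimal sketch of a Kafka custom resource that uses KafkaClusterSpec properties to define persistent storage and a TLS-secured internal listener, applied with a shell heredoc. The cluster name, replica counts, and storage sizes are illustrative only; consult the schema reference for the full property set.

cat <<EOF | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      # Internal listener secured with TLS on port 9093
      - name: tls
        port: 9093
        type: internal
        tls: true
    storage:
      # Persistent storage for the Kafka brokers
      type: persistent-claim
      size: 100Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
  entityOperator:
    topicOperator: {}
    userOperator: {}
EOF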
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/con-schema-reference-intro-reference
Windows Container Support for OpenShift
Windows Container Support for OpenShift OpenShift Container Platform 4.14 Red Hat OpenShift for Windows Containers Guide Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/windows_container_support_for_openshift/index
Chapter 17. Network [config.openshift.io/v1]
Chapter 17. Network [config.openshift.io/v1] Description Network holds cluster-wide information about Network. The canonical name is cluster . It is used to configure the desired network configuration, such as: IP address pools for services/pod IPs, network plugin, etc. Please view network.spec for an explanation on what applies when configuring this resource. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 17.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration. As a general rule, this SHOULD NOT be read directly. Instead, you should consume the NetworkStatus, as it indicates the currently deployed configuration. Currently, most spec fields are immutable after installation. Please view the individual ones for further details on each. status object status holds observed values from the cluster. They may not be overridden. 17.1.1. .spec Description spec holds user settable values for configuration. As a general rule, this SHOULD NOT be read directly. Instead, you should consume the NetworkStatus, as it indicates the currently deployed configuration. Currently, most spec fields are immutable after installation. Please view the individual ones for further details on each. Type object Property Type Description clusterNetwork array IP address pool to use for pod IPs. This field is immutable after installation. clusterNetwork[] object ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. externalIP object externalIP defines configuration for controllers that affect Service.ExternalIP. If nil, then ExternalIP is not allowed to be set. networkType string NetworkType is the plugin that is to be deployed (e.g. OpenShiftSDN). This should match a value that the cluster-network-operator understands, or else no networking will be installed. Currently supported values are: - OpenShiftSDN This field is immutable after installation. serviceNetwork array (string) IP address pool for services. Currently, we only support a single entry here. This field is immutable after installation. serviceNodePortRange string The port range allowed for Services of type NodePort. If not specified, the default of 30000-32767 will be used. Such Services without a NodePort specified will have one automatically allocated from this range. This parameter can be updated after the cluster is installed. 17.1.2. .spec.clusterNetwork Description IP address pool to use for pod IPs. This field is immutable after installation. Type array 17.1.3. .spec.clusterNetwork[] Description ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. 
Type object Property Type Description cidr string The complete block for pod IPs. hostPrefix integer The size (prefix) of block to allocate to each node. If this field is not used by the plugin, it can be left unset. 17.1.4. .spec.externalIP Description externalIP defines configuration for controllers that affect Service.ExternalIP. If nil, then ExternalIP is not allowed to be set. Type object Property Type Description autoAssignCIDRs array (string) autoAssignCIDRs is a list of CIDRs from which to automatically assign Service.ExternalIP. These are assigned when the service is of type LoadBalancer. In general, this is only useful for bare-metal clusters. In Openshift 3.x, this was misleadingly called "IngressIPs". Automatically assigned External IPs are not affected by any ExternalIPPolicy rules. Currently, only one entry may be provided. policy object policy is a set of restrictions applied to the ExternalIP field. If nil or empty, then ExternalIP is not allowed to be set. 17.1.5. .spec.externalIP.policy Description policy is a set of restrictions applied to the ExternalIP field. If nil or empty, then ExternalIP is not allowed to be set. Type object Property Type Description allowedCIDRs array (string) allowedCIDRs is the list of allowed CIDRs. rejectedCIDRs array (string) rejectedCIDRs is the list of disallowed CIDRs. These take precedence over allowedCIDRs. 17.1.6. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description clusterNetwork array IP address pool to use for pod IPs. clusterNetwork[] object ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. clusterNetworkMTU integer ClusterNetworkMTU is the MTU for inter-pod networking. migration object Migration contains the cluster network migration configuration. networkType string NetworkType is the plugin that is deployed (e.g. OpenShiftSDN). serviceNetwork array (string) IP address pool for services. Currently, we only support a single entry here. 17.1.7. .status.clusterNetwork Description IP address pool to use for pod IPs. Type array 17.1.8. .status.clusterNetwork[] Description ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. Type object Property Type Description cidr string The complete block for pod IPs. hostPrefix integer The size (prefix) of block to allocate to each node. If this field is not used by the plugin, it can be left unset. 17.1.9. .status.migration Description Migration contains the cluster network migration configuration. Type object Property Type Description mtu object MTU contains the MTU migration configuration. networkType string NetworkType is the target plugin that is to be deployed. Currently supported values are: OpenShiftSDN, OVNKubernetes 17.1.10. .status.migration.mtu Description MTU contains the MTU migration configuration. Type object Property Type Description machine object Machine contains MTU migration configuration for the machine's uplink. network object Network contains MTU migration configuration for the default network. 17.1.11. .status.migration.mtu.machine Description Machine contains MTU migration configuration for the machine's uplink. Type object Property Type Description from integer From is the MTU to migrate from. to integer To is the MTU to migrate to. 17.1.12. .status.migration.mtu.network Description Network contains MTU migration configuration for the default network. 
Type object Property Type Description from integer From is the MTU to migrate from. to integer To is the MTU to migrate to. 17.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/networks DELETE : delete collection of Network GET : list objects of kind Network POST : create a Network /apis/config.openshift.io/v1/networks/{name} DELETE : delete a Network GET : read the specified Network PATCH : partially update the specified Network PUT : replace the specified Network 17.2.1. /apis/config.openshift.io/v1/networks Table 17.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Network Table 17.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested number of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 17.3. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Network Table 17.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server.
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested number of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent.
The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 17.5. HTTP responses HTTP code Response body 200 - OK NetworkList schema 401 - Unauthorized Empty HTTP method POST Description create a Network Table 17.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.7. Body parameters Parameter Type Description body Network schema Table 17.8. HTTP responses HTTP code Response body 200 - OK Network schema 201 - Created Network schema 202 - Accepted Network schema 401 - Unauthorized Empty 17.2.2. /apis/config.openshift.io/v1/networks/{name} Table 17.9. Global path parameters Parameter Type Description name string name of the Network Table 17.10.
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Network Table 17.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. The value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 17.12. Body parameters Parameter Type Description body DeleteOptions schema Table 17.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Network Table 17.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 17.15. HTTP responses HTTP code Response body 200 - OK Network schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Network Table 17.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 17.17. Body parameters Parameter Type Description body Patch schema Table 17.18. HTTP responses HTTP code Response body 200 - OK Network schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Network Table 17.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 17.20. Body parameters Parameter Type Description body Network schema Table 17.21. HTTP responses HTTP code Response body 200 - OK Network schema 201 - Created Network schema 401 - Unauthorized Empty
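As a practical sketch of the endpoints above, the commands below exercise the GET and PATCH operations on the cluster-scoped Network resource through the oc client. The serviceNodePortRange value is illustrative; per the spec description, it is one of the few fields that can be updated after installation.

# Read the specified Network (GET /apis/config.openshift.io/v1/networks/cluster)
oc get network.config.openshift.io cluster -o yaml
# Partially update it (PATCH); serviceNodePortRange may be changed post-install
oc patch network.config.openshift.io cluster --type=merge \
  -p '{"spec":{"serviceNodePortRange":"30000-32767"}}'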
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/config_apis/network-config-openshift-io-v1
Chapter 4. Configuring Ansible automation controller on OpenShift Container Platform
Chapter 4. Configuring Ansible automation controller on OpenShift Container Platform During a Kubernetes upgrade, automation controller must be running. 4.1. Minimizing downtime during OpenShift Container Platform upgrade Make the following configuration changes in automation controller to minimize downtime during the upgrade. Prerequisites Ansible Automation Platform 2.4 Ansible automation controller 4.4 OpenShift Container Platform versions later than 4.10.42, 4.11.16, or 4.12.0 High availability (HA) deployment of Postgres Multiple worker nodes that automation controller pods can be scheduled on Procedure Enable RECEPTOR_KUBE_SUPPORT_RECONNECT in the AutomationController specification: apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: ... spec: ... ee_extra_env: | - name: RECEPTOR_KUBE_SUPPORT_RECONNECT value: enabled Enable the graceful termination feature in the AutomationController specification: termination_grace_period_seconds: <time to wait for job to finish> Configure podAntiAffinity for the web and task pods to spread out the deployment in the AutomationController specification: task_affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - awx-task topologyKey: topology.kubernetes.io/zone weight: 100 web_affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - awx-web topologyKey: topology.kubernetes.io/zone weight: 100 Configure PodDisruptionBudget in OpenShift Container Platform: --- apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: automationcontroller-job-pods spec: maxUnavailable: 0 selector: matchExpressions: - key: ansible-awx-job-id operator: Exists --- apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: automationcontroller-web-pods spec: minAvailable: 1 selector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - <automationcontroller_instance_name>-web --- apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: automationcontroller-task-pods spec: minAvailable: 1 selector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - <automationcontroller_instance_name>-task
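Once the PodDisruptionBudget definitions above are saved to a file, apply them to the namespace where automation controller runs and confirm they are active before starting the OpenShift Container Platform upgrade. This is a sketch; the file and namespace names are hypothetical.

# Apply the PodDisruptionBudget objects (file and namespace names are hypothetical)
oc apply -f controller-pdbs.yaml -n ansible-automation-platform
# Confirm the budgets are in place before the upgrade begins
oc get poddisruptionbudget -n ansible-automation-platform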
[ "apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: spec: ee_extra_env: | - name: RECEPTOR_KUBE_SUPPORT_RECONNECT value: enabled ```", "termination_grace_period_seconds: <time to wait for job to finish>", "task_affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - awx-task topologyKey: topology.kubernetes.io/zone weight: 100 web_affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - awx-web topologyKey: topology.kubernetes.io/zone weight: 100", "--- apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: automationcontroller-job-pods spec: maxUnavailable: 0 selector: matchExpressions: - key: ansible-awx-job-id operator: Exists --- apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: automationcontroller-web-pods spec: minAvailable: 1 selector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - <automationcontroller_instance_name>-web --- apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: automationcontroller-task-pods spec: minAvailable: 1 selector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - <automationcontroller_instance_name>-task" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_performance_considerations_for_operator_based_installations/assembly-configure-controller-ocp
probe::tty.receive
probe::tty.receive Name probe::tty.receive - called when a tty receives a message Synopsis tty.receive Values driver_name the driver name count the number of characters received index the tty index cp the buffer that was received id the tty id name the name of the module file fp the flag buffer
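A minimal sketch of how this probe might be used from the command line follows; the output format is an illustration, and as with any tapset probe it requires SystemTap with matching kernel debuginfo installed:
# Print every tty receive event until interrupted with Ctrl+C.
stap -e 'probe tty.receive { printf("%s (driver %s): %d chars\n", name, driver_name, count) }'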
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tty-receive
Chapter 4. Performance impact of alt-java
Chapter 4. Performance impact of alt-java The alt-java binary contains the Speculative Store Bypass (SSB) mitigation, so the performance impact of the SSB mitigation no longer exists in the java binary. Note Using alt-java might significantly reduce the performance of Java programs. You can find detailed information about some Java performance issues that might occur when using alt-java by selecting any of the Red Hat Bugzilla links listed in the Additional resources section. Additional resources (java-11-openjdk) Seccomp related performance regression in RHEL8 . (java-1.8.0-openjdk) Seccomp related performance regression in RHEL8 . CVE-2018-3639 Detail . CVE-2018-3639 hw: cpu: speculative store bypass . CVE-2018-3639 java-1.8.0-openjdk: hw: cpu: speculative store bypass (rhel-7.6)
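For context, alt-java is invoked in place of the regular java launcher and accepts the same arguments. In this sketch, MyApp and the classpath are placeholders:
alt-java -version
alt-java -cp /path/to/classes MyApp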
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/using_alt-java/altjava-performance-impact
23.15. Timekeeping
23.15. Timekeeping The guest virtual machine clock is typically initialized from the host physical machine clock. Most operating systems expect the hardware clock to be kept in UTC, which is the default setting. Accurate timekeeping on guest virtual machines is a key challenge for virtualization platforms. Different hypervisors attempt to handle the problem of timekeeping in a variety of ways. libvirt provides hypervisor-independent configuration settings for time management, using the <clock> and <timer> elements in the domain XML. The domain XML can be edited using the virsh edit command. For details, see Section 20.22, "Editing a Guest Virtual Machine's XML Configuration Settings" . ... <clock offset='localtime'> <timer name='rtc' tickpolicy='catchup' track='guest'> <catchup threshold='123' slew='120' limit='10000'/> </timer> <timer name='pit' tickpolicy='delay'/> </clock> ... Figure 23.25. Timekeeping The components of this section of the domain XML are as follows: Table 23.11. Timekeeping elements State Description <clock> The <clock> element is used to determine how the guest virtual machine clock is synchronized with the host physical machine clock. The offset attribute takes four possible values, allowing for fine grained control over how the guest virtual machine clock is synchronized to the host physical machine. Note that hypervisors are not required to support all policies across all time sources utc - Synchronizes the clock to UTC when booted. utc mode can be converted to variable mode, which can be controlled by using the adjustment attribute. If the value is reset , the conversion is not done. A numeric value forces the conversion to variable mode using the value as the initial adjustment. The default adjustment is hypervisor-specific. localtime - Synchronizes the guest virtual machine clock with the host physical machine's configured timezone when booted. The adjustment attribute behaves the same as in utc mode. timezone - Synchronizes the guest virtual machine clock to the requested time zone. variable - Gives the guest virtual machine clock an arbitrary offset applied relative to UTC or localtime , depending on the basis attribute. The delta relative to UTC (or localtime ) is specified in seconds, using the adjustment attribute. The guest virtual machine is free to adjust the RTC over time and expect that it will be honored at reboot. This is in contrast to utc and localtime mode (with the optional attribute adjustment='reset' ), where the RTC adjustments are lost at each reboot. In addition, the basis attribute can be either utc (default) or localtime . The clock element may have zero or more <timer> elements. <timer> See Note <present> Specifies whether a particular timer is available to the guest virtual machine. Can be set to yes or no . Note A <clock> element can have zero or more <timer> elements as children. The <timer> element specifies a time source used for guest virtual machine clock synchronization. In each <timer> element only the name is required, and all other attributes are optional: name - Selects which timer is being modified. The following values are acceptable: kvmclock , pit , or rtc . track - Specifies the timer track. The following values are acceptable: boot , guest , or wall . track is only valid for name="rtc" . tickpolicy - Determines what happens when the deadline for injecting a tick to the guest virtual machine is missed. The following values can be assigned: delay - Continues to deliver ticks at the normal rate. 
The guest virtual machine time will be delayed due to the late tick. catchup - Delivers ticks at a higher rate in order to catch up with the missed tick. The guest virtual machine time is not delayed once catch up is complete. In addition, there can be three optional attributes, each a positive integer: threshold, slew, and limit. merge - Merges the missed tick(s) into one tick and injects them. The guest virtual machine time may be delayed, depending on how the merge is done. discard - Throws away the missed tick(s) and continues with future injection at its default interval setting. The guest virtual machine time may be delayed, unless there is an explicit statement for handling lost ticks. Note The value utc is set as the clock offset in a virtual machine by default. However, if the guest virtual machine clock is run with the localtime value, the clock offset needs to be changed to a different value in order to have the guest virtual machine clock synchronized with the host physical machine clock. Example 23.1. Always synchronize to UTC Example 23.2. Always synchronize to the host physical machine timezone Example 23.3. Synchronize to an arbitrary time zone Example 23.4. Synchronize to UTC + arbitrary offset
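Beyond editing the XML with virsh edit, a quick way to inspect a guest's current clock configuration is shown below; the domain name rhel7-guest is a placeholder:
virsh dumpxml rhel7-guest | grep -A 4 '<clock'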
[ "<clock offset='localtime'> <timer name='rtc' tickpolicy='catchup' track='guest'> <catchup threshold='123' slew='120' limit='10000'/> </timer> <timer name='pit' tickpolicy='delay'/> </clock>", "<clock offset=\"utc\" />", "<clock offset=\"localtime\" />", "<clock offset=\"timezone\" timezone=\"Europe/Paris\" />", "<clock offset=\"variable\" adjustment=\"123456\" />" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Manipulating_the_domain_xml-Time_keeping
Chapter 6. Clustering
Chapter 6. Clustering clufter rebased to version 0.76.0 and fully supported The clufter packages provide a tool for transforming and analyzing cluster configuration formats. They can be used to assist with migration from an older stack configuration to a newer configuration that leverages Pacemaker. The clufter tool, previously available as a Technology Preview, is now fully supported. For information on the capabilities of clufter , see the clufter(1) man page or the output of the clufter -h command. For examples of clufter usage, see the following Red Hat Knowledgebase article: https://access.redhat.com/articles/2810031 . The clufter packages have been upgraded to upstream version 0.76.0, which provides a number of bug fixes and new features. Among the notable updates are the following: When converting either CMAN + RGManager stack specific configuration into the respective Pacemaker configuration (or sequence of pcs commands) with the ccs2pcs* families of commands, the clufter tool no longer refuses to convert entirely valid lvm resource agent configuration. When converting CMAN-based configuration into the analogous configuration for a Pacemaker stack with the ccs2pcs family of commands, some resource-related configuration bits previously lost in processing (such as the maximum number of failures before returning a failure to a status check) are now propagated correctly. When producing pcs commands with the cib2pcs and pcs2pcscmd families of clufter commands, proper finalized syntax is now used for the alert handlers definitions, for which the (default) behavior of single-step push of the configuration changes is now respected. When producing pcs commands, the clufter tool now supports a preferred ability to generate pcs commands that will update only the modifications made to a configuration by means of a differential update rather than pushing a wholesale update of the entire configuration. Likewise when applicable, the clufter tool now supports instructing the pcs tool to configure user permissions (ACLs). For this to work across the instances of various major versions of the document schemas, clufter gained the notion of internal on-demand format upgrades, mirroring the internal mechanics of pacemaker . Similarly, clufter is now capable of configuring the bundle feature. In any script-like output sequence such as that produced by the ccs2pcscmd and pcs2pcscmd families of clufter commands, the intended shell interpreter is now emitted as a first, commented line as also understood directly by the operating system in order to clarify where Bash rather than a mere POSIX shell is expected. This might have been misleading under some circumstances in the past. The Bash completion file for clufter no longer fails to work properly when the = character is used to specify an option's value in the sequence being completed. The clufter tool now properly detects interactive use on a terminal so as to offer more convenient representation of the outputs, and also provides better diagnostics for some previously neglected error conditions. (BZ# 1387424 , BZ# 1381522 , BZ# 1440876 , BZ# 1381531 , BZ# 1381565 ) Support for quorum devices in a Pacemaker cluster Red Hat Enterprise Linux 7.4 provides full support for quorum devices, previously available as a Technology Preview. This feature provides the ability to configure a separate quorum device (QDevice) which acts as a third-party arbitration device for the cluster.
Its primary use is to allow a cluster to sustain more node failures than standard quorum rules allow. A quorum device is recommended for clusters with an even number of nodes and highly recommended for two-node clusters. For information on configuring a quorum device, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Reference/ . (BZ#1158805) Support for Booth cluster ticket manager Red Hat Enterprise Linux 7.4 provides full support for a Booth cluster ticket manager. This feature, previously available as a Technology Preview, allows you to configure multiple high availability clusters in separate sites that communicate through a distributed service to coordinate management of resources. The Booth ticket manager facilitates a consensus-based decision process for individual tickets, ensuring that specified resources are run at only one site at a time, the site for which a ticket has been granted. For information on configuring multi-site clusters with the Booth ticket manager, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Reference/ (BZ# 1302087 , BZ#1305049) Support added for using shared storage with the SBD daemon Red Hat Enterprise Linux 7.4 provides support for using the SBD (Storage-Based Death) daemon with a shared block device. This allows you to enable fencing by means of a shared block device in addition to fencing by means of a watchdog device, which had previously been supported. The fence-agents package now provides the fence_sbd fence agent which is needed to trigger and control the actual fencing by means of an RHCS-style fence agent. SBD is not supported on Pacemaker remote nodes. (BZ# 1413951 ) Full support for CTDB resource agent The CTDB resource agent used to implement a Samba deployment is now supported in Red Hat Enterprise Linux. (BZ# 1077888 ) The High Availability and Resilient Storage Add-Ons are now available for IBM POWER, little endian Red Hat Enterprise Linux 7.4 adds support for the High Availability and Resilient Storage Add-Ons for the IBM POWER, little endian architecture. Note that this support is provided only for cluster nodes running on PowerVM on POWER8 servers. (BZ# 1289662 , BZ# 1426651 ) pcs now provides the ability to set up a cluster with encrypted corosync communication The pcs cluster setup command now supports a new --encryption flag that controls the setting of corosync encryption in a cluster. This allows users to set up a cluster with encrypted corosync communication in a not entirely trusted environment. (BZ# 1165821 ) New commands for adding and removing remote and guest nodes Red Hat Enterprise Linux 7.4 provides the following new commands for creating and removing remote and guest nodes: pcs cluster node add-guest pcs cluster node remove-guest pcs cluster node add-remote pcs cluster node remove-remote These commands replace the pcs cluster remote-node add and pcs cluster remote-node remove commands, which have been deprecated. (BZ# 1176018 , BZ# 1386512 ) Ability to configure pcsd bind addresses You can now configure pcsd bind addresses in the /etc/sysconfig/pcsd file. In previous releases, pcsd always bound to all interfaces, a situation that is not suitable for some users. By default, pcsd binds to all interfaces. (BZ# 1373614 ) New option to the pcs resource unmanage command to disable monitor operations Even when a resource is in unmanaged mode, monitor operations are still run by the cluster.
That may cause the cluster to report errors the user is not interested in, as those errors may be expected for a particular use case when the resource is unmanaged. The pcs resource unmanage command now supports the --monitor option, which disables monitor operations when putting a resource into unmanaged mode. Additionally, the pcs resource manage command also supports the --monitor option, which enables the monitor operations when putting a resource back into managed mode. (BZ# 1303969 ) Support for regular expressions in pcs command line when configuring location constraints pcs now supports regular expressions in location constraints on the command line. These constraints apply to multiple resources based on the regular expression matching the resource name. This simplifies cluster management as one constraint may be used where several were needed before. (BZ# 1362493 ) Specifying nodes in fencing topology by a regular expression or a node attribute and its value It is now possible to specify nodes in fencing topology by a regular expression applied on a node name and by a node attribute and its value. For example, the following commands configure nodes node1 , node2 , and node3 to use fence devices apc1 and apc2 , and nodes node4 , node5 , and node6 to use fence devices apc3 and apc4 . The following commands yield the same results by using node attribute matching. (BZ# 1261116 ) Support for Oracle 11g for the resource agents Oracle and OraLsnr Red Hat Enterprise Linux 7.4 provides support for Oracle Database 11g for the Oracle and OraLsnr resource agents used with Pacemaker. (BZ#1336847) Support for using SBD with shared storage Support has been added for configuring SBD (Storage-Based Death) with shared storage using the pcs commands. For information on SBD fencing, see https://access.redhat.com/articles/2943361 . (BZ# 1413958 ) Support for NodeUtilization resource agent Red Hat Enterprise Linux 7.4 supports the NodeUtilization resource agent. The NodeUtilization agent can detect the system parameters of available CPU, host memory availability, and hypervisor memory availability and add these parameters into the CIB. You can run the agent as a clone resource to have it automatically populate these parameters on each node. For information on the NodeUtilization resource agent and the resource options for this agent, run the pcs resource describe NodeUtilization command. For information on utilization and placement strategy in Pacemaker, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Reference/s1-utilization-HAAR.html . (BZ#1430304)
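As a sketch of the regular-expression location constraint support described above, the following command prefers node1 for all resources whose names match dummy[0-9]; the resource names and the node name are placeholders:
pcs constraint location 'regexp%dummy[0-9]' prefers node1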
[ "pcs stonith level add 1 \"regexp%node[1-3]\" apc1,apc2 pcs stonith level add 1 \"regexp%node[4-6]\" apc3,apc4", "pcs node attribute node1 rack=1 pcs node attribute node2 rack=1 pcs node attribute node3 rack=1 pcs node attribute node4 rack=2 pcs node attribute node5 rack=2 pcs node attribute node6 rack=2 pcs stonith level add 1 attrib%rack=1 apc1,apc2 pcs stonith level add 1 attrib%rack=2 apc3,apc4" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/new_features_clustering
Chapter 13. Triggering Scripts for Cluster Events
Chapter 13. Triggering Scripts for Cluster Events A Pacemaker cluster is an event-driven system, where an event might be a resource or node failure, a configuration change, or a resource starting or stopping. You can configure Pacemaker cluster alerts to take some external action when a cluster event occurs. You can configure cluster alerts in one of two ways: As of Red Hat Enterprise Linux 7.3, you can configure Pacemaker alerts by means of alert agents, which are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. This is the preferred, simpler method of configuring cluster alerts. Pacemaker alert agents are described in Section 13.1, "Pacemaker Alert Agents (Red Hat Enterprise Linux 7.3 and later)" . The ocf:pacemaker:ClusterMon resource can monitor the cluster status and trigger alerts on each cluster event. This resource runs the crm_mon command in the background at regular intervals. For information on the ClusterMon resource see Section 13.2, "Event Notification with Monitoring Resources" . 13.1. Pacemaker Alert Agents (Red Hat Enterprise Linux 7.3 and later) You can create Pacemaker alert agents to take some external action when a cluster event occurs. The cluster passes information about the event to the agent by means of environment variables. Agents can do anything with this information, such as send an email message or log to a file or update a monitoring system. Pacemaker provides several sample alert agents, which are installed in /usr/share/pacemaker/alerts by default. These sample scripts may be copied and used as is, or they may be used as templates to be edited to suit your purposes. Refer to the source code of the sample agents for the full set of attributes they support. See Section 13.1.1, "Using the Sample Alert Agents" for an example of a basic procedure for configuring an alert that uses a sample alert agent. General information on configuring and administering alert agents is provided in Section 13.1.2, "Alert Creation" , Section 13.1.3, "Displaying, Modifying, and Removing Alerts" , Section 13.1.4, "Alert Recipients" , Section 13.1.5, "Alert Meta Options" , and Section 13.1.6, "Alert Configuration Command Examples" . You can write your own alert agents for a Pacemaker alert to call. For information on writing alert agents, see Section 13.1.7, "Writing an Alert Agent" . 13.1.1. Using the Sample Alert Agents When you use one of the sample alert agents, you should review the script to ensure that it suits your needs. These sample agents are provided as a starting point for custom scripts for specific cluster environments. Note that while Red Hat supports the interfaces that the alert agents scripts use to communicate with Pacemaker, Red Hat does not provide support for the custom agents themselves. To use one of the sample alert agents, you must install the agent on each node in the cluster. For example, the following command installs the alert_file.sh.sample script as alert_file.sh . After you have installed the script, you can create an alert that uses the script. The following example configures an alert that uses the installed alert_file.sh alert agent to log events to a file. Alert agents run as the user hacluster , which has a minimal set of permissions. This example creates the log file pcmk_alert_file.log that will be used to record the events. It then creates the alert agent and adds the path to the log file as its recipient. 
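One way to exercise the file-based alert just configured is to generate a cluster event and then inspect the log; this is only a sketch, with node1 as a placeholder node name and the log path taken from the example above:
pcs cluster standby node1
pcs cluster unstandby node1
tail /var/log/pcmk_alert_file.log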
The following example installs the alert_snmp.sh.sample script as alert_snmp.sh and configures an alert that uses the installed alert_snmp.sh alert agent to send cluster events as SNMP traps. By default, the script will send all events except successful monitor calls to the SNMP server. This example configures the timestamp format as a meta option. For information about meta options, see Section 13.1.5, "Alert Meta Options" . After configuring the alert, this example configures a recipient for the alert and displays the alert configuration. The following example installs the alert_smtp.sh agent and then configures an alert that uses the installed alert agent to send cluster events as email messages. After configuring the alert, this example configures a recipient and displays the alert configuration. For more information on the format of the pcs alert create and pcs alert recipient add commands, see Section 13.1.2, "Alert Creation" and Section 13.1.4, "Alert Recipients" . 13.1.2. Alert Creation The following command creates a cluster alert. The options that you configure are agent-specific configuration values that are passed to the alert agent script at the path you specify as additional environment variables. If you do not specify a value for id , one will be generated. For information on alert meta options, Section 13.1.5, "Alert Meta Options" . Multiple alert agents may be configured; the cluster will call all of them for each event. Alert agents will be called only on cluster nodes. They will be called for events involving Pacemaker Remote nodes, but they will never be called on those nodes. The following example creates a simple alert that will call myscript.sh for each event. For an example that shows how to create a cluster alert that uses one of the sample alert agents, see Section 13.1.1, "Using the Sample Alert Agents" . 13.1.3. Displaying, Modifying, and Removing Alerts The following command shows all configured alerts along with the values of the configured options. The following command updates an existing alert with the specified alert-id value. The following command removes an alert with the specified alert-id value. Alternately, you can run the pcs alert delete command, which is identical to the pcs alert remove command. Both the pcs alert delete and the pcs alert remove commands allow you to specify more than one alert to be deleted. 13.1.4. Alert Recipients Usually alerts are directed towards a recipient. Thus each alert may be additionally configured with one or more recipients. The cluster will call the agent separately for each recipient. The recipient may be anything the alert agent can recognize: an IP address, an email address, a file name, or whatever the particular agent supports. The following command adds a new recipient to the specified alert. The following command updates an existing alert recipient. The following command removes the specified alert recipient. Alternately, you can run the pcs alert recipient delete command, which is identical to the pcs alert recipient remove command. Both the pcs alert recipient remove and the pcs alert recipient delete commands allow you to remove more than one alert recipient. The following example command adds the alert recipient my-alert-recipient with a recipient ID of my-recipient-id to the alert my-alert . This will configure the cluster to call the alert script that has been configured for my-alert for each event, passing the recipient some-address as an environment variable. 13.1.5. 
Alert Meta Options As with resource agents, meta options can be configured for alert agents to affect how Pacemaker calls them. Table 13.1, "Alert Meta Options" describes the alert meta options. Meta options can be configured per alert agent as well as per recipient. Table 13.1. Alert Meta Options Meta-Attribute Default Description timestamp-format %H:%M:%S.%06N Format the cluster will use when sending the event's timestamp to the agent. This is a string as used with the date(1) command. timeout 30s If the alert agent does not complete within this amount of time, it will be terminated. The following example configures an alert that calls the script myscript.sh and then adds two recipients to the alert. The first recipient has an ID of my-alert-recipient1 and the second recipient has an ID of my-alert-recipient2 . The script will get called twice for each event, with each call using a 15-second timeout. One call will be passed to the recipient [email protected] with a timestamp in the format %D %H:%M, while the other call will be passed to the recipient [email protected] with a timestamp in the format %c. 13.1.6. Alert Configuration Command Examples The following sequential examples show some basic alert configuration commands to show the format to use to create alerts, add recipients, and display the configured alerts. Note that while you must install the alert agents themselves on each node in a cluster, you need to run the pcs commands only once. The following commands create a simple alert, add two recipients to the alert, and display the configured values. Since no alert ID value is specified, the system creates an alert ID value of alert . The first recipient creation command specifies a recipient of rec_value . Since this command does not specify a recipient ID, the value of alert-recipient is used as the recipient ID. The second recipient creation command specifies a recipient of rec_value2 . This command specifies a recipient ID of my-recipient for the recipient. The following commands add a second alert and a recipient for that alert. The alert ID for the second alert is my-alert and the recipient value is my-other-recipient . Since no recipient ID is specified, the system provides a recipient ID of my-alert-recipient . The following commands modify the alert values for the alert my-alert and for the recipient my-alert-recipient . The following command removes the recipient my-alert-recipient from alert . The following command removes my-alert from the configuration. 13.1.7. Writing an Alert Agent There are three types of Pacemaker alerts: node alerts, fencing alerts, and resource alerts. The environment variables that are passed to the alert agents can differ, depending on the type of alert. Table 13.2, "Environment Variables Passed to Alert Agents" describes the environment variables that are passed to alert agents and specifies when the environment variable is associated with a specific alert type. Table 13.2. Environment Variables Passed to Alert Agents Environment Variable Description CRM_alert_kind The type of alert (node, fencing, or resource) CRM_alert_version The version of Pacemaker sending the alert CRM_alert_recipient The configured recipient CRM_alert_node_sequence A sequence number increased whenever an alert is being issued on the local node, which can be used to reference the order in which alerts have been issued by Pacemaker. An alert for an event that happened later in time reliably has a higher sequence number than alerts for earlier events.
Be aware that this number has no cluster-wide meaning. CRM_alert_timestamp A timestamp created prior to executing the agent, in the format specified by the timestamp-format meta option. This allows the agent to have a reliable, high-precision time of when the event occurred, regardless of when the agent itself was invoked (which could potentially be delayed due to system load or other circumstances). CRM_alert_node Name of affected node CRM_alert_desc Detail about event. For node alerts, this is the node's current state (member or lost). For fencing alerts, this is a summary of the requested fencing operation, including origin, target, and fencing operation error code, if any. For resource alerts, this is a readable string equivalent of CRM_alert_status . CRM_alert_nodeid ID of node whose status changed (provided with node alerts only) CRM_alert_task The requested fencing or resource operation (provided with fencing and resource alerts only) CRM_alert_rc The numerical return code of the fencing or resource operation (provided with fencing and resource alerts only) CRM_alert_rsc The name of the affected resource (resource alerts only) CRM_alert_interval The interval of the resource operation (resource alerts only) CRM_alert_target_rc The expected numerical return code of the operation (resource alerts only) CRM_alert_status A numerical code used by Pacemaker to represent the operation result (resource alerts only) When writing an alert agent, you must take the following concerns into account. Alert agents may be called with no recipient (if none is configured), so the agent must be able to handle this situation, even if it only exits in that case. Users may modify the configuration in stages, and add a recipient later. If more than one recipient is configured for an alert, the alert agent will be called once per recipient. If an agent is not able to run concurrently, it should be configured with only a single recipient. The agent is free, however, to interpret the recipient as a list. When a cluster event occurs, all alerts are fired off at the same time as separate processes. Depending on how many alerts and recipients are configured and on what is done within the alert agents, a significant load burst may occur. The agent could be written to take this into consideration, for example by queueing resource-intensive actions into some other instance, instead of directly executing them. Alert agents are run as the hacluster user, which has a minimal set of permissions. If an agent requires additional privileges, it is recommended to configure sudo to allow the agent to run the necessary commands as another user with the appropriate privileges. Take care to validate and sanitize user-configured parameters, such as CRM_alert_timestamp (whose content is specified by the user-configured timestamp-format ), CRM_alert_recipient , and all alert options. This is necessary to protect against configuration errors. In addition, if some user can modify the CIB without having hacluster -level access to the cluster nodes, this is a potential security concern as well, and you should avoid the possibility of code injection. If a cluster contains resources with operations for which the on-fail parameter is set to fence , there will be multiple fence notifications on failure, one for each resource for which this parameter is set plus one additional notification. Both the STONITH daemon and the crmd daemon will send notifications. 
Pacemaker performs only one actual fence operation in this case, however, no matter how many notifications are sent. Note The alerts interface is designed to be backward compatible with the external scripts interface used by the ocf:pacemaker:ClusterMon resource. To preserve this compatibility, the environment variables passed to alert agents are available prepended with CRM_notify_ as well as CRM_alert_ . One break in compatibility is that the ClusterMon resource ran external scripts as the root user, while alert agents are run as the hacluster user. For information on configuring scripts that are triggered by the ClusterMon , see Section 13.2, "Event Notification with Monitoring Resources" .
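Putting the environment variables above together, here is a minimal sketch of a custom alert agent; the fallback log path is an assumption, and input sanitization is omitted for brevity (see the cautions above):
#!/bin/sh
# The log file comes from the configured recipient value; fall back to a
# default path if no recipient is configured.
logfile="${CRM_alert_recipient:-/var/log/pcmk_custom_alerts.log}"
echo "${CRM_alert_timestamp} ${CRM_alert_kind} alert on ${CRM_alert_node}: ${CRM_alert_desc}" >> "$logfile"
exit 0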
[ "install --mode=0755 /usr/share/pacemaker/alerts/alert_file.sh.sample /var/lib/pacemaker/alert_file.sh", "touch /var/log/pcmk_alert_file.log chown hacluster:haclient /var/log/pcmk_alert_file.log chmod 600 /var/log/pcmk_alert_file.log pcs alert create id=alert_file description=\"Log events to a file.\" path=/var/lib/pacemaker/alert_file.sh pcs alert recipient add alert_file id=my-alert_logfile value=/var/log/pcmk_alert_file.log", "install --mode=0755 /usr/share/pacemaker/alerts/alert_snmp.sh.sample /var/lib/pacemaker/alert_snmp.sh pcs alert create id=snmp_alert path=/var/lib/pacemaker/alert_snmp.sh meta timestamp-format=\"%Y-%m-%d,%H:%M:%S.%01N\" pcs alert recipient add snmp_alert value=192.168.1.2 pcs alert Alerts: Alert: snmp_alert (path=/var/lib/pacemaker/alert_snmp.sh) Meta options: timestamp-format=%Y-%m-%d,%H:%M:%S.%01N. Recipients: Recipient: snmp_alert-recipient (value=192.168.1.2)", "install --mode=0755 /usr/share/pacemaker/alerts/alert_smtp.sh.sample /var/lib/pacemaker/alert_smtp.sh pcs alert create id=smtp_alert path=/var/lib/pacemaker/alert_smtp.sh options [email protected] pcs alert recipient add smtp_alert [email protected] pcs alert Alerts: Alert: smtp_alert (path=/var/lib/pacemaker/alert_smtp.sh) Options: [email protected] Recipients: Recipient: smtp_alert-recipient ([email protected])", "pcs alert create path= path [id= alert-id ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]", "pcs alert create id=my_alert path=/path/to/myscript.sh", "pcs alert [config|show]", "pcs alert update alert-id [path= path ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]", "pcs alert remove alert-id", "pcs alert recipient add alert-id value= recipient-value [id= recipient-id ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]", "pcs alert recipient update recipient-id [value= recipient-value ] [description= description ] [options [ option = value ]...] 
[meta [ meta-option = value ]...]", "pcs alert recipient remove recipient-id", "pcs alert recipient add my-alert value=my-alert-recipient id=my-recipient-id options value=some-address", "pcs alert create id=my-alert path=/path/to/myscript.sh meta timeout=15s pcs alert recipient add my-alert [email protected] id=my-alert-recipient1 meta timestamp-format=\"%D %H:%M\" pcs alert recipient add my-alert [email protected] id=my-alert-recipient2 meta timestamp-format=%c", "pcs alert create path=/my/path pcs alert recipient add alert value=rec_value pcs alert recipient add alert value=rec_value2 id=my-recipient pcs alert config Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2)", "pcs alert create id=my-alert path=/path/to/script description=alert_description options option1=value1 opt=val meta timeout=50s timestamp-format=\"%H%B%S\" pcs alert recipient add my-alert value=my-other-recipient pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=value1 Meta options: timestamp-format=%H%B%S timeout=50s Recipients: Recipient: my-alert-recipient (value=my-other-recipient)", "pcs alert update my-alert options option1=newvalue1 meta timestamp-format=\"%H%M%S\" pcs alert recipient update my-alert-recipient options option1=new meta timeout=60s pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=newvalue1 Meta options: timestamp-format=%H%M%S timeout=50s Recipients: Recipient: my-alert-recipient (value=my-other-recipient) Options: option1=new Meta options: timeout=60s", "pcs alert recipient remove my-recipient pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Alert: my-alert (path=/path/to/script) Description: alert_description Meta options: timestamp-format=\"%M%B%S\" timeout=50s Meta options: m=newval meta-option1=2 Recipients: Recipient: my-alert-recipient (value=my-other-recipient) Options: option1=new Meta options: timeout=60s", "pcs alert remove my-alert pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-alertscripts-haar
2.9. Procedural Language
2.9. Procedural Language 2.9.1. Procedural Language JBoss Data Virtualization supports a procedural language for defining virtual procedures. These are similar to stored procedures in relational database management systems. You can use this language to define the transformation logic for decomposing INSERT, UPDATE, and DELETE commands against views; these are known as update procedures. See Section 2.10.1, "Virtual Procedures" and Section 2.10.6, "Update Procedures" for more information. 2.9.2. Command Statement A command statement executes a DML command, DDL command or dynamic SQL against one or more data sources. See Section 2.5.1, "DML Commands" and Section 2.7.1, "DDL Commands" . Usage: Example 2.8. Example Command Statements SELECT * FROM MySchema.MyTable WHERE ColA > 100 WITHOUT RETURN; INSERT INTO MySchema.MyTable (ColA,ColB) VALUES (50, 'hi'); EXECUTE commands may access IN/OUT, OUT, and RETURN parameters. To access the return value, the statement will have the form var = EXEC proc... . To access OUT or IN/OUT values, named parameter syntax must be used. For example, EXEC proc(in_param=>'1', out_param=>var) will assign the value of the out parameter to the variable var. It is expected that the data type of the parameter will be implicitly convertible to the data type of the variable. The RETURN clause determines if the result of the command is returnable from the procedure. WITH RETURN is the default. If the command does not return a result set or the procedure does not return a result set, the RETURN clause is ignored. If WITH RETURN is specified, the result set of the command must match the expected result set of the procedure. Only the last successfully executed statement that ran WITH RETURN will be returned as the procedure result set. If there are no returnable result sets and the procedure declares that a result set will be returned, then an empty result set is returned. 2.9.3. Dynamic SQL Dynamic SQL allows for the execution of an arbitrary SQL command in a virtual procedure. Dynamic SQL is useful in situations where the exact command form is not known prior to execution. Usage: EXECUTE IMMEDIATE <expression> [ AS <variable> <type> [, <variable> <type>]* [INTO <variable>] ] [USING <variable>=<expression> [,<variable>=<expression>]*] [UPDATE <literal>] Syntax Rules: The "AS" clause is used to define the projected symbol names and types returned by the executed SQL string. The "AS" clause symbols will be matched positionally with the symbols returned by the executed SQL string. Non-convertible types or too few columns returned by the executed SQL string will result in an error. The "INTO" clause will project the dynamic SQL into the specified temp table. With the "INTO" clause specified, the dynamic command will actually execute a statement that behaves like an INSERT with a QUERY EXPRESSION. If the dynamic SQL command creates a temporary table with the "INTO" clause, then the "AS" clause is required to define the table's metadata. The "USING" clause allows the dynamic SQL string to contain variable references that are bound at runtime to specified values. This allows for some independence of the SQL string from the surrounding procedure variable names and input names. In the dynamic command "USING" clause, each variable is specified by short name only. However, in the dynamic SQL the "USING" variable must be fully qualified to "DVARS.". The "USING" clause is only for values that will be used in the dynamic SQL as legal expressions.
It is not possible to use the "USING" clause to replace table names, keywords, etc. This makes using symbols equivalent in power to normal bind (?) expressions in prepared statements. The "USING" clause helps reduce the amount of string manipulation needed. If a reference is made to a USING symbol in the SQL string that is not bound to a value in the "USING" clause, an exception will occur. The "UPDATE" clause is used to specify the updating model count. Accepted values are (0,1,*). 0 is the default value if the clause is not specified. See Section 5.3, "Updating Model Count" . Example 2.9. Example Dynamic SQL ... /* Typically complex criteria would be formed based upon inputs to the procedure. In this simple example the criteria is references the using clause to isolate the SQL string from referencing a value from the procedure directly */ DECLARE string criteria = 'Customer.Accounts.Last = DVARS.LastName'; /* Now we create the desired SQL string */ DECLARE string sql_string = 'SELECT ID, First || '' '' || Last AS Name, Birthdate FROM Customer.Accounts WHERE ' || criteria; /* The execution of the SQL string will create the #temp table with the columns (ID, Name, Birthdate). Note that we also have the USING clause to bind a value to LastName, which is referenced in the criteria. */ EXECUTE IMMEDIATE sql_string AS ID integer, Name string, Birthdate date INTO #temp USING LastName='some name'; /* The temp table can now be used with the values from the Dynamic SQL */ loop on (SELECT ID from #temp) as myCursor ... Here is an example showing a more complex approach to building criteria for the dynamic SQL string. In short, the virtual procedure AccountAccess.GetAccounts has inputs ID, LastName, and bday. If a value is specified for ID it will be the only value used in the dynamic SQL criteria. Otherwise if a value is specified for LastName the procedure will detect if the value is a search string. If bday is specified in addition to LastName, it will be used to form compound criteria with LastName. Example 2.10. Example Dynamic SQL with USING clause and dynamically built criteria string ... DECLARE string crit = null; IF (AccountAccess.GetAccounts.ID IS NOT NULL) crit = '(Customer.Accounts.ID = DVARS.ID)'; ELSE IF (AccountAccess.GetAccounts.LastName IS NOT NULL) BEGIN IF (AccountAccess.GetAccounts.LastName == '%') ERROR "Last name cannot be %"; ELSE IF (LOCATE('%', AccountAccess.GetAccounts.LastName) < 0) crit = '(Customer.Accounts.Last = DVARS.LastName)'; ELSE crit = '(Customer.Accounts.Last LIKE DVARS.LastName)'; IF (AccountAccess.GetAccounts.bday IS NOT NULL) crit = '(' || crit || ' and (Customer.Accounts.Birthdate = DVARS.BirthDay))'; END ELSE ERROR "ID or LastName must be specified."; EXECUTE IMMEDIATE 'SELECT ID, First || '' '' || Last AS Name, Birthdate FROM Customer.Accounts WHERE ' || crit USING ID=AccountAccess.GetAccounts.ID, LastName=AccountAccess.GetAccounts.LastName, BirthDay=AccountAccess.GetAccounts.Bday; ... 2.9.4. Dynamic SQL Limitations The use of dynamic SQL command results in an assignment statement requires the use of a temp table. Example 2.11. Example Assignment EXECUTE IMMEDIATE <expression> AS x string INTO #temp; DECLARE string VARIABLES.RESULT = (SELECT x FROM #temp); The construction of appropriate criteria will be cumbersome if parts of the criteria are not present. For example if "criteria" were already NULL, then the following example results in "criteria" remaining NULL. Example 2.12. Example Dangerous NULL handling ... 
criteria = '(' || criteria || ' and (Customer.Accounts.Birthdate = DVARS.BirthDay))'; The preferred approach is for the user to ensure the criteria is not NULL prior to its usage. If this is not possible, a good approach is to specify a default as shown in the following example. Example 2.13. Example NULL handling ... criteria = '(' || nvl(criteria, '(1 = 1)') || ' and (Customer.Accounts.Birthdate = DVARS.BirthDay))'; If the dynamic SQL is an UPDATE, DELETE, or INSERT command, the user needs to specify the "AS" clause only when the number of rows affected needs to be retrieved. The user will still need to provide a name and type for the return column if the INTO clause is specified. Example 2.14. Example with AS and INTO clauses /* This name does not need to match the expected update command symbol "count". */ EXECUTE IMMEDIATE <expression> AS x integer INTO #temp; Unless used in other parts of the procedure, tables in the dynamic command will not be seen as sources in Teiid Designer. When using the "AS" clause only the type information will be available to Teiid Designer. Result set columns generated from the "AS" clause will then have a default set of properties for length, precision, etc. 2.9.5. Declaration Statement A declaration statement declares a variable and its type. After you declare a variable, you can use it in that block within the procedure and any sub-blocks. A variable is initialized to null by default, but can also be assigned the value of an expression as part of the declaration statement. Usage: Example Syntax Syntax Rules: You cannot redeclare a variable with a duplicate name in a sub-block The VARIABLES group is always implied even if it is not specified. The assignment value follows the same rules as for an Assignment Statement. In addition to the standard types, you may specify EXCEPTION if declaring an exception variable. 2.9.6. Assignment Statement An assignment statement assigns a value to a variable by evaluating an expression. Usage: Example Syntax 2.9.7. Special Variables The VARIABLES.ROWCOUNT integer variable will contain the number of rows affected by the last INSERT / UPDATE / DELETE statement executed. Inserts that are processed by dynamic SQL with an INTO clause also update the ROWCOUNT . ... UPDATE FOO SET X = 1 WHERE Y = 2; DECLARE INTEGER UPDATED = VARIABLES.ROWCOUNT; ... 2.9.8. Compound Statement A compound statement (or block) logically groups a series of statements. Temporary tables and variables created in a compound statement are local only to that block and are destroyed when exiting the block. Usage: Note Where a block is expected by an IF, LOOP, WHILE, etc., a single statement is also accepted by the parser. Even though the BEGIN/END block is not expected, the statement will execute as if wrapped in a BEGIN/END pair. Syntax Rules If NOT ATOMIC or no ATOMIC clause is specified, the block will be executed non-atomically. If the ATOMIC clause is specified, the block must execute atomically. If a transaction is already associated with the thread, no additional action will be taken - savepoints and/or sub-transactions are not currently used. Otherwise a transaction will be associated with the execution of the block. The label must not be the same as any other label used in statements containing this one. 2.9.9. Exception Handling If the EXCEPTION clause is used within a compound statement, any processing exception emitted from statements will be caught, with the flow of execution transferring to the EXCEPTION statements.
Any block level transaction started by this block will commit if the exception handler successfully completes. If another exception or the original exception is emitted from the exception handler the transaction will rollback. Any temporary tables or variables specific to the BLOCK will not be available to the exception handler statements. Note Only processing exceptions, which are typically caused by errors originating at the sources or with function execution, are caught. A low-level internal error or Java RuntimeException will not be caught. To aid in the processing of a caught exception the EXCEPTION clause specifies a group name that exposes the significant fields of the exception. The exception group will contain: Variable Type Description STATE string The SQL State ERRORCODE integer The error or vendor code. In the case of an internal exception, this will be the integer suffix of the TEIIDxxxx code TEIIDCODE string The full event code. Typically TEIIDxxxx. EXCEPTION object The exception being caught, will be an instance of TeiidSQLException CHAIN object The chained exception or cause of the current exception Note JBoss Data Virtualization does not yet fully comply with the ANSI SQL specification on SQL State usage. For errors without an underlying SQLException cause, it is best to use the event code. The exception group name may not be the same as any higher level exception group or loop cursor name. Example 2.15. Example Exception Group Handling 2.9.10. If Statement An IF statement evaluates a condition and executes one of two statements depending on the result. You can nest IF statements to create complex branching logic. A dependent ELSE statement will execute its statement only if the IF statement evaluates to false. Usage: Example 2.16. Example If Statement IF ( var1 = 'North America') BEGIN ...statement... END ELSE BEGIN ...statement... END The criteria may be any valid boolean expression or an IS DISTINCT FROM predicate referencing row values: In this example, rowVal and rowValOther are references to row value group. Use this instead of update triggers on views to quickly determine if the row values are changing: IS DISTINCT FROM considers null values equivalent and never produces an UNKNOWN value. Note NULL values should be considered in the criteria of an IF statement. IS NULL criteria can be used to detect the presence of a NULL value. 2.9.11. Loop Statement A LOOP statement is an iterative control construct that is used to cursor through a result set. Usage: Syntax Rules The label must not be the same as any other label used in statements containing this one. 2.9.12. While Statement A WHILE statement is an iterative control construct that is used to execute a block repeatedly whenever a specified condition is met. Usage: Syntax Rules The label must not be the same as any other label used in statements containing this one. 2.9.13. Continue Statement A CONTINUE statement is used inside a LOOP or WHILE construct to continue with the loop by skipping over the rest of the statements in the loop. It must be used inside a LOOP or WHILE statement. Usage: Syntax Rules If the label is specified, it must exist on a containing LOOP or WHILE statement. If no label is specified, the statement will affect the closest containing LOOP or WHILE statement. 2.9.14. Break Statement A BREAK statement is used inside a LOOP or WHILE construct to break from the loop. It must be used inside a LOOP or WHILE statement. 
Usage: Syntax Rules If the label is specified, it must exist on a containing LOOP or WHILE statement. If no label is specified, the statement will affect the closest containing LOOP or WHILE statement. 2.9.15. Leave Statement A LEAVE statement is used inside a compound, LOOP, or WHILE construct to leave to the specified label. Usage: Syntax Rules The label must exist on a containing compound statement, LOOP, or WHILE statement. 2.9.16. Return Statement A Return statement gracefully exits the procedure and optionally returns a value. Usage: Syntax Rules If an expression is specified, the procedure must have a return parameter and the value must be implicitly convertible to the expected type. Even if the procedure has a return value, it is not required to specify a return value in a RETURN statement. 2.9.17. Error Statement An ERROR statement declares that the procedure has entered an error state and should abort. This statement will also roll back the current transaction, if one exists. Any valid expression can be specified after the ERROR keyword. Usage: Example 2.17. Example Error Statement ERROR 'Invalid input value: ' || nvl(Acct.GetBalance.AcctID, 'null'); An ERROR statement is equivalent to: 2.9.18. Raise Statement A RAISE statement is used to raise an exception or warning. When raising an exception, this statement will also roll back the current transaction, if one exists. Usage: Where exception may be a variable reference to an exception or an exception expression. Syntax Rules If SQLWARNING is specified, the exception will be sent to the client as a warning and the procedure will continue to execute. A null warning will be ignored. A null non-warning exception will still cause an exception to be raised. Example 2.18. Example Raise Statement 2.9.19. Exception Expression An exception expression creates an exception that can be raised or used as a warning. Usage: Syntax Rules Any of the values may be null; message and state are string expressions specifying the exception message and SQL state respectively. JBoss Data Virtualization does not yet fully comply with the ANSI SQL specification on SQL state usage, but you are allowed to set any SQL state you choose. code is an integer expression specifying the vendor code exception must be a variable reference to an exception or an exception expression and will be chained to the resulting exception as its parent.
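To tie several of these constructs together, the following is a hedged sketch of a virtual procedure body; MySchema.Orders and its Amount column are hypothetical, and the RETURN assumes the procedure declares an integer return parameter:
BEGIN
    /* Sum non-null order amounts with a cursor loop. */
    DECLARE integer total = 0;
    LOOP ON (SELECT Amount FROM MySchema.Orders) AS cur
    BEGIN
        IF (cur.Amount IS NULL)
            CONTINUE;
        total = total + cur.Amount;
    END
    /* Warn the client, but keep executing, if nothing was summed. */
    IF (total = 0)
        RAISE SQLWARNING SQLEXCEPTION 'no order amounts found' SQLSTATE '01000';
    RETURN total;
END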
[ "command [(WITH|WITHOUT) RETURN];", "SELECT * FROM MySchema.MyTable WHERE ColA > 100 WITHOUT RETURN; INSERT INTO MySchema.MyTable (ColA,ColB) VALUES (50, 'hi');", "/* Typically complex criteria would be formed based upon inputs to the procedure. In this simple example the criteria is references the using clause to isolate the SQL string from referencing a value from the procedure directly */ DECLARE string criteria = 'Customer.Accounts.Last = DVARS.LastName'; /* Now we create the desired SQL string */ DECLARE string sql_string = 'SELECT ID, First || '' '' || Last AS Name, Birthdate FROM Customer.Accounts WHERE ' || criteria; /* The execution of the SQL string will create the #temp table with the columns (ID, Name, Birthdate). Note that we also have the USING clause to bind a value to LastName, which is referenced in the criteria. */ EXECUTE IMMEDIATE sql_string AS ID integer, Name string, Birthdate date INTO #temp USING LastName='some name'; /* The temp table can now be used with the values from the Dynamic SQL */ loop on (SELECT ID from #temp) as myCursor", "DECLARE string crit = null; IF (AccountAccess.GetAccounts.ID IS NOT NULL) crit = '(Customer.Accounts.ID = DVARS.ID)'; ELSE IF (AccountAccess.GetAccounts.LastName IS NOT NULL) BEGIN IF (AccountAccess.GetAccounts.LastName == '%') ERROR \"Last name cannot be %\"; ELSE IF (LOCATE('%', AccountAccess.GetAccounts.LastName) < 0) crit = '(Customer.Accounts.Last = DVARS.LastName)'; ELSE crit = '(Customer.Accounts.Last LIKE DVARS.LastName)'; IF (AccountAccess.GetAccounts.bday IS NOT NULL) crit = '(' || crit || ' and (Customer.Accounts.Birthdate = DVARS.BirthDay))'; END ELSE ERROR \"ID or LastName must be specified.\"; EXECUTE IMMEDIATE 'SELECT ID, First || '' '' || Last AS Name, Birthdate FROM Customer.Accounts WHERE ' || crit USING ID=AccountAccess.GetAccounts.ID, LastName=AccountAccess.GetAccounts.LastName, BirthDay=AccountAccess.GetAccounts.Bday;", "EXECUTE IMMEDIATE <expression> AS x string INTO #temp; DECLARE string VARIABLES.RESULT = (SELECT x FROM #temp);", "criteria = '(' || criteria || ' and (Customer.Accounts.Birthdate = DVARS.BirthDay))';", "criteria = '(' || nvl(criteria, '(1 = 1)') || ' and (Customer.Accounts.Birthdate = DVARS.BirthDay))';", "/* This name does not need to match the expected update command symbol \"count\". 
*/ EXECUTE IMMEDIATE <expression> AS x integer INTO #temp;", "DECLARE <type> [VARIABLES.]<name> [= <expression>];", "declare integer x; declare string VARIABLES.myvar = 'value';", "<variable reference> = <expression>;", "myString = 'Thank you'; VARIABLES.x = (SELECT Column1 FROM MySchema.MyTable);", "UPDATE FOO SET X = 1 WHERE Y = 2; DECLARE INTEGER UPDATED = VARIABLES.ROWCOUNT;", "[label :] BEGIN [[NOT] ATOMIC] statement* [EXCEPTION ex statement* ] END", "BEGIN DECLARE EXCEPTION e = SQLEXCEPTION 'this is bad' SQLSTATE 'xxxxx'; RAISE variables.e; EXCEPTION e IF (e.state = 'xxxxx') //in this trivial example, we'll always hit this branch and log the exception RAISE SQLWARNING e.exception; ELSE RAISE e.exception; END", "IF (criteria) block [ELSE block] END", "IF ( var1 = 'North America') BEGIN ...statement END ELSE BEGIN ...statement END", "rowVal IS [NOT] DISTINCT FROM rowValOther", "IF ( \"new\" IS DISTINCT FROM \"old\") BEGIN ...statement END", "[label :] LOOP ON <select statement> AS <cursorname> block", "[label :] WHILE <criteria> block", "CONTINUE [label];", "BREAK [label];", "LEAVE label;", "RETURN [expression];", "ERROR message;", "ERROR 'Invalid input value: ' || nvl(Acct.GetBalance.AcctID, 'null');", "RAISE SQLEXCEPTION message;", "RAISE [SQLWARNING] exception;", "RAISE SQLWARNING SQLEXCEPTION 'invalid' SQLSTATE '05000';", "SQLEXCEPTION message [SQLSTATE state [, code]] CHAIN exception" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-Procedural_Language
Chapter 9. Machine [machine.openshift.io/v1beta1]
Chapter 9. Machine [machine.openshift.io/v1beta1] Description Machine is the Schema for the machines API Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object MachineSpec defines the desired state of Machine status object MachineStatus defines the observed state of Machine 9.1.1. .spec Description MachineSpec defines the desired state of Machine Type object Property Type Description lifecycleHooks object LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. metadata object ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. providerID string ProviderID is the identification ID of the machine provided by the provider. This field must match the provider ID as seen on the node object corresponding to this machine. This field is required by higher level consumers of cluster-api. An example use case is the cluster autoscaler with cluster-api as provider. Clean-up logic in the autoscaler compares machines to nodes to find machines at the provider that could not be registered as Kubernetes nodes. With cluster-api as a generic out-of-tree provider for autoscaler, this field is required by the autoscaler to be able to have a provider view of the list of machines. A list of nodes is also queried from the k8s apiserver and compared against the machines to find unregistered machines, which are then marked for deletion. This field will be set by the actuators and consumed by higher level entities like the autoscaler that will be interfacing with cluster-api as a generic provider. providerSpec object ProviderSpec details Provider-specific configuration to use during node creation. taints array The list of the taints to be applied to the corresponding Node in an additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled (e.g. if you ask the machine controller to apply a taint and then manually remove the taint, the machine controller will put it back), but the machine controller will not remove any taints. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. 9.1.2. .spec.lifecycleHooks Description LifecycleHooks allow users to pause operations on the machine at certain predefined points within the machine lifecycle. Type object Property Type Description preDrain array PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination.
preDrain[] object LifecycleHook represents a single instance of a lifecycle hook preTerminate array PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks are actioned after the Machine has been drained. preTerminate[] object LifecycleHook represents a single instance of a lifecycle hook 9.1.3. .spec.lifecycleHooks.preDrain Description PreDrain hooks prevent the machine from being drained. This also blocks further lifecycle events, such as termination. Type array 9.1.4. .spec.lifecycleHooks.preDrain[] Description LifecycleHook represents a single instance of a lifecycle hook Type object Required name owner Property Type Description name string Name defines a unique name for the lifecycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase or it may be namespaced, e.g. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook. 9.1.5. .spec.lifecycleHooks.preTerminate Description PreTerminate hooks prevent the machine from being terminated. PreTerminate hooks are actioned after the Machine has been drained. Type array 9.1.6. .spec.lifecycleHooks.preTerminate[] Description LifecycleHook represents a single instance of a lifecycle hook Type object Required name owner Property Type Description name string Name defines a unique name for the lifecycle hook. The name should be unique and descriptive, ideally 1-3 words, in CamelCase or it may be namespaced, e.g. foo.example.com/CamelCase. Names must be unique and should only be managed by a single entity. owner string Owner defines the owner of the lifecycle hook. This should be descriptive enough so that users can identify who/what is responsible for blocking the lifecycle. This could be the name of a controller (e.g. clusteroperator/etcd) or an administrator managing the hook. 9.1.7. .spec.metadata Description ObjectMeta will autopopulate the Node created. Use this to indicate what labels, annotations, name prefix, etc., should be used when creating the Node. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names namespace string Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces ownerReferences array List of objects depended on by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. ownerReferences[] object OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. 9.1.8. .spec.metadata.ownerReferences Description List of objects depended on by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. Type array 9.1.9. .spec.metadata.ownerReferences[] Description OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. Type object Required apiVersion kind name uid Property Type Description apiVersion string API version of the referent. blockOwnerDeletion boolean If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. controller boolean If true, this reference points to the managing controller. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids 9.1.10. .spec.providerSpec Description ProviderSpec details Provider-specific configuration to use during node creation.
Type object Property Type Description value `` Value is an inlined, serialized representation of the resource configuration. It is recommended that providers maintain their own versioned API types that should be serialized/deserialized from this field, akin to component config. 9.1.11. .spec.taints Description The list of the taints to be applied to the corresponding Node in an additive manner. This list will not overwrite any other taints added to the Node on an ongoing basis by other entities. These taints should be actively reconciled (e.g. if you ask the machine controller to apply a taint and then manually remove the taint, the machine controller will put it back), but the machine controller will not remove any taints. Type array 9.1.12. .spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required effect key Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. key string Required. The taint key to be applied to a node. timeAdded string TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 9.1.13. .status Description MachineStatus defines the observed state of Machine Type object Property Type Description addresses array Addresses is a list of addresses assigned to the machine. Queried from cloud provider, if available. addresses[] object NodeAddress contains information for the node's address. conditions array Conditions defines the current state of the Machine conditions[] object Condition defines an observation of a Machine API resource operational state. errorMessage string ErrorMessage will be set in the event that there is a terminal problem reconciling the Machine and will contain a more verbose string suitable for logging and human consumption. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the Machine's spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the Machine object and/or logged in the controller's output. errorReason string ErrorReason will be set in the event that there is a terminal problem reconciling the Machine and will contain a succinct value suitable for machine interpretation. This field should not be set for transitive errors that a controller faces that are expected to be fixed automatically over time (like service outages), but instead indicate that something is fundamentally wrong with the Machine's spec or the configuration of the controller, and that manual intervention is required. Examples of terminal errors would be invalid combinations of settings in the spec, values that are unsupported by the controller, or the responsible controller itself being critically misconfigured. Any transient errors that occur during the reconciliation of Machines can be added as events to the Machine object and/or logged in the controller's output.
lastOperation object LastOperation describes the last operation performed by the machine controller. This API should be useful as a history in terms of the latest operation performed on the specific machine. It should also convey the state of the latest operation, for example whether it is still ongoing, failed, or completed successfully. lastUpdated string LastUpdated identifies when this status was last observed. nodeRef object NodeRef will point to the corresponding Node if it exists. phase string Phase represents the current phase of machine actuation. One of: Failed, Provisioning, Provisioned, Running, Deleting providerStatus `` ProviderStatus details a Provider-specific status. It is recommended that providers maintain their own versioned API types that should be serialized/deserialized from this field. 9.1.14. .status.addresses Description Addresses is a list of addresses assigned to the machine. Queried from cloud provider, if available. Type array 9.1.15. .status.addresses[] Description NodeAddress contains information for the node's address. Type object Required address type Property Type Description address string The node address. type string Node address type, one of Hostname, ExternalIP or InternalIP. 9.1.16. .status.conditions Description Conditions defines the current state of the Machine Type array 9.1.17. .status.conditions[] Description Condition defines an observation of a Machine API resource operational state. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string Last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string A human readable message indicating details about the transition. This field may be empty. reason string The reason for the condition's last transition in CamelCase. The specific API may choose whether or not this field is considered a guaranteed API. This field may not be empty. severity string Severity provides an explicit classification of Reason code, so the users or machines can immediately understand the current situation and act accordingly. The Severity field MUST be set only when Status=False. status string Status of the condition, one of True, False, Unknown. type string Type of condition in CamelCase or in foo.example.com/CamelCase. Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. 9.1.18. .status.lastOperation Description LastOperation describes the last operation performed by the machine controller. This API should be useful as a history in terms of the latest operation performed on the specific machine. It should also convey the state of the latest operation, for example whether it is still ongoing, failed, or completed successfully. Type object Property Type Description description string Description is the human-readable description of the last operation. lastUpdated string LastUpdated is the timestamp at which the LastOperation API was last updated. state string State is the current status of the last performed operation. E.g. Processing, Failed, Successful, etc. type string Type is the type of operation which was last performed. E.g. Create, Delete, Update, etc. 9.1.19. .status.nodeRef Description NodeRef will point to the corresponding Node if it exists.
Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 9.2. API endpoints The following API endpoints are available: /apis/machine.openshift.io/v1beta1/machines GET : list objects of kind Machine /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machines DELETE : delete collection of Machine GET : list objects of kind Machine POST : create a Machine /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machines/{name} DELETE : delete a Machine GET : read the specified Machine PATCH : partially update the specified Machine PUT : replace the specified Machine /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machines/{name}/status GET : read status of the specified Machine PATCH : partially update status of the specified Machine PUT : replace status of the specified Machine 9.2.1. /apis/machine.openshift.io/v1beta1/machines HTTP method GET Description list objects of kind Machine Table 9.1. HTTP responses HTTP code Response body 200 - OK MachineList schema 401 - Unauthorized Empty 9.2.2. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machines HTTP method DELETE Description delete collection of Machine Table 9.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Machine Table 9.3. HTTP responses HTTP code Response body 200 - OK MachineList schema 401 - Unauthorized Empty HTTP method POST Description create a Machine Table 9.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.5. Body parameters Parameter Type Description body Machine schema Table 9.6. HTTP responses HTTP code Response body 200 - OK Machine schema 201 - Created Machine schema 202 - Accepted Machine schema 401 - Unauthorized Empty 9.2.3. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machines/{name} Table 9.7. Global path parameters Parameter Type Description name string name of the Machine HTTP method DELETE Description delete a Machine Table 9.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Machine Table 9.10. HTTP responses HTTP code Response body 200 - OK Machine schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Machine Table 9.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.12. HTTP responses HTTP code Response body 200 - OK Machine schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Machine Table 9.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.14. Body parameters Parameter Type Description body Machine schema Table 9.15. HTTP responses HTTP code Response body 200 - OK Machine schema 201 - Created Machine schema 401 - Unauthorized Empty 9.2.4. /apis/machine.openshift.io/v1beta1/namespaces/{namespace}/machines/{name}/status Table 9.16. Global path parameters Parameter Type Description name string name of the Machine HTTP method GET Description read status of the specified Machine Table 9.17. HTTP responses HTTP code Response body 200 - OK Machine schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Machine Table 9.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.19. HTTP responses HTTP code Response body 200 - OK Machine schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Machine Table 9.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.21. Body parameters Parameter Type Description body Machine schema Table 9.22. HTTP responses HTTP code Response body 200 - OK Machine schema 201 - Created Machine schema 401 - Unauthorized Empty
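The endpoints above correspond to standard operations with the oc client against the openshift-machine-api namespace, where Machine objects reside in a default cluster. The following is a hedged sketch rather than part of this reference: the machine name placeholder, the hook name ExampleCheck , and the owner string are illustrative assumptions.
USD oc get machines.machine.openshift.io -n openshift-machine-api
USD oc patch machine.machine.openshift.io <machine_name> -n openshift-machine-api --type merge -p '{"spec":{"lifecycleHooks":{"preDrain":[{"name":"ExampleCheck","owner":"clusteroperator/example"}]}}}'
The first command is a GET on the namespaced machines endpoint; the second is a PATCH that adds a preDrain lifecycle hook of the shape described in section 9.1.4.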
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_apis/machine-machine-openshift-io-v1beta1
14.10. Configuring Certificate Transparency
14.10. Configuring Certificate Transparency Certificate System provides a basic version of Certificate Transparency (CT) V1 support (RFC 6962). It has the capability of issuing certificates with embedded Signed Certificate Timestamps (SCTs) from any trusted log where each deployment site chooses to have its root CA cert included. You can also configure the system to support multiple CT logs. A minimum of one trusted CT log is required for this feature to work. Important It is the responsibility of the deployment site to establish its trust relationship with a trusted CT log server. To configure Certificate Transparency, edit the CA's CS.cfg file, located in the /var/lib/pki/<instance_name>/ca/conf/ directory. For more information on how to test your Certificate Transparency setup, see the Testing Certificate Transparency section in the Red Hat Certificate System Planning, Installation, and Deployment Guide. 14.10.1. ca.certTransparency.mode The ca.certTransparency.mode parameter specifies one of three Certificate Transparency modes: disabled : issued certs will not carry the SCT extension enabled : issued certs will carry the SCT extension perProfile : certs enrolled through those profiles that contain the following policyset will carry the SCT extension: SignedCertificateTimestampListExtDefaultImpl The default value is disabled . 14.10.2. ca.certTransparency.log.num ca.certTransparency.log.num specifies the total number of CT logs defined in the configuration. Note Not all CT log entries that are defined are considered active; see ca.certTransparency.log.<id>.enable in Section 14.10.3, "ca.certTransparency.log.<id>.*" . 14.10.3. ca.certTransparency.log.<id>.* ca.certTransparency.log.<id>.* specifies information pertaining to log <id>, where <id> is a unique id you assign to the CT log server to differentiate it from other CT logs. The following parameter names are appended to ca.certTransparency.log.<id>. and apply to log <id>: ca.certTransparency.log.<id>.enable specifies whether the <id> CT log is enabled (true) or disabled (false). ca.certTransparency.log.<id>.pubKey contains the base64 encoding of the CT log's public key. ca.certTransparency.log.<id>.url contains the base64 encoding of the CT log URL. ca.certTransparency.log.<id>.version specifies the CT version number that Certificate System supports (as well as the CT log server); currently only version 1 is supported.
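As an illustration, the following CS.cfg excerpt is a minimal sketch that enables Certificate Transparency with a single active log; the log id 1 and the base64 placeholder values are assumptions and must be replaced with the values published by your trusted CT log:
ca.certTransparency.mode=enabled
ca.certTransparency.log.num=1
ca.certTransparency.log.1.enable=true
ca.certTransparency.log.1.pubKey=<base64_encoded_log_public_key>
ca.certTransparency.log.1.url=<base64_encoded_log_url>
ca.certTransparency.log.1.version=1
Restart the CA instance after editing CS.cfg so that the new values take effect.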
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/configuring-certificate-transparency
Chapter 2. Configuring an Azure Stack Hub account
Chapter 2. Configuring an Azure Stack Hub account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure Stack Hub account. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 2.1. Azure Stack Hub account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters. The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Description vCPU 56 A default cluster requires 56 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap, control plane, and worker machines use Standard_DS4_v2 virtual machines, which use 8 vCPUs, a default cluster requires 56 vCPUs. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. VNet 1 Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 Each cluster creates the following load balancers: default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 2 The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Additional resources Optimizing storage. 2.2. Configuring a DNS zone in Azure Stack Hub To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain.
To delegate a registrar's DNS zone to Azure Stack Hub, see Microsoft's documentation for Azure Stack Hub datacenter DNS integration. 2.3. Required Azure Stack Hub roles Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use: Owner To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation. 2.4. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI. Your Azure account has the required roles for the subscription that you use. Procedure Register your environment: USD az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1 1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. See the Microsoft documentation for details. Set the active environment: USD az cloud set -n AzureStackCloud Update your environment configuration to use the specific API version for Azure Stack Hub: USD az cloud update --profile 2019-03-01-hybrid Log in to the Azure CLI: USD az login If you are in a multitenant environment, you must also supply the tenant ID. If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter matches the tenant ID of the subscription that you want to use. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureStackCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal.
Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Additional resources For more information about CCO modes, see About the Cloud Credential Operator. 2.5. Next steps Install an OpenShift Container Platform cluster: Installing a cluster quickly on Azure Stack Hub. Install an OpenShift Container Platform cluster on Azure Stack Hub with user-provisioned infrastructure by following Installing a cluster on Azure Stack Hub using ARM templates.
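Before proceeding, you can optionally confirm that the recorded service principal credentials authenticate correctly. This is a hedged sketch; substitute the appId , password , and tenantId values recorded above:
USD az login --service-principal -u <appId> -p <password> --tenant <tenantId>
USD az account show
If the login succeeds and az account show reports the expected subscription, the credentials are ready for use by the installation program.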
[ "az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1", "az cloud set -n AzureStackCloud", "az cloud update --profile 2019-03-01-hybrid", "az login", "az account list --refresh", "[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_azure_stack_hub/installing-azure-stack-hub-account
Chapter 2. Cluster Operation
Chapter 2. Cluster Operation This chapter provides a summary of the various cluster functions and features. From establishing cluster quorum to node fencing for isolation, these disparate features comprise the core functionality of the High Availability Add-On. 2.1. Quorum Overview In order to maintain cluster integrity and availability, cluster systems use a concept known as quorum to prevent data corruption and loss. A cluster has quorum when more than half of the cluster nodes are online. To mitigate the chance of data corruption due to failure, Pacemaker by default stops all resources if the cluster does not have quorum. Quorum is established using a voting system. When a cluster node does not function as it should or loses communication with the rest of the cluster, the majority of working nodes can vote to isolate and, if needed, fence the node for servicing. For example, in a 6-node cluster, quorum is established when at least 4 cluster nodes are functioning. If the majority of nodes go offline or become unavailable, the cluster no longer has quorum and Pacemaker stops clustered services. The quorum features in Pacemaker prevent what is also known as split-brain, a phenomenon in which cluster nodes lose communication with one another but each part continues operating as a separate cluster, potentially writing to the same data and possibly causing corruption or loss. Quorum support in the High Availability Add-On is provided by a Corosync plug-in called votequorum , which allows administrators to configure a cluster with a number of votes assigned to each system in the cluster, and ensures that cluster operations are allowed to proceed only when a majority of the votes are present. In a situation where there is no majority (such as a two-node cluster where the internal communication network link becomes unavailable, resulting in a 50% cluster split), votequorum can be configured to have a tiebreaker policy, which administrators can configure to continue quorum using the remaining cluster nodes that are still in contact with the available cluster node that has the lowest node ID.
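As an illustration of how votequorum is configured, the following corosync.conf excerpt is a minimal sketch for a two-node cluster; the two_node setting shown here is an assumption about your topology and applies only to clusters of exactly two nodes:
quorum {
    provider: corosync_votequorum
    two_node: 1
}
With two_node: 1 , votequorum treats a 50% split in a two-node cluster as quorate instead of stopping services, which is why two-node clusters also require fencing to protect shared data.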
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_overview/ch-operation-haao
A.5. Explanation of Settings in the New Template Window
A.5. Explanation of Settings in the New Template Window The following table details the settings for the New Template window. Note The following tables do not include information on whether a power cycle is required because that information is not applicable to this scenario. New Template Settings Field Description/Action Name The name of the template. This is the name by which the template is listed in the Templates tab in the Administration Portal and is accessed via the REST API. This text field has a 40-character limit and must be a unique name within the data center with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. The name can be reused in different data centers in the environment. Description A description of the template. This field is recommended but not mandatory. Comment A field for adding plain text, human-readable comments regarding the template. Cluster The cluster with which the template is associated. By default, this is the same as that of the source virtual machine. You can select any cluster in the data center. CPU Profile The CPU profile assigned to the template. CPU profiles define the maximum amount of processing capability a virtual machine can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. CPU profiles are defined on the cluster level based on quality of service entries created for data centers. Create as a Template Sub-Version Specifies whether the template is created as a new version of an existing template. Select this check box to access the settings for configuring this option. Root Template : The template under which the sub-template is added. Sub-Version Name : The name of the template. This is the name by which the template is accessed when creating a new virtual machine based on the template. If the virtual machine is stateless, the list of sub-versions will contain a latest option rather than the name of the latest sub-version. This option automatically applies the latest template sub-version to the virtual machine upon reboot. Sub-versions are particularly useful when working with pools of stateless virtual machines. Disks Allocation Alias - An alias for the virtual disk used by the template. By default, the alias is set to the same value as that of the source virtual machine. Virtual Size - The total amount of disk space that a virtual machine based on the template can use. This value cannot be edited, and is provided for reference only. This value corresponds with the size, in GB, that was specified when the disk was created or edited. Format - The format of the virtual disk used by the template. The available options are QCOW2 and Raw. By default, the format is set to Raw. Target - The storage domain on which the virtual disk used by the template is stored. By default, the storage domain is set to the same value as that of the source virtual machine. You can select any storage domain in the cluster. Disk Profile - The disk profile to assign to the virtual disk used by the template. Disk profiles are created based on storage profiles defined in the data centers. For more information, see Creating a Disk Profile. Allow all users to access this Template Specifies whether a template is public or private. A public template can be accessed by all users, whereas a private template can only be accessed by users with the TemplateAdmin or SuperUser roles.
Copy VM permissions Copies explicit permissions that have been set on the source virtual machine to the template. Seal Template (Linux only) Specifies whether a template is sealed. 'Sealing' is an operation that erases all machine-specific configurations from a filesystem, including SSH keys, UDEV rules, MAC addresses, system ID, and hostname. This setting prevents a virtual machine based on this template from inheriting the configuration of the source virtual machine.
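Because templates are also exposed through the REST API, they can be listed outside the Administration Portal. The following curl sketch is illustrative; the Manager host name and credentials are assumptions, and in production you should validate the Manager's certificate with --cacert instead of using --insecure :
USD curl --insecure --user 'admin@internal:<password>' --header 'Accept: application/xml' 'https://<manager_fqdn>/ovirt-engine/api/templates'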
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/explanation_of_settings_in_the_new_template_and_edit_template_windows
15.3.5. Installing via FTP, HTTP, or HTTPS
15.3.5. Installing via FTP, HTTP, or HTTPS Important When you provide a URL to an installation source, you must explicitly specify http:// , https:// , or ftp:// as the protocol. The URL dialog applies only if you are installing from an FTP, HTTP, or HTTPS server (if you selected URL in the Installation Method dialog). This dialog prompts you for information about the FTP, HTTP, or HTTPS server from which you are installing Red Hat Enterprise Linux. If you used the repo=ftp or repo=http boot options, you already specified a server and path. Enter the name or IP address of the FTP, HTTP, or HTTPS site from which you are installing, and the name of the directory that contains the /images directory for your architecture. For example: /mirrors/redhat/rhel-6.9/Server/ppc64/ To install via a secure HTTPS connection, specify https:// as the protocol. Specify the address of a proxy server, and if necessary, provide a port number, username, and password. If everything was specified properly, a message box appears indicating that files are being retrieved from the server. If your FTP, HTTP, or HTTPS server requires user authentication, specify the user name and password as part of the URL as follows: {ftp|http|https}://<user>:<password>@<hostname>[:<port>]/<directory>/ For example: http://install:[email protected]/mirrors/redhat/rhel-6.9/Server/ppc64/ Figure 15.11. URL Setup Dialog Proceed with Chapter 16, Installing Using Anaconda.
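For reference, the repo= boot option mentioned above pre-fills this dialog. The following is a sketch with an assumed mirror host; append it to the installer boot command line:
linux repo=http://mirror.example.com/mirrors/redhat/rhel-6.9/Server/ppc64/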
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-begininstall-url-ppc
Chapter 4. Deploying the configured back ends
Chapter 4. Deploying the configured back ends To deploy the configured back ends, complete the following steps: Procedure Log in as the stack user. Run the following command to deploy the custom back end configuration: USD openstack overcloud deploy --templates -e /home/stack/templates/custom-env.yaml Important If you passed any extra environment files when you created the overcloud, pass them again here using the -e option to avoid making undesired changes to the overcloud. For more information, see Modifying the overcloud environment in the Director Installation and Usage guide.
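For example, if the overcloud was originally deployed with one additional environment file, a hedged sketch of the redeploy command looks like the following; network-environment.yaml is an assumed example of a previously passed file:
USD openstack overcloud deploy --templates -e /home/stack/templates/network-environment.yaml -e /home/stack/templates/custom-env.yaml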
[ "openstack overcloud deploy --templates -e /home/stack/templates/custom-env.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/custom_block_storage_back_end_deployment_guide/proc_deploying-configured-back-ends_custom-cinder-back-end
Chapter 3. Installer-provisioned infrastructure
Chapter 3. Installer-provisioned infrastructure 3.1. Preparing to install a cluster on Azure To prepare for installation of an OpenShift Container Platform cluster on Azure, ensure that you have completed the following steps: You have selected a cluster installation method. You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you have configured it to allow the sites that your cluster requires access to. 3.1.1. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.1.2. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519. Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.1.3. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer. Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.1.4. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
Important
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc.

Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the architecture from the Product Variant drop-down list.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.18 Linux Clients entry and save the file.
Unpack the archive:

USD tar xvf <file>

Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

USD echo USDPATH

Verification

After you install the OpenShift CLI, it is available using the oc command:

USD oc <command>

Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.18 Windows Client entry and save the file.
Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:

C:\> path

Verification

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.18 macOS Clients entry and save the file.

Note
For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry.

Unpack and unzip the archive.
Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

USD echo USDPATH

Verification

Verify your installation by using an oc command:

USD oc <command>

3.1.5. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

For more information about the Telemetry service, see About remote health monitoring.

3.1.6. Preparing an Azure Disk Encryption Set

The OpenShift Container Platform installer can use an existing Disk Encryption Set with a user-managed key. To enable this feature, you can create a Disk Encryption Set in Azure and provide the key to the installer.
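The following procedure runs entirely through the Azure CLI. Before you begin, you can confirm that the CLI is logged in to the subscription you intend to use; for example, this standard command prints the active subscription:

USD az account show -o table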
Procedure

Set the environment variables for the Azure resource group by running the following command:

USD export RESOURCEGROUP="<resource_group>" \ 1
  LOCATION="<location>" 2

1 Specifies the name of the Azure resource group where the Disk Encryption Set and encryption key are to be created. To prevent losing access to your keys when you destroy the cluster, create the Disk Encryption Set in a separate resource group from the one where you install the cluster.
2 Specifies the Azure location where the resource group is to be created.

Set the environment variables for the Azure Key Vault and Disk Encryption Set by running the following command:

USD export KEYVAULT_NAME="<keyvault_name>" \ 1
  KEYVAULT_KEY_NAME="<keyvault_key_name>" \ 2
  DISK_ENCRYPTION_SET_NAME="<disk_encryption_set_name>" 3

1 Specifies the name of the Azure Key Vault to be created.
2 Specifies the name of the encryption key to be created.
3 Specifies the name of the disk encryption set to be created.

Set the environment variable for the ID of your Azure service principal by running the following command:

USD export CLUSTER_SP_ID="<service_principal_id>" 1

1 Specifies the ID of the service principal to be used for installation.

Enable host-level encryption in Azure by running the following commands:

USD az feature register --namespace "Microsoft.Compute" --name "EncryptionAtHost"

USD az feature show --namespace Microsoft.Compute --name EncryptionAtHost

USD az provider register -n Microsoft.Compute

Create an Azure resource group to hold the disk encryption set and associated resources by running the following command:

USD az group create --name USDRESOURCEGROUP --location USDLOCATION

Create an Azure Key Vault by running the following command:

USD az keyvault create -n USDKEYVAULT_NAME -g USDRESOURCEGROUP -l USDLOCATION \
  --enable-purge-protection true

Create an encryption key in the key vault by running the following command:

USD az keyvault key create --vault-name USDKEYVAULT_NAME -n USDKEYVAULT_KEY_NAME \
  --protection software

Capture the ID of the key vault by running the following command:

USD KEYVAULT_ID=USD(az keyvault show --name USDKEYVAULT_NAME --query "[id]" -o tsv)

Capture the key URL in the key vault by running the following command:

USD KEYVAULT_KEY_URL=USD(az keyvault key show --vault-name USDKEYVAULT_NAME --name \
  USDKEYVAULT_KEY_NAME --query "[key.kid]" -o tsv)

Create a disk encryption set by running the following command:

USD az disk-encryption-set create -n USDDISK_ENCRYPTION_SET_NAME -l USDLOCATION -g \
  USDRESOURCEGROUP --source-vault USDKEYVAULT_ID --key-url USDKEYVAULT_KEY_URL

Grant the DiskEncryptionSet resource access to the key vault by running the following commands:

USD DES_IDENTITY=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g \
  USDRESOURCEGROUP --query "[identity.principalId]" -o tsv)

USD az keyvault set-policy -n USDKEYVAULT_NAME -g USDRESOURCEGROUP --object-id \
  USDDES_IDENTITY --key-permissions wrapkey unwrapkey get

Grant the Azure service principal permission to read the Disk Encryption Set by running the following commands:

USD DES_RESOURCE_ID=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g \
  USDRESOURCEGROUP --query "[id]" -o tsv)

USD az role assignment create --assignee USDCLUSTER_SP_ID --role "<reader_role>" \ 1
  --scope USDDES_RESOURCE_ID -o jsonc

1 Specifies an Azure role with read permissions to the disk encryption set. You can use the Owner role or a custom role with the necessary permissions.
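When you later create the install-config.yaml file, you can point a machine pool at this disk encryption set under its osDisk settings. A minimal sketch, assuming the resource group, set name, and subscription ID values used above; the full sample install-config.yaml later in this chapter shows the same keys in context:

controlPlane:
  platform:
    azure:
      osDisk:
        diskEncryptionSet:
          resourceGroup: <resource_group>
          name: <disk_encryption_set_name>
          subscriptionId: <subscription_id>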
Next steps

Install an OpenShift Container Platform cluster:

Install a cluster with customizations on installer-provisioned infrastructure
Install a cluster with network customizations on installer-provisioned infrastructure
Install a cluster into an existing VNet on installer-provisioned infrastructure
Install a private cluster on installer-provisioned infrastructure
Install a cluster into a government region on installer-provisioned infrastructure

3.2. Installing a cluster on Azure

You can install a cluster on Microsoft Azure that uses the default configuration options.

3.2.1. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important
You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

You have configured an account with the cloud platform that hosts your cluster.
You have the OpenShift Container Platform installation program and the pull secret for your cluster.
You have an Azure subscription ID and tenant ID.
You have the application ID and password of a service principal.

Procedure

Optional: If you have run the installation program on this computer before, and want to use an alternative service principal, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation.

Change to the directory that contains the installation program and initialize the cluster deployment:

USD ./openshift-install create cluster --dir <installation_directory> \ 1
  --log-level=info 2

1 For <installation_directory>, specify the directory name to store the files that the installation program creates.
2 To view different installation details, specify warn, debug, or error instead of info.

When specifying the directory:

Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

Provide values at the prompts:

Optional: Select an SSH key to use to access your cluster machines.

Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

Select azure as the platform to target.

If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values.

Specify the following Azure parameter values for your subscription and service principal:

azure subscription id: Enter the subscription ID to use for the cluster.
azure tenant id: Enter the tenant ID.
azure service principal client id: Enter its application ID.
azure service principal client secret: Enter its password.

Select the region to deploy the cluster to.

Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
Enter a descriptive name for your cluster.

Important
All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.

Paste the pull secret from Red Hat OpenShift Cluster Manager.

If not previously detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform.

Verification

When the cluster deployment completes successfully:

The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
Credential information also outputs to <installation_directory>/.openshift_install.log.

Important
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s

Important
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

3.2.2. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.
You installed the oc CLI.

Procedure

Export the kubeadmin credentials:

USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

Verify you can run oc commands successfully using the exported configuration:

USD oc whoami

Example output

system:admin

Additional resources

For more information about accessing and understanding the OpenShift Container Platform web console, see Accessing the web console.

3.2.3. Next steps

Customize your cluster.
If necessary, you can opt out of remote health reporting.

3.3. Installing a cluster on Azure with customizations
You can install a customized cluster on infrastructure that the installation program provisions on Microsoft Azure. To customize the installation, modify parameters in the install-config.yaml file before you install the cluster.

3.3.1. Using the Azure Marketplace offering

Using the Azure Marketplace offering lets you deploy an OpenShift Container Platform cluster, which is billed on a pay-per-use basis (hourly, per core) through Azure, while still being supported directly by Red Hat.

To deploy an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker or control plane nodes. When obtaining your image, consider the following:

While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher.
The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image.

Important
Installing images with the Azure Marketplace is not supported on clusters with 64-bit ARM instances.

Prerequisites

You have installed the Azure CLI client (az).
Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client.

Procedure

Display all of the available OpenShift Container Platform images by running one of the following commands:

North America:

USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table

Example output

Offer          Publisher  Sku                 Urn                                                       Version
-------------  ---------  ------------------  --------------------------------------------------------  ---------------
rh-ocp-worker  RedHat     rh-ocp-worker       RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409        4.15.2024072409
rh-ocp-worker  RedHat     rh-ocp-worker-gen1  RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409   4.15.2024072409

EMEA:

USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table

Example output

Offer          Publisher       Sku                 Urn                                                                Version
-------------  --------------  ------------------  -----------------------------------------------------------------  ---------------
rh-ocp-worker  redhat-limited  rh-ocp-worker       redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409         4.15.2024072409
rh-ocp-worker  redhat-limited  rh-ocp-worker-gen1  redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409    4.15.2024072409

Note
Use the latest image that is available for compute and control plane nodes. If required, your VMs are automatically upgraded as part of the installation process.
Inspect the image for your offer by running one of the following commands:

North America:

USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>

EMEA:

USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>

Review the terms of the offer by running one of the following commands:

North America:

USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>

EMEA:

USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>

Accept the terms of the offering by running one of the following commands:

North America:

USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>

EMEA:

USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>

Record the image details of your offer. You must update the compute section in the install-config.yaml file with values for publisher, offer, sku, and version before deploying the cluster. You may also update the controlPlane section to deploy control plane machines with the specified image details, or the defaultMachinePlatform section to deploy both control plane and compute machines with the specified image details. Use the latest available image for control plane and compute nodes.

Sample install-config.yaml file with the Azure Marketplace compute nodes

apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    azure:
      type: Standard_D4s_v5
      osImage:
        publisher: redhat
        offer: rh-ocp-worker
        sku: rh-ocp-worker
        version: 413.92.2023101700
  replicas: 3

3.3.2. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.

Prerequisites

You have the OpenShift Container Platform installation program and the pull secret for your cluster.
You have an Azure subscription ID and tenant ID.
If you are installing the cluster using a service principal, you have its application ID and password.
If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from.
If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites:
You have its client ID.
You have assigned it to the virtual machine that you will run the installation program from.

Procedure

Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation.

Create the install-config.yaml file.

Change to the directory that contains the installation program and run the following command:

USD ./openshift-install create install-config --dir <installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

When specifying the directory:

Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory.
However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

At the prompts, provide the configuration details for your cloud:

Optional: Select an SSH key to use to access your cluster machines.

Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

Select azure as the platform to target.

If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values.

Enter the following Azure parameter values for your subscription:

azure subscription id: Enter the subscription ID to use for the cluster.
azure tenant id: Enter the tenant ID.

Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id:

If you are using a service principal, enter its application ID.
If you are using a system-assigned managed identity, leave this value blank.
If you are using a user-assigned managed identity, specify its client ID.

Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret:

If you are using a service principal, enter its password.
If you are using a system-assigned managed identity, leave this value blank.
If you are using a user-assigned managed identity, leave this value blank.

Select the region to deploy the cluster to.

Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.

Enter a descriptive name for your cluster.

Important
All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.

Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.

Note
If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0. This ensures that the cluster's control plane nodes are schedulable. For more information, see "Installing a three-node cluster on Azure".

Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

If not previously detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform.

Additional resources

Installation configuration parameters for Azure

3.3.2.1. Minimum resource requirements for cluster installation

Each cluster machine must meet the following minimum requirements:

Table 3.1. Minimum resource requirements
Machine        Operating System               vCPU [1]  Virtual RAM  Storage  Input/Output Per Second (IOPS) [2]
Bootstrap      RHCOS                          4         16 GB        100 GB   300
Control plane  RHCOS                          4         16 GB        100 GB   300
Compute        RHCOS, RHEL 8.6 and later [3]  2         8 GB         100 GB   300

1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs.
2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
3. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.

Note
For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:

x86-64 architecture requires x86-64-v2 ISA
ARM64 architecture requires ARMv8.0-A ISA
IBM Power architecture requires Power 9 ISA
s390x architecture requires z14 ISA

For more information, see Architectures (RHEL documentation).

Important
You are required to use Azure virtual machines that have the premiumIO parameter set to true.

If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.

Additional resources

Optimizing storage

3.3.2.2. Tested instance types for Azure

The following Microsoft Azure instance types have been tested with OpenShift Container Platform.

Example 3.1. Machine types based on 64-bit x86 architecture
standardBasv2Family
standardBSFamily
standardBsv2Family
standardDADSv5Family
standardDASv4Family
standardDASv5Family
standardDCACCV5Family
standardDCADCCV5Family
standardDCADSv5Family
standardDCASv5Family
standardDCSv3Family
standardDCSv2Family
standardDDCSv3Family
standardDDSv4Family
standardDDSv5Family
standardDLDSv5Family
standardDLSv5Family
standardDSFamily
standardDSv2Family
standardDSv2PromoFamily
standardDSv3Family
standardDSv4Family
standardDSv5Family
standardEADSv5Family
standardEASv4Family
standardEASv5Family
standardEBDSv5Family
standardEBSv5Family
standardECACCV5Family
standardECADCCV5Family
standardECADSv5Family
standardECASv5Family
standardEDSv4Family
standardEDSv5Family
standardEIADSv5Family
standardEIASv4Family
standardEIASv5Family
standardEIBDSv5Family
standardEIBSv5Family
standardEIDSv5Family
standardEISv3Family
standardEISv5Family
standardESv3Family
standardESv4Family
standardESv5Family
standardFXMDVSFamily
standardFSFamily
standardFSv2Family
standardGSFamily
standardHBrsv2Family
standardHBSFamily
standardHBv4Family
standardHCSFamily
standardHXFamily
standardLASv3Family
standardLSFamily
standardLSv2Family
standardLSv3Family
standardMDSHighMemoryv3Family
standardMDSMediumMemoryv2Family
standardMDSMediumMemoryv3Family
standardMIDSHighMemoryv3Family
standardMIDSMediumMemoryv2Family
standardMISHighMemoryv3Family
standardMISMediumMemoryv2Family
standardMSFamily
standardMSHighMemoryv3Family
standardMSMediumMemoryv2Family
standardMSMediumMemoryv3Family
StandardNCADSA100v4Family
Standard NCASv3_T4 Family
standardNCSv3Family
standardNDSv2Family
StandardNGADSV620v1Family
standardNPSFamily
StandardNVADSA10v5Family
standardNVSv3Family
standardXEISv4Family

3.3.2.3. Tested instance types for Azure on 64-bit ARM infrastructures

The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform.

Example 3.2. Machine types based on 64-bit ARM architecture

standardBpsv2Family
standardDPSv5Family
standardDPDSv5Family
standardDPLDSv5Family
standardDPLSv5Family
standardEPSv5Family
standardEPDSv5Family
StandardDpdsv6Family
StandardDpldsv6Family
StandardDplsv6Family
StandardDpsv6Family
StandardEpdsv6Family
StandardEpsv6Family

3.3.2.4. Enabling trusted launch for Azure VMs

You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules. For more information about the sizes of virtual machines that support the trusted launch features, see Virtual machine sizes.

Important
Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

You have created an install-config.yaml file.
Procedure

Edit the install-config.yaml file before deploying your cluster:

Enable trusted launch only on control plane nodes by adding the following stanza:

controlPlane:
  platform:
    azure:
      settings:
        securityType: TrustedLaunch
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled

Enable trusted launch only on compute nodes by adding the following stanza:

compute:
  platform:
    azure:
      settings:
        securityType: TrustedLaunch
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled

Enable trusted launch on all nodes by adding the following stanza:

platform:
  azure:
    settings:
      securityType: TrustedLaunch
      trustedLaunch:
        uefiSettings:
          secureBoot: Enabled
          virtualizedTrustedPlatformModule: Enabled

3.3.2.5. Enabling confidential VMs

You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes.

Important
Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

You can use confidential VMs with the following VM sizes:

DCasv5-series
DCadsv5-series
ECasv5-series
ECadsv5-series

Important
Confidential VMs are currently not supported on 64-bit ARM architectures.

Prerequisites

You have created an install-config.yaml file.

Procedure

Edit the install-config.yaml file before deploying your cluster:

Enable confidential VMs only on control plane nodes by adding the following stanza:

controlPlane:
  platform:
    azure:
      settings:
        securityType: ConfidentialVM
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly

Enable confidential VMs only on compute nodes by adding the following stanza:

compute:
  platform:
    azure:
      settings:
        securityType: ConfidentialVM
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly

Enable confidential VMs on all nodes by adding the following stanza:

platform:
  azure:
    settings:
      securityType: ConfidentialVM
      confidentialVM:
        uefiSettings:
          secureBoot: Enabled
          virtualizedTrustedPlatformModule: Enabled
    osDisk:
      securityProfile:
        securityEncryptionType: VMGuestStateOnly

3.3.2.6. Sample customized install-config.yaml file for Azure

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

Important
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      encryptionAtHost: true
      ultraSSDCapability: Enabled
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7 8
  name: worker
  platform:
    azure:
      ultraSSDCapability: Enabled
      type: Standard_D2s_v3
      encryptionAtHost: true
      osDisk:
        diskSizeGB: 512 9
        diskType: Standard_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      zones: 10
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 11
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 12
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    defaultMachinePlatform:
      osImage: 13
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      ultraSSDCapability: Enabled
    baseDomainResourceGroupName: resource_group 14
    region: centralus 15
    resourceGroupName: existing_resource_group 16
    outboundType: Loadbalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 17
fips: false 18
sshKey: ssh-ed25519 AAAA... 19

1 11 15 17 Required. The installation program prompts you for this value.
2 6 If you do not provide these parameters and values, the installation program provides the default value.
3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
4 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

Important
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.

5 9 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB.
10 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
12 The cluster network plugin to install. The default value OVNKubernetes is the only supported value.
13 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher, offer, sku, and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines.
If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters.

14 Specify the name of the resource group that contains the DNS zone for your base domain.
16 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
18 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important
To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.

19 You can optionally provide the sshKey value that you use to access the machines in your cluster.

Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

3.3.2.7. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

Note
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only.
For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.

Note
The installation program does not support the proxy readinessEndpoints field.

Note
If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

USD ./openshift-install wait-for install-complete --log-level debug

Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Note
Only the Proxy object named cluster is supported, and no additional proxies can be created.

Additional resources

For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs.

3.3.3. Configuring user-defined tags for Azure

In OpenShift Container Platform, you can use tags for grouping resources and for managing resource access and cost. Tags are applied only to the resources created by the OpenShift Container Platform installation program and its core Operators, such as the Machine API Operator, the Cluster Ingress Operator, and the Cluster Image Registry Operator.

OpenShift Container Platform uses the following types of tags:

OpenShift Container Platform tags
By default, the OpenShift Container Platform installation program attaches the OpenShift Container Platform tags to the Azure resources. These OpenShift Container Platform tags are not accessible to the users. The format of the OpenShift Container Platform tags is kubernetes.io_cluster.<cluster_id>:owned, where <cluster_id> is the value of .status.infrastructureName in the infrastructure resource for the cluster.

User-defined tags
User-defined tags are manually created in the install-config.yaml file during installation. When creating the user-defined tags, you must consider the following points:

User-defined tags on Azure resources can only be defined during OpenShift Container Platform cluster creation, and cannot be modified after the cluster is created.
Support for user-defined tags is available only for the resources created in the Azure Public Cloud.
User-defined tags are not supported for the OpenShift Container Platform clusters upgraded to OpenShift Container Platform 4.18.

3.3.3.1. Creating user-defined tags for Azure
To define the list of user-defined tags, edit the .platform.azure.userTags field in the install-config.yaml file.

Procedure

Specify the .platform.azure.userTags field as shown in the following install-config.yaml file:

apiVersion: v1
baseDomain: example.com
#...
platform:
  azure:
    userTags: 1
      <key>: <value> 2
#...

1 Defines the additional keys and values that the installation program adds as tags to all Azure resources that it creates.
2 Specify the key and value. You can configure a maximum of 10 tags for resource group and resources. Tag keys are case-insensitive. For more information on requirements for specifying user-defined tags, see the "User-defined tags requirements" section.

Example install-config.yaml file

apiVersion: v1
baseDomain: example.com
#...
platform:
  azure:
    userTags:
      createdBy: user
      environment: dev
#...

Verification

Access the list of created user-defined tags for the Azure resources by running the following command:

USD oc get infrastructures.config.openshift.io cluster -o=jsonpath-as-json='{.status.platformStatus.azure.resourceTags}'

Example output

[
    [
        {
            "key": "createdBy",
            "value": "user"
        },
        {
            "key": "environment",
            "value": "dev"
        }
    ]
]

3.3.3.2. User-defined tags requirements

The user-defined tags have the following requirements:

A tag key must have a maximum of 128 characters.
A tag key must begin with a letter.
A tag key must end with a letter, number or underscore.
A tag key must contain only letters, numbers, underscores (_), periods (.), and hyphens (-).
A tag key must not be specified as name.
A tag key must not have the following prefixes:
kubernetes.io
openshift.io
microsoft
azure
windows
A tag value must have a maximum of 256 characters.

For more information about Azure tags, see Azure user-defined tags.

3.3.4. Alternatives to storing administrator-level secrets in the kube-system project

By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual, you must use one of the following alternatives:

To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials.
To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials.

3.3.4.1. Manually creating long-term credentials

The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.

Procedure

If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:

Sample configuration file snippet

apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...

If you have not previously created installation manifest files, do so by running the following command:

USD openshift-install create manifests --dir <installation_directory>

where <installation_directory> is the directory in which the installation program creates files.
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command:

USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')

Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:

USD oc adm release extract \
  --from=USDRELEASE_IMAGE \
  --credentials-requests \
  --included \ 1
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
  --to=<path_to_directory_for_credentials_requests> 3

1 The --included parameter includes only the manifests that your specific cluster configuration requires.
2 Specify the location of the install-config.yaml file.
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.

This command creates a YAML file for each CredentialsRequest object.

Sample CredentialsRequest object

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...

Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.

Sample CredentialsRequest object with secrets

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...

Sample Secret object

apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  azure_subscription_id: <base64_encoded_azure_subscription_id>
  azure_client_id: <base64_encoded_azure_client_id>
  azure_client_secret: <base64_encoded_azure_client_secret>
  azure_tenant_id: <base64_encoded_azure_tenant_id>
  azure_resource_prefix: <base64_encoded_azure_resource_prefix>
  azure_resourcegroup: <base64_encoded_azure_resourcegroup>
  azure_region: <base64_encoded_azure_region>

Important
Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.

3.3.4.2. Configuring an Azure cluster to use short-term credentials

To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster.

3.3.4.2.1. Configuring the Cloud Credential Operator utility

To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl) binary.

Note
The ccoctl utility is a Linux binary that must run in a Linux environment.

Prerequisites

You have access to an OpenShift Container Platform account with cluster administrator access.
You have installed the OpenShift CLI (oc).
You have created a global Azure account for the ccoctl utility to use with the following permissions:

Microsoft.Resources/subscriptions/resourceGroups/read
Microsoft.Resources/subscriptions/resourceGroups/write
Microsoft.Resources/subscriptions/resourceGroups/delete
Microsoft.Authorization/roleAssignments/read
Microsoft.Authorization/roleAssignments/delete
Microsoft.Authorization/roleAssignments/write
Microsoft.Authorization/roleDefinitions/read
Microsoft.Authorization/roleDefinitions/write
Microsoft.Authorization/roleDefinitions/delete
Microsoft.Storage/storageAccounts/listkeys/action
Microsoft.Storage/storageAccounts/delete
Microsoft.Storage/storageAccounts/read
Microsoft.Storage/storageAccounts/write
Microsoft.Storage/storageAccounts/blobServices/containers/delete
Microsoft.Storage/storageAccounts/blobServices/containers/read
Microsoft.Storage/storageAccounts/blobServices/containers/write
Microsoft.ManagedIdentity/userAssignedIdentities/delete
Microsoft.ManagedIdentity/userAssignedIdentities/read
Microsoft.ManagedIdentity/userAssignedIdentities/write
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete
Microsoft.Storage/register/action
Microsoft.ManagedIdentity/register/action

Procedure

Set a variable for the OpenShift Container Platform release image by running the following command:

USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')

Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:

USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)

Note
Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.

Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:

USD oc image extract USDCCO_IMAGE \
  --file="/usr/bin/ccoctl.<rhel_version>" \ 1
  -a ~/.pull-secret

1 For <rhel_version>, specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid:

rhel8: Specify this value for hosts that use RHEL 8.
rhel9: Specify this value for hosts that use RHEL 9.

Change the permissions to make ccoctl executable by running the following command:

USD chmod 775 ccoctl.<rhel_version>

Verification

To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:

USD ./ccoctl.rhel9

Example output

OpenShift credentials provisioning tool

Usage:
  ccoctl [command]

Available Commands:
  aws          Manage credentials objects for AWS cloud
  azure        Manage credentials objects for Azure
  gcp          Manage credentials objects for Google cloud
  help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
  nutanix      Manage credentials objects for Nutanix

Flags:
  -h, --help   help for ccoctl

Use "ccoctl [command] --help" for more information about a command.

3.3.4.2.2. Creating Azure resources with the Cloud Credential Operator utility

You can use the ccoctl azure create-all command to automate the creation of Azure resources.

Note
By default, ccoctl creates objects in the directory in which the commands are run.
To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.

Prerequisites

You must have:

Extracted and prepared the ccoctl binary.
Access to your Microsoft Azure account by using the Azure CLI.

Procedure

Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command:

USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')

Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:

USD oc adm release extract \
  --from=USDRELEASE_IMAGE \
  --credentials-requests \
  --included \ 1
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
  --to=<path_to_directory_for_credentials_requests> 3

1 The --included parameter includes only the manifests that your specific cluster configuration requires.
2 Specify the location of the install-config.yaml file.
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.

Note
This command might take a few moments to run.

To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command:

USD az login

Use the ccoctl tool to process all CredentialsRequest objects by running the following command:

USD ccoctl azure create-all \
  --name=<azure_infra_name> \ 1
  --output-dir=<ccoctl_output_dir> \ 2
  --region=<azure_region> \ 3
  --subscription-id=<azure_subscription_id> \ 4
  --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5
  --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6
  --tenant-id=<azure_tenant_id> 7

1 Specify the user-defined name for all created Azure resources used for tracking.
2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
3 Specify the Azure region in which cloud resources will be created.
4 Specify the Azure subscription ID to use.
5 Specify the directory containing the files for the component CredentialsRequest objects.
6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone.
7 Specify the Azure tenant ID to use.

Note
If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.

To see additional optional parameters and explanations of how to use them, run the azure create-all --help command.
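For illustration only, a fully specified invocation might look like the following; every concrete value here (the cluster name, output directory, and region) is a hypothetical placeholder for your own environment, and the flags are the same ones documented above:

USD ccoctl azure create-all \
  --name=mycluster \
  --output-dir=./ccoctl-output \
  --region=centralus \
  --subscription-id=<azure_subscription_id> \
  --credentials-requests-dir=<path_to_directory_for_credentials_requests> \
  --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \
  --tenant-id=<azure_tenant_id>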
Verification

To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:

USD ls <path_to_ccoctl_output_dir>/manifests

Example output

azure-ad-pod-identity-webhook-config.yaml
cluster-authentication-02-config.yaml
openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml
openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-azure-cloud-credentials-credentials.yaml

You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts.

3.3.4.2.3. Incorporating the Cloud Credential Operator utility manifests

To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility (ccoctl) created to the correct directories for the installation program.

Prerequisites

You have configured an account with the cloud platform that hosts your cluster.
You have configured the Cloud Credential Operator utility (ccoctl).
You have created the cloud provider resources that are required for your cluster with the ccoctl utility.

Procedure

If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:

Sample configuration file snippet

apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...

If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown:

Sample configuration file snippet

apiVersion: v1
baseDomain: example.com
# ...
platform:
  azure:
    resourceGroupName: <azure_infra_name> 1
# ...

1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command.

If you have not previously created installation manifest files, do so by running the following command:

USD openshift-install create manifests --dir <installation_directory>

where <installation_directory> is the directory in which the installation program creates files.

Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command:

USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/

Copy the tls directory that contains the private key to the installation directory:

USD cp -a /<path_to_ccoctl_output_dir>/tls .

3.3.5. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important
You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

You have configured an account with the cloud platform that hosts your cluster.
You have the OpenShift Container Platform installation program and the pull secret for your cluster.
You have an Azure subscription ID and tenant ID.
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.3.6. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.3.7. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 3.4. Installing a cluster on Azure with network customizations In OpenShift Container Platform version 4.18, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Microsoft Azure.
By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 3.4.1. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank.
If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Additional resources Installation configuration parameters for Azure 3.4.1.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 3.4.1.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 3.3. Machine types based on 64-bit x86 architecture standardBasv2Family standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSHighMemoryv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSHighMemoryv3Family standardMIDSMediumMemoryv2Family standardMISHighMemoryv3Family standardMISMediumMemoryv2Family standardMSFamily standardMSHighMemoryv3Family standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family StandardNGADSV620v1Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 3.4.1.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 3.4. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family StandardDpdsv6Family StandardDpldsv6Family StandardDplsv6Family StandardDpsv6Family StandardEpdsv6Family StandardEpsv6Family 3.4.1.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . For more information about the sizes of virtual machines that support the trusted launch features, see Virtual machine sizes . Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Edit the install-config.yaml file before deploying your cluster: Enable trusted launch only on control plane by adding the following stanza: controlPlane: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled Enable trusted launch only on compute node by adding the following stanza: compute: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled Enable trusted launch on all nodes by adding the following stanza: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled 3.4.1.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Edit the install-config.yaml file before deploying your cluster: Enable confidential VMs only on control plane by adding the following stanza: controlPlane: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly Enable confidential VMs only on compute nodes by adding the following stanza: compute: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly Enable confidential VMs on all nodes by adding the following stanza: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 3.4.1.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 8 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 9 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 10 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 11 networking: 12 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 14 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 15 region: centralus 16 resourceGroupName: existing_resource_group 17 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 18 fips: false 19 sshKey: ssh-ed25519 AAAA... 20 1 11 16 18 Required. The installation program prompts you for this value. 2 6 12 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 8 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 9 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 10 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 14 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. 
If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 15 Specify the name of the resource group that contains the DNS zone for your base domain. 17 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 19 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 20 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 3.4.1.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. 
For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.4.2. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork nodeNetworking For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 3.4.3. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. 
Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute MachineSets : USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 3.4.4. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 3.4.4.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 3.3. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. 
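Putting the previous steps together, a populated cluster-network-03-config.yml might look like the following minimal sketch, which pins the Geneve overlay MTU. The mtu value of 1400 is an assumed example for a mixed-MTU cluster whose lowest node MTU is 1500, and genevePort: 6081 simply restates the default for clarity; both fields are described in the tables in the next section.

cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      # Assumed example: 100 less than the lowest node MTU of 1500
      mtu: 1400
      # Default Geneve port, shown explicitly
      genevePort: 6081
EOF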
spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 3.4. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 3.5. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 3.6. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 .
internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 3.7. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 3.8. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 3.9. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. 
For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . Note The default value of Restricted sets the IP forwarding to drop. ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 3.10. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Important For OpenShift Container Platform 4.17 and later versions, clusters use 169.254.0.0/17 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 3.11. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Important For OpenShift Container Platform 4.17 and later versions, clusters use fd69::/112 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 3.12. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full 3.4.5. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations. Note This configuration is necessary to run both Linux and Windows nodes in the same cluster. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. 
Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Note For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 3.4.6. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 3.4.6.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.4.6.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 3.4.6.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). 
You have created a global Azure account for the ccoctl utility to use with the following permissions: Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 3.4.6.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run.
To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. 
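Before moving on to verifying the generated secrets, you can optionally spot-check the Azure side with the Azure CLI. The following sketch assumes that the az login session from the earlier step is still active and that you passed --name=mycluster to ccoctl; it lists the user-assigned managed identities whose names contain that prefix.

# Hypothetical spot check; change 'mycluster' to match your --name value.
az identity list \
  --query "[?contains(name, 'mycluster')].{Name:name, ResourceGroup:resourceGroup}" \
  --output table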
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 3.4.6.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.4.7. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. 
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.4.8. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.4.9. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 3.5. Installing a cluster on Azure in a disconnected environment In OpenShift Container Platform version 4.18, you can install a cluster on Microsoft Azure in a restricted network by creating an internal mirror of the installation release content on an existing Azure Virtual Network (VNet).
Important You can install an OpenShift Container Platform cluster by using mirrored installation release content, but your cluster requires internet access to use the Azure APIs. 3.5.1. Prerequisites You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VNet in Azure. While installing a cluster in a restricted network that uses installer-provisioned infrastructure, you cannot use the installer-provisioned VNet. You must use a user-provisioned VNet that satisfies one of the following requirements: The VNet contains the mirror registry. The VNet has firewall rules or a peering connection to access the mirror registry hosted elsewhere. 3.5.2. About installations in restricted networks In OpenShift Container Platform 4.18, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 3.5.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 3.5.2.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. A Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless you are using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section.
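As a minimal sketch, the install-config.yaml parameters that enable user-defined routing look like the following excerpt. The outboundType and publish field names are the documented parameters; the rest of the file is omitted here, and publish: Internal applies when you restrict internet access with Azure Firewall, as described next:
platform:
  azure:
    outboundType: UserDefinedRouting
# ...
publish: Internal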
There are several pre-existing networking setups that are supported for internet access using user-defined routing. Restricted cluster with Azure Firewall You can use Azure Firewall to restrict the outbound routing for the Virtual Network (VNet) that is used to install the OpenShift Container Platform cluster. For more information, see providing user-defined routing with Azure Firewall . You can create an OpenShift Container Platform cluster in a restricted network by using a VNet with Azure Firewall and configuring user-defined routing. Important If you are using Azure Firewall for restricting internet access, you must set the publish field to Internal in the install-config.yaml file. This is because Azure Firewall does not work properly with Azure public load balancers . 3.5.3. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.18, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 3.5.3.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.
Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 3.5.3.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 3.13. Required ports
Port | Description | Control plane | Compute
80 | Allows HTTP traffic | | x
443 | Allows HTTPS traffic | | x
6443 | Allows communication to the control plane machines | x |
22623 | Allows internal communication to the machine config server for provisioning machines | x |
* | Allows connections to Azure APIs. You must set a Destination Service Tag to AzureCloud . [1] | x | x
* | Denies connections to the internet. You must set a Destination Service Tag to Internet . [1] | x | x
If you are using Azure Firewall to restrict internet access, you can configure Azure Firewall to allow the Azure APIs. A network security group rule is not needed. For more information, see "Configuring your firewall" in "Additional resources". Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 3.14. Ports used for all-machine to all-machine communications
Protocol | Port | Description
ICMP | N/A | Network reachability tests
TCP | 1936 | Metrics
TCP | 9000 - 9999 | Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 .
TCP | 10250 - 10259 | The default ports that Kubernetes reserves
UDP | 4789 | VXLAN
UDP | 6081 | Geneve
UDP | 9000 - 9999 | Host level services, including the node exporter on ports 9100 - 9101 .
UDP | 500 | IPsec IKE packets
UDP | 4500 | IPsec NAT-T packets
UDP | 123 | Network Time Protocol (NTP) on UDP port 123 . If you configure an external NTP time server, you must open UDP port 123 .
TCP/UDP | 30000 - 32767 | Kubernetes node port
ESP | N/A | IPsec Encapsulating Security Payload (ESP)
Table 3.15. Ports used for control plane machine to control plane machine communications
Protocol | Port | Description
TCP | 2379 - 2380 | etcd server and peer ports
3.5.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 3.5.3.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. Additional resources About the OVN-Kubernetes network plugin Configuring your firewall 3.5.4. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file.
Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network.
Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value.
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VNet to install the cluster under the platform.azure field:
networkResourceGroupName: <vnet_resource_group> 1
virtualNetwork: <vnet> 2
controlPlaneSubnet: <control_plane_subnet> 3
computeSubnet: <compute_subnet> 4
1 Replace <vnet_resource_group> with the resource group name that contains the existing virtual network (VNet). 2 Replace <vnet> with the existing virtual network name. 3 Replace <control_plane_subnet> with the existing subnet name to deploy the control plane machines. 4 Replace <compute_subnet> with the existing subnet name to deploy compute machines. Add the image content resources, which resemble the following YAML excerpt:
imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: registry.redhat.io/ocp/release
For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Important Azure Firewall does not work seamlessly with Azure Public Load balancers. Thus, when using Azure Firewall for restricting internet access, the publish field in install-config.yaml should be set to Internal . Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If not previously detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Additional resources Installation configuration parameters for Azure 3.5.4.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.16. Minimum resource requirements
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS) [2]
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300
Compute | RHCOS, RHEL 8.6 and later [3] | 2 | 8 GB | 100 GB | 300
One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled.
When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 3.5.4.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 3.5. Machine types based on 64-bit x86 architecture standardBasv2Family standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSHighMemoryv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSHighMemoryv3Family standardMIDSMediumMemoryv2Family standardMISHighMemoryv3Family standardMISMediumMemoryv2Family standardMSFamily standardMSHighMemoryv3Family standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family StandardNGADSV620v1Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 3.5.4.3. 
Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 3.6. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family StandardDpdsv6Family StandardDpldsv6Family StandardDplsv6Family StandardDpsv6Family StandardEpdsv6Family StandardEpsv6Family 3.5.4.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . For more information about the sizes of virtual machines that support the trusted launch features, see Virtual machine sizes . Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Edit the install-config.yaml file before deploying your cluster: Enable trusted launch only on the control plane by adding the following stanza:
controlPlane:
  platform:
    azure:
      settings:
        securityType: TrustedLaunch
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
Enable trusted launch only on compute nodes by adding the following stanza:
compute:
  platform:
    azure:
      settings:
        securityType: TrustedLaunch
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
Enable trusted launch on all nodes by adding the following stanza:
platform:
  azure:
    settings:
      securityType: TrustedLaunch
      trustedLaunch:
        uefiSettings:
          secureBoot: Enabled
          virtualizedTrustedPlatformModule: Enabled
3.5.4.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file.
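If you want to check which of these size families are available in your target region before you edit the configuration, the Azure CLI can list matching SKUs. The following query is one possible approach, not a required step; the centralus region is a placeholder, and you can change the Standard_DC prefix to Standard_EC to inspect the ECasv5 and ECadsv5 families:
USD az vm list-skus --location centralus --size Standard_DC --all --output table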
Procedure Edit the install-config.yaml file before deploying your cluster: Enable confidential VMs only on the control plane by adding the following stanza:
controlPlane:
  platform:
    azure:
      settings:
        securityType: ConfidentialVM
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly
Enable confidential VMs only on compute nodes by adding the following stanza:
compute:
  platform:
    azure:
      settings:
        securityType: ConfidentialVM
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly
Enable confidential VMs on all nodes by adding the following stanza:
platform:
  azure:
    settings:
      securityType: ConfidentialVM
      confidentialVM:
        uefiSettings:
          secureBoot: Enabled
          virtualizedTrustedPlatformModule: Enabled
    osDisk:
      securityProfile:
        securityEncryptionType: VMGuestStateOnly
3.5.4.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      encryptionAtHost: true
      ultraSSDCapability: Enabled
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7 8
  name: worker
  platform:
    azure:
      ultraSSDCapability: Enabled
      type: Standard_D2s_v3
      encryptionAtHost: true
      osDisk:
        diskSizeGB: 512 9
        diskType: Standard_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      zones: 10
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 11
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 12
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    defaultMachinePlatform:
      osImage: 13
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      ultraSSDCapability: Enabled
    baseDomainResourceGroupName: resource_group 14
    region: centralus 15
    resourceGroupName: existing_resource_group 16
    networkResourceGroupName: vnet_resource_group 17
    virtualNetwork: vnet 18
    controlPlaneSubnet: control_plane_subnet 19
    computeSubnet: compute_subnet 20
    outboundType: UserDefinedRouting 21
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 22
fips: false 23
sshKey: ssh-ed25519 AAAA... 24
additionalTrustBundle: | 25
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
imageContentSources: 26
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
publish: Internal 27
1 11 15 22 Required. The installation program prompts you for this value.
2 6 If you do not provide these parameters and values, the installation program provides the default value.
3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used.
4 8 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading.
5 9 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB.
10 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
12 The cluster network plugin to install. The default value OVNKubernetes is the only supported value.
13 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters.
14 Specify the name of the resource group that contains the DNS zone for your base domain.
16 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
17 If you use an existing VNet, specify the name of the resource group that contains it.
18 If you use an existing VNet, specify its name.
19 If you use an existing VNet, specify the name of the subnet to host the control plane machines.
20 If you use an existing VNet, specify the name of the subnet to host the compute machines.
21 When using Azure Firewall to restrict Internet access, you must configure outbound routing to send traffic through the Azure Firewall. Configuring user-defined routing prevents exposing external endpoints in your cluster.
23 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.
24 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
25 Provide the contents of the certificate file that you used for your mirror registry.
26 Provide the imageContentSources section from the output of the command to mirror the repository.
27 How to publish the user-facing endpoints of your cluster. When using Azure Firewall to restrict Internet access, set publish to Internal to deploy a private cluster. The user-facing endpoints then cannot be accessed from the internet. The default value is External .
3.5.4.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http .
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations.
4 If provided, the installation program generates a config map named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly .
Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.5.5. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 3.5.5.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files.
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...
Sample Secret object
apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  azure_subscription_id: <base64_encoded_azure_subscription_id>
  azure_client_id: <base64_encoded_azure_client_id>
  azure_client_secret: <base64_encoded_azure_client_secret>
  azure_tenant_id: <base64_encoded_azure_tenant_id>
  azure_resource_prefix: <base64_encoded_azure_resource_prefix>
  azure_resourcegroup: <base64_encoded_azure_resourcegroup>
  azure_region: <base64_encoded_azure_region>
Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 3.5.5.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 3.5.5.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ).
You have created a global Azure account for the ccoctl utility to use with the following permissions:
Microsoft.Resources/subscriptions/resourceGroups/read
Microsoft.Resources/subscriptions/resourceGroups/write
Microsoft.Resources/subscriptions/resourceGroups/delete
Microsoft.Authorization/roleAssignments/read
Microsoft.Authorization/roleAssignments/delete
Microsoft.Authorization/roleAssignments/write
Microsoft.Authorization/roleDefinitions/read
Microsoft.Authorization/roleDefinitions/write
Microsoft.Authorization/roleDefinitions/delete
Microsoft.Storage/storageAccounts/listkeys/action
Microsoft.Storage/storageAccounts/delete
Microsoft.Storage/storageAccounts/read
Microsoft.Storage/storageAccounts/write
Microsoft.Storage/storageAccounts/blobServices/containers/delete
Microsoft.Storage/storageAccounts/blobServices/containers/read
Microsoft.Storage/storageAccounts/blobServices/containers/write
Microsoft.ManagedIdentity/userAssignedIdentities/delete
Microsoft.ManagedIdentity/userAssignedIdentities/read
Microsoft.ManagedIdentity/userAssignedIdentities/write
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete
Microsoft.Storage/register/action
Microsoft.ManagedIdentity/register/action
Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output
OpenShift credentials provisioning tool

Usage:
  ccoctl [command]

Available Commands:
  aws          Manage credentials objects for AWS cloud
  azure        Manage credentials objects for Azure
  gcp          Manage credentials objects for Google cloud
  help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
  nutanix      Manage credentials objects for Nutanix

Flags:
  -h, --help   help for ccoctl

Use "ccoctl [command] --help" for more information about a command.
3.5.5.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run.
To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. 
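After create-all completes, you can optionally confirm in Azure that the expected user-assigned managed identities exist. The following query is one possible check; it assumes that you passed the hypothetical value myocp as the --name argument:
USD az identity list --query "[?starts_with(name, 'myocp')].name" --output table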
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output
azure-ad-pod-identity-webhook-config.yaml
cluster-authentication-02-config.yaml
openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml
openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-azure-cloud-credentials-credentials.yaml
You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 3.5.5.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
# ...
platform:
  azure:
    resourceGroupName: <azure_infra_name> 1
# ...
1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 3.5.6. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID.
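Before you initialize the deployment, you can optionally confirm that the manifests and the tls directory that the ccoctl utility generated are present in your installation directory. The following commands are an illustrative check, not a required step:
USD ls <installation_directory>/manifests | grep credentials
USD ls <installation_directory>/tls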
Procedure

Change to the directory that contains the installation program and initialize the cluster deployment:

$ ./openshift-install create cluster --dir <installation_directory> \ 1
  --log-level=info 2

1 For <installation_directory> , specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn , debug , or error instead of info .

Verification

When the cluster deployment completes successfully:

The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
Credential information also outputs to <installation_directory>/.openshift_install.log .

Important

Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s

Important

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

3.5.7. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.
You installed the oc CLI.

Procedure

Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory> , specify the path to the directory that you stored the installation files in.

Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

3.5.8. Next steps

Customize your cluster .
If necessary, you can opt out of remote health reporting .

3.6. Installing a cluster on Azure into an existing VNet

In OpenShift Container Platform version 4.18, you can install a cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

3.6.1.
About reusing a VNet for your OpenShift Container Platform cluster

In OpenShift Container Platform 4.18, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.

3.6.1.1. Requirements for using your VNet

When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet:

Subnets
Route tables
VNets
Network Security Groups

Note

The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.

If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics:

The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.
The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.
You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines.

Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.

Note

By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones.

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

All the specified subnets exist.
There are two private subnets, one for the control plane machines and one for the compute machines.
The subnet CIDRs belong to the machine CIDR that you specified.
Machines are not provisioned in availability zones that you do not provide private subnets for.
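As an illustration only, a VNet and two subnets that would satisfy these checks could be created with the Azure CLI as in the following sketch. The resource group, VNet and subnet names, location, and CIDR ranges are hypothetical values to replace with your own; note that the subnet CIDRs fall inside the VNet CIDR, which must in turn contain your machine CIDR:

$ az group create --name my-vnet-rg --location centralus
$ az network vnet create \
    --resource-group my-vnet-rg \
    --name my-vnet \
    --address-prefixes 10.0.0.0/16
$ az network vnet subnet create \
    --resource-group my-vnet-rg \
    --vnet-name my-vnet \
    --name control-plane-subnet \
    --address-prefixes 10.0.0.0/24
$ az network vnet subnet create \
    --resource-group my-vnet-rg \
    --vnet-name my-vnet \
    --name compute-subnet \
    --address-prefixes 10.0.1.0/24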
If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them.

Note

If you destroy a cluster that uses an existing VNet, the VNet is not deleted.

3.6.1.1.1. Network security group requirements

The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports.

Important

The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.

Table 3.17. Required ports
Port 80: Allows HTTP traffic. Applies to: Compute.
Port 443: Allows HTTPS traffic. Applies to: Compute.
Port 6443: Allows communication to the control plane machines. Applies to: Control plane.
Port 22623: Allows internal communication to the machine config server for provisioning machines. Applies to: Control plane.

If you are using Azure Firewall to restrict internet access, then you can configure Azure Firewall to allow the Azure APIs. A network security group rule is not needed. For more information, see "Configuring your firewall" in "Additional resources".

Important

Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies.

Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment.

Table 3.18. Ports used for all-machine to all-machine communications
ICMP (N/A): Network reachability tests
TCP 1936: Metrics
TCP 9000 - 9999: Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 .
TCP 10250 - 10259: The default ports that Kubernetes reserves
UDP 4789: VXLAN
UDP 6081: Geneve
UDP 9000 - 9999: Host level services, including the node exporter on ports 9100 - 9101 .
UDP 500: IPsec IKE packets
UDP 4500: IPsec NAT-T packets
UDP 123: Network Time Protocol (NTP). If you configure an external NTP time server, you must open UDP port 123 .
TCP/UDP 30000 - 32767: Kubernetes node port
ESP (N/A): IPsec Encapsulating Security Payload (ESP)

Table 3.19. Ports used for control plane machine to control plane machine communications
TCP 2379 - 2380: etcd server and peer ports

3.6.1.2. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster.
This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.

3.6.1.3. Isolation between clusters

Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet.

Additional resources

About the OVN-Kubernetes network plugin
Configuring your firewall

3.6.2. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.

Prerequisites

You have the OpenShift Container Platform installation program and the pull secret for your cluster.
You have an Azure subscription ID and tenant ID.
If you are installing the cluster using a service principal, you have its application ID and password.
If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from.
If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites:
You have its client ID.
You have assigned it to the virtual machine that you will run the installation program from.

Procedure

Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation.

Create the install-config.yaml file.

Change to the directory that contains the installation program and run the following command:

$ ./openshift-install create install-config --dir <installation_directory> 1

1 For <installation_directory> , specify the directory name to store the files that the installation program creates.

When specifying the directory:

Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

At the prompts, provide the configuration details for your cloud:

Optional: Select an SSH key to use to access your cluster machines.

Note

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Select azure as the platform to target.

If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values.

Enter the following Azure parameter values for your subscription:

azure subscription id : Enter the subscription ID to use for the cluster.
azure tenant id : Enter the tenant ID.

Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id :

If you are using a service principal, enter its application ID.
If you are using a system-assigned managed identity, leave this value blank.
If you are using a user-assigned managed identity, specify its client ID.

Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret :

If you are using a service principal, enter its password.
If you are using a system-assigned managed identity, leave this value blank.
If you are using a user-assigned managed identity, leave this value blank.

Select the region to deploy the cluster to.

Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.

Enter a descriptive name for your cluster.

Important

All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.

Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.

Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform.

Additional resources

Installation configuration parameters for Azure

3.6.2.1. Minimum resource requirements for cluster installation

Each cluster machine must meet the following minimum requirements:

Table 3.20. Minimum resource requirements
Bootstrap: RHCOS, 4 vCPU [1], 16 GB virtual RAM, 100 GB storage, 300 IOPS [2]
Control plane: RHCOS, 4 vCPU, 16 GB virtual RAM, 100 GB storage, 300 IOPS
Compute: RHCOS or RHEL 8.6 and later [3], 2 vCPU, 8 GB virtual RAM, 100 GB storage, 300 IOPS

One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.6.2.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 3.7. Machine types based on 64-bit x86 architecture standardBasv2Family standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSHighMemoryv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSHighMemoryv3Family standardMIDSMediumMemoryv2Family standardMISHighMemoryv3Family standardMISMediumMemoryv2Family standardMSFamily standardMSHighMemoryv3Family standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family StandardNGADSV620v1Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 3.6.2.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 3.8. 
Machine types based on 64-bit ARM architecture

standardBpsv2Family
standardDPSv5Family
standardDPDSv5Family
standardDPLDSv5Family
standardDPLSv5Family
standardEPSv5Family
standardEPDSv5Family
StandardDpdsv6Family
StandardDpldsv6Family
StandardDplsv6Family
StandardDpsv6Family
StandardEpdsv6Family
StandardEpsv6Family

3.6.2.4. Enabling trusted launch for Azure VMs

You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . For more information about the sizes of virtual machines that support the trusted launch features, see Virtual machine sizes .

Important

Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .

Prerequisites

You have created an install-config.yaml file.

Procedure

Edit the install-config.yaml file before deploying your cluster:

Enable trusted launch only on the control plane by adding the following stanza:

controlPlane:
  platform:
    azure:
      settings:
        securityType: TrustedLaunch
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled

Enable trusted launch only on compute nodes by adding the following stanza:

compute:
  platform:
    azure:
      settings:
        securityType: TrustedLaunch
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled

Enable trusted launch on all nodes by adding the following stanza:

platform:
  azure:
    settings:
      securityType: TrustedLaunch
      trustedLaunch:
        uefiSettings:
          secureBoot: Enabled
          virtualizedTrustedPlatformModule: Enabled

3.6.2.5. Enabling confidential VMs

You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes.

Important

Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .

You can use confidential VMs with the following VM sizes:

DCasv5-series
DCadsv5-series
ECasv5-series
ECadsv5-series

Important

Confidential VMs are currently not supported on 64-bit ARM architectures.

Prerequisites

You have created an install-config.yaml file.
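If you want to confirm which confidential-VM-capable sizes your region offers before editing the file, a query along the following lines may help. The region is a placeholder, and Standard_DC is a partial size name that matches the DC-series families listed above:

$ az vm list-skus --location centralus --size Standard_DC --output table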
Procedure

Edit the install-config.yaml file before deploying your cluster:

Enable confidential VMs only on the control plane by adding the following stanza:

controlPlane:
  platform:
    azure:
      settings:
        securityType: ConfidentialVM
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly

Enable confidential VMs only on compute nodes by adding the following stanza:

compute:
  platform:
    azure:
      settings:
        securityType: ConfidentialVM
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly

Enable confidential VMs on all nodes by adding the following stanza:

platform:
  azure:
    settings:
      securityType: ConfidentialVM
      confidentialVM:
        uefiSettings:
          secureBoot: Enabled
          virtualizedTrustedPlatformModule: Enabled
    osDisk:
      securityProfile:
        securityEncryptionType: VMGuestStateOnly

3.6.2.6. Sample customized install-config.yaml file for Azure

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

Important

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      encryptionAtHost: true
      ultraSSDCapability: Enabled
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7 8
  name: worker
  platform:
    azure:
      ultraSSDCapability: Enabled
      type: Standard_D2s_v3
      encryptionAtHost: true
      osDisk:
        diskSizeGB: 512 9
        diskType: Standard_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      zones: 10
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 11
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 12
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    defaultMachinePlatform:
      osImage: 13
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      ultraSSDCapability: Enabled
    baseDomainResourceGroupName: resource_group 14
    region: centralus 15
    resourceGroupName: existing_resource_group 16
    networkResourceGroupName: vnet_resource_group 17
    virtualNetwork: vnet 18
    controlPlaneSubnet: control_plane_subnet 19
    computeSubnet: compute_subnet 20
    outboundType: Loadbalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 21
fips: false 22
sshKey: ssh-ed25519 AAAA... 23

1 11 15 21 Required. The installation program prompts you for this value.
2 6 If you do not provide these parameters and values, the installation program provides the default value.
3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings.
To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 8 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 9 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 10 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 12 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 13 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 14 Specify the name of the resource group that contains the DNS zone for your base domain. 16 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 17 If you use an existing VNet, specify the name of the resource group that contains it. 18 If you use an existing VNet, specify its name. 19 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 20 If you use an existing VNet, specify the name of the subnet to host the compute machines. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 3.6.2.7. 
Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

Note

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ).

Procedure

Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http .
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations.
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly .

Note

The installation program does not support the proxy readinessEndpoints field.

Note

If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

$ ./openshift-install wait-for install-complete --log-level debug

Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file.
If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec .

Note

Only the Proxy object named cluster is supported, and no additional proxies can be created.

Additional resources

For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs .

3.6.3. Alternatives to storing administrator-level secrets in the kube-system project

By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives:

To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials .
To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials .

3.6.3.1. Manually creating long-term credentials

The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.

Procedure

If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown:

Sample configuration file snippet

apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...

If you have not previously created installation manifest files, do so by running the following command:

$ openshift-install create manifests --dir <installation_directory>

where <installation_directory> is the directory in which the installation program creates files.

Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:

$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')

Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:

$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \ 1
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
  --to=<path_to_directory_for_credentials_requests> 3

1 The --included parameter includes only the manifests that your specific cluster configuration requires.
2 Specify the location of the install-config.yaml file.
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.

This command creates a YAML file for each CredentialsRequest object.

Sample CredentialsRequest object

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...

Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.

Sample CredentialsRequest object with secrets

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...

Sample Secret object

apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  azure_subscription_id: <base64_encoded_azure_subscription_id>
  azure_client_id: <base64_encoded_azure_client_id>
  azure_client_secret: <base64_encoded_azure_client_secret>
  azure_tenant_id: <base64_encoded_azure_tenant_id>
  azure_resource_prefix: <base64_encoded_azure_resource_prefix>
  azure_resourcegroup: <base64_encoded_azure_resourcegroup>
  azure_region: <base64_encoded_azure_region>

Important

Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.

3.6.3.2. Configuring an Azure cluster to use short-term credentials

To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster.

3.6.3.2.1. Configuring the Cloud Credential Operator utility

To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary.

Note

The ccoctl utility is a Linux binary that must run in a Linux environment.

Prerequisites

You have access to an OpenShift Container Platform account with cluster administrator access.
You have installed the OpenShift CLI ( oc ).
You have created a global Azure account for the ccoctl utility to use with the following permissions:

Microsoft.Resources/subscriptions/resourceGroups/read
Microsoft.Resources/subscriptions/resourceGroups/write
Microsoft.Resources/subscriptions/resourceGroups/delete
Microsoft.Authorization/roleAssignments/read
Microsoft.Authorization/roleAssignments/delete
Microsoft.Authorization/roleAssignments/write
Microsoft.Authorization/roleDefinitions/read
Microsoft.Authorization/roleDefinitions/write
Microsoft.Authorization/roleDefinitions/delete
Microsoft.Storage/storageAccounts/listkeys/action
Microsoft.Storage/storageAccounts/delete
Microsoft.Storage/storageAccounts/read
Microsoft.Storage/storageAccounts/write
Microsoft.Storage/storageAccounts/blobServices/containers/delete
Microsoft.Storage/storageAccounts/blobServices/containers/read
Microsoft.Storage/storageAccounts/blobServices/containers/write
Microsoft.ManagedIdentity/userAssignedIdentities/delete
Microsoft.ManagedIdentity/userAssignedIdentities/read
Microsoft.ManagedIdentity/userAssignedIdentities/write
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete
Microsoft.Storage/register/action
Microsoft.ManagedIdentity/register/action

Procedure

Set a variable for the OpenShift Container Platform release image by running the following command:

$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')

Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:

$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)

Note

Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:

$ oc image extract $CCO_IMAGE \
  --file="/usr/bin/ccoctl.<rhel_version>" \ 1
  -a ~/.pull-secret

1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid:
rhel8 : Specify this value for hosts that use RHEL 8.
rhel9 : Specify this value for hosts that use RHEL 9.

Change the permissions to make ccoctl executable by running the following command:

$ chmod 775 ccoctl.<rhel_version>

Verification

To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:

$ ./ccoctl.rhel9

Example output

OpenShift credentials provisioning tool

Usage:
  ccoctl [command]

Available Commands:
  aws          Manage credentials objects for AWS cloud
  azure        Manage credentials objects for Azure
  gcp          Manage credentials objects for Google cloud
  help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
  nutanix      Manage credentials objects for Nutanix

Flags:
  -h, --help   help for ccoctl

Use "ccoctl [command] --help" for more information about a command.

3.6.3.2.2. Creating Azure resources with the Cloud Credential Operator utility

You can use the ccoctl azure create-all command to automate the creation of Azure resources.

Note

By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.

Prerequisites

You must have:

Extracted and prepared the ccoctl binary.
Access to your Microsoft Azure account by using the Azure CLI.

Procedure

Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:

$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')

Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:

$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \ 1
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
  --to=<path_to_directory_for_credentials_requests> 3

1 The --included parameter includes only the manifests that your specific cluster configuration requires.
2 Specify the location of the install-config.yaml file.
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.

Note

This command might take a few moments to run.

To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command:

$ az login

Use the ccoctl tool to process all CredentialsRequest objects by running the following command:

$ ccoctl azure create-all \
  --name=<azure_infra_name> \ 1
  --output-dir=<ccoctl_output_dir> \ 2
  --region=<azure_region> \ 3
  --subscription-id=<azure_subscription_id> \ 4
  --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5
  --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6
  --tenant-id=<azure_tenant_id> 7

1 Specify the user-defined name for all created Azure resources used for tracking.
2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
3 Specify the Azure region in which cloud resources will be created.
4 Specify the Azure subscription ID to use.
5 Specify the directory containing the files for the component CredentialsRequest objects.
6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone.
7 Specify the Azure tenant ID to use.

Note

If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command.

Verification

To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:

$ ls <path_to_ccoctl_output_dir>/manifests

Example output

azure-ad-pod-identity-webhook-config.yaml
cluster-authentication-02-config.yaml
openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml
openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-azure-cloud-credentials-credentials.yaml

You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts.

3.6.3.2.3. Incorporating the Cloud Credential Operator utility manifests

To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program.

Prerequisites

You have configured an account with the cloud platform that hosts your cluster.
You have configured the Cloud Credential Operator utility ( ccoctl ).
You have created the cloud provider resources that are required for your cluster with the ccoctl utility.

Procedure

If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown:

Sample configuration file snippet

apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...

If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown:

Sample configuration file snippet

apiVersion: v1
baseDomain: example.com
# ...
platform:
  azure:
    resourceGroupName: <azure_infra_name> 1
# ...

1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command.

If you have not previously created installation manifest files, do so by running the following command:

$ openshift-install create manifests --dir <installation_directory>

where <installation_directory> is the directory in which the installation program creates files.
Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command:

$ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/

Copy the tls directory that contains the private key to the installation directory:

$ cp -a /<path_to_ccoctl_output_dir>/tls .

3.6.4. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

You have configured an account with the cloud platform that hosts your cluster.
You have the OpenShift Container Platform installation program and the pull secret for your cluster.
You have an Azure subscription ID and tenant ID.

Procedure

Change to the directory that contains the installation program and initialize the cluster deployment:

$ ./openshift-install create cluster --dir <installation_directory> \ 1
  --log-level=info 2

1 For <installation_directory> , specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn , debug , or error instead of info .

Verification

When the cluster deployment completes successfully:

The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
Credential information also outputs to <installation_directory>/.openshift_install.log .

Important

Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s

Important

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

3.6.5. Next steps

Customize your cluster .
If necessary, you can opt out of remote health reporting .

3.7. Installing a private cluster on Azure

In OpenShift Container Platform version 4.18, you can install a private cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize.
To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

3.7.1. Private clusters

You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.

Important

If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.

To deploy a private cluster, you must:

Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.
Deploy from a machine that has access to:
The API services for the cloud to which you provision.
The hosts on the network that you provision.
The internet to obtain installation media.

You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.

3.7.1.1. Private clusters in Azure

To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to the internet to access the Azure APIs. The following items are not required or created when you install a private cluster:

A BaseDomainResourceGroup , since the cluster does not create public records
Public IP addresses
Public DNS records
Public endpoints

3.7.1.1.1. Limitations

Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet.

3.7.1.2. User-defined outbound routing

In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster; a minimal sketch follows the list below. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources:

Outbound rules for access to the internet.
Public IPs for the public load balancer.
Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests.
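As a minimal sketch of the fields involved, assuming the rest of the file is configured as described elsewhere in this chapter, a private cluster with user-defined routing sets outboundType and publish in install-config.yaml like this:

apiVersion: v1
baseDomain: example.com
# ...
platform:
  azure:
    # ...
    outboundType: UserDefinedRouting
publish: Internal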
You must ensure the following items are available before setting user-defined routing:

Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror.
The cluster can access Azure APIs.
Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section.

There are several pre-existing networking setups that are supported for internet access using user-defined routing.

Private cluster with network address translation

You can use Azure VNet network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions. When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints.

Private cluster with Azure Firewall

You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation. When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints.

Private cluster with a proxy configuration

You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy. When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure's internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints.

Private cluster with no internet access

You can install a private cluster that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following:

An OpenShift image registry mirror that allows for pulling container images
Access to Azure APIs

With these requirements available, you can use user-defined routing to create private clusters with no public endpoints.

3.7.2. About reusing a VNet for your OpenShift Container Platform cluster

In OpenShift Container Platform 4.18, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.

3.7.2.1. Requirements for using your VNet

When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet:

Subnets
Route tables
VNets
Network Security Groups

Note

The installation program requires that you use the cloud-provided DNS server.
Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 3.7.2.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.
Table 3.21. Required ports
Port    Description                                                                             Control plane   Compute
80      Allows HTTP traffic                                                                                     x
443     Allows HTTPS traffic                                                                                    x
6443    Allows communication to the control plane machines                                      x
22623   Allows internal communication to the machine config server for provisioning machines    x
If you are using Azure Firewall to restrict internet access, then you can configure Azure Firewall to allow the Azure APIs. A network security group rule is not needed. For more information, see "Configuring your firewall" in "Additional resources".
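If you manage the network security group rules yourself, the following Azure CLI sketch shows one way to express the control plane rows of Table 3.21. This is a hedged illustration rather than part of the official procedure: the resource group, NSG name, rule name, priority, and source prefix are all placeholders that must match your existing network.
# Hedged sketch: allow API server (6443) and machine config server (22623)
# traffic to the control plane subnet. All names are placeholders.
USD az network nsg rule create \
  --resource-group vnet_resource_group \
  --nsg-name control_plane_nsg \
  --name allow-openshift-api-and-mcs \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 6443 22623 \
  --source-address-prefixes 10.0.0.0/16
An analogous rule with --destination-port-ranges 80 443 would cover the compute rows of the table.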
Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment.
Table 3.22. Ports used for all-machine to all-machine communications
Protocol   Port            Description
ICMP       N/A             Network reachability tests
TCP        1936            Metrics
           9000 - 9999     Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 .
           10250 - 10259   The default ports that Kubernetes reserves
UDP        4789            VXLAN
           6081            Geneve
           9000 - 9999     Host level services, including the node exporter on ports 9100 - 9101 .
           500             IPsec IKE packets
           4500            IPsec NAT-T packets
           123             Network Time Protocol (NTP) on UDP port 123 . If you configure an external NTP time server, you must open UDP port 123 .
TCP/UDP    30000 - 32767   Kubernetes node port
ESP        N/A             IPsec Encapsulating Security Payload (ESP)
Table 3.23. Ports used for control plane machine to control plane machine communications
Protocol   Port          Description
TCP        2379 - 2380   etcd server and peer ports
3.7.2.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 3.7.2.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. Additional resources About the OVN-Kubernetes network plugin Configuring your firewall 3.7.3. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in:
USD mkdir <installation_directory>
Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for Azure 3.7.3.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements:
Table 3.24. Minimum resource requirements
Machine         Operating System                vCPU [1]   Virtual RAM   Storage   Input/Output Per Second (IOPS) [2]
Bootstrap       RHCOS                           4          16 GB         100 GB    300
Control plane   RHCOS                           4          16 GB         100 GB    300
Compute         RHCOS, RHEL 8.6 and later [3]   2          8 GB          100 GB    300
[1] One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. [2] OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. [3] As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.7.3.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 3.9.
Machine types based on 64-bit x86 architecture standardBasv2Family standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSHighMemoryv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSHighMemoryv3Family standardMIDSMediumMemoryv2Family standardMISHighMemoryv3Family standardMISMediumMemoryv2Family standardMSFamily standardMSHighMemoryv3Family standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family StandardNGADSV620v1Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 3.7.3.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 3.10. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family StandardDpdsv6Family StandardDpldsv6Family StandardDplsv6Family StandardDpsv6Family StandardEpdsv6Family StandardEpsv6Family 3.7.3.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . For more information about the sizes of virtual machines that support the trusted launch features, see Virtual machine sizes . Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file.
Procedure Edit the install-config.yaml file before deploying your cluster: Enable trusted launch only on the control plane by adding the following stanza:
controlPlane:
  platform:
    azure:
      settings:
        securityType: TrustedLaunch
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
Enable trusted launch only on compute nodes by adding the following stanza:
compute:
  platform:
    azure:
      settings:
        securityType: TrustedLaunch
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
Enable trusted launch on all nodes by adding the following stanza:
platform:
  azure:
    settings:
      securityType: TrustedLaunch
      trustedLaunch:
        uefiSettings:
          secureBoot: Enabled
          virtualizedTrustedPlatformModule: Enabled
3.7.3.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Edit the install-config.yaml file before deploying your cluster: Enable confidential VMs only on the control plane by adding the following stanza:
controlPlane:
  platform:
    azure:
      settings:
        securityType: ConfidentialVM
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly
Enable confidential VMs only on compute nodes by adding the following stanza:
compute:
  platform:
    azure:
      settings:
        securityType: ConfidentialVM
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly
Enable confidential VMs on all nodes by adding the following stanza:
platform:
  azure:
    settings:
      securityType: ConfidentialVM
      confidentialVM:
        uefiSettings:
          secureBoot: Enabled
          virtualizedTrustedPlatformModule: Enabled
    osDisk:
      securityProfile:
        securityEncryptionType: VMGuestStateOnly
3.7.3.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      encryptionAtHost: true
      ultraSSDCapability: Enabled
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7 8
  name: worker
  platform:
    azure:
      ultraSSDCapability: Enabled
      type: Standard_D2s_v3
      encryptionAtHost: true
      osDisk:
        diskSizeGB: 512 9
        diskType: Standard_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      zones: 10
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 11
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 12
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    defaultMachinePlatform:
      osImage: 13
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      ultraSSDCapability: Enabled
    baseDomainResourceGroupName: resource_group 14
    region: centralus 15
    resourceGroupName: existing_resource_group 16
    networkResourceGroupName: vnet_resource_group 17
    virtualNetwork: vnet 18
    controlPlaneSubnet: control_plane_subnet 19
    computeSubnet: compute_subnet 20
    outboundType: UserDefinedRouting 21
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 22
fips: false 23
sshKey: ssh-ed25519 AAAA... 24
publish: Internal 25
1 11 15 22 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 8 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 9 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 10 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 12 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 13 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines.
The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 14 Specify the name of the resource group that contains the DNS zone for your base domain. 16 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 17 If you use an existing VNet, specify the name of the resource group that contains it. 18 If you use an existing VNet, specify its name. 19 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 20 If you use an existing VNet, specify the name of the subnet to host the compute machines. 21 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 23 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 24 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 25 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 3.7.3.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
USD ./openshift-install wait-for install-complete --log-level debug
Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 3.7.4. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 3.7.4.1.
Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
USD openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command:
USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')
Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:
USD oc adm release extract \
  --from=USDRELEASE_IMAGE \
  --credentials-requests \
  --included \ 1
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
  --to=<path_to_directory_for_credentials_requests> 3
1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...
Sample Secret object
apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  azure_subscription_id: <base64_encoded_azure_subscription_id>
  azure_client_id: <base64_encoded_azure_client_id>
  azure_client_secret: <base64_encoded_azure_client_secret>
  azure_tenant_id: <base64_encoded_azure_tenant_id>
  azure_resource_prefix: <base64_encoded_azure_resource_prefix>
  azure_resourcegroup: <base64_encoded_azure_resourcegroup>
  azure_region: <base64_encoded_azure_region>
Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.
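Each data value in the Secret must be base64 encoded. As a hedged illustration that is not part of the official procedure, you might prepare a value on a Linux host as follows; the subscription ID shown is a placeholder:
# Hedged sketch: base64-encode one value for the Secret manifest.
# The -n flag keeps a trailing newline out of the encoded value.
USD echo -n '00000000-0000-0000-0000-000000000000' | base64
3.7.4.2.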
Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 3.7.4.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Azure account for the ccoctl utility to use with the following permissions:
Microsoft.Resources/subscriptions/resourceGroups/read
Microsoft.Resources/subscriptions/resourceGroups/write
Microsoft.Resources/subscriptions/resourceGroups/delete
Microsoft.Authorization/roleAssignments/read
Microsoft.Authorization/roleAssignments/delete
Microsoft.Authorization/roleAssignments/write
Microsoft.Authorization/roleDefinitions/read
Microsoft.Authorization/roleDefinitions/write
Microsoft.Authorization/roleDefinitions/delete
Microsoft.Storage/storageAccounts/listkeys/action
Microsoft.Storage/storageAccounts/delete
Microsoft.Storage/storageAccounts/read
Microsoft.Storage/storageAccounts/write
Microsoft.Storage/storageAccounts/blobServices/containers/delete
Microsoft.Storage/storageAccounts/blobServices/containers/read
Microsoft.Storage/storageAccounts/blobServices/containers/write
Microsoft.ManagedIdentity/userAssignedIdentities/delete
Microsoft.ManagedIdentity/userAssignedIdentities/read
Microsoft.ManagedIdentity/userAssignedIdentities/write
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete
Microsoft.Storage/register/action
Microsoft.ManagedIdentity/register/action
Procedure Set a variable for the OpenShift Container Platform release image by running the following command:
USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')
Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:
USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)
Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:
USD oc image extract USDCCO_IMAGE \
  --file="/usr/bin/ccoctl.<rhel_version>" \ 1
  -a ~/.pull-secret
1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command:
USD chmod 775 ccoctl.<rhel_version>
Verification To verify that ccoctl is ready to use, display the help file.
Use a relative file name when you run the command, for example:
USD ./ccoctl.rhel9
Example output
OpenShift credentials provisioning tool

Usage:
  ccoctl [command]

Available Commands:
  aws          Manage credentials objects for AWS cloud
  azure        Manage credentials objects for Azure
  gcp          Manage credentials objects for Google cloud
  help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
  nutanix      Manage credentials objects for Nutanix

Flags:
  -h, --help   help for ccoctl

Use "ccoctl [command] --help" for more information about a command.
3.7.4.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command:
USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')
Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:
USD oc adm release extract \
  --from=USDRELEASE_IMAGE \
  --credentials-requests \
  --included \ 1
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2
  --to=<path_to_directory_for_credentials_requests> 3
1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command:
USD az login
Use the ccoctl tool to process all CredentialsRequest objects by running the following command:
USD ccoctl azure create-all \
  --name=<azure_infra_name> \ 1
  --output-dir=<ccoctl_output_dir> \ 2
  --region=<azure_region> \ 3
  --subscription-id=<azure_subscription_id> \ 4
  --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5
  --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6
  --tenant-id=<azure_tenant_id> 7
1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command.
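Taken end to end, the extraction and resource-creation steps above might look like the following hedged shell session. The cluster name, region, and directory names are placeholders chosen for illustration, and the subscription, DNS zone resource group, and tenant values must be replaced with your own:
# Hedged sketch of the short-term credentials flow, assuming the ccoctl
# binary was already extracted as ./ccoctl.rhel9 in the previous section.
USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')
USD oc adm release extract \
  --from=USDRELEASE_IMAGE \
  --credentials-requests \
  --included \
  --install-config=install-config.yaml \
  --to=credreqs
USD az login
USD ./ccoctl.rhel9 azure create-all \
  --name=mycluster \
  --output-dir=ccoctl-output \
  --region=centralus \
  --subscription-id=<azure_subscription_id> \
  --credentials-requests-dir=credreqs \
  --dnszone-resource-group-name=<dns_zone_resource_group> \
  --tenant-id=<azure_tenant_id>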
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:
USD ls <path_to_ccoctl_output_dir>/manifests
Example output
azure-ad-pod-identity-webhook-config.yaml
cluster-authentication-02-config.yaml
openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml
openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-azure-cloud-credentials-credentials.yaml
You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 3.7.4.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
# ...
platform:
  azure:
    resourceGroupName: <azure_infra_name> 1
# ...
1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command:
USD openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command:
USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
Copy the tls directory that contains the private key to the installation directory:
USD cp -a /<path_to_ccoctl_output_dir>/tls .
3.7.5. Optional: Preparing a private Microsoft Azure cluster for a private image registry By installing a private image registry on a private Microsoft Azure cluster, you can create private storage endpoints. Private storage endpoints disable public facing endpoints to the registry's storage account, adding an extra layer of security to your OpenShift Container Platform deployment. Important Do not install a private image registry on Microsoft Azure Red Hat OpenShift (ARO), because the endpoint can put your Microsoft Azure Red Hat OpenShift cluster in an unrecoverable state.
Use the following guide to prepare your private Microsoft Azure cluster for installation with a private image registry. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI (oc). You have prepared an install-config.yaml that includes the following information: The publish field is set to Internal You have set the permissions for creating a private storage endpoint. For more information, see "Azure permissions for installer-provisioned infrastructure". Procedure If you have not previously created installation manifest files, do so by running the following command:
USD ./openshift-install create manifests --dir <installation_directory>
This command displays the following messages: Example output
INFO Consuming Install Config from target directory
INFO Manifests created in: <installation_directory>/manifests and <installation_directory>/openshift
Create an image registry configuration object and pass in the networkResourceGroupName , subnetName , and vnetName provided by Microsoft Azure. For example:
USD touch imageregistry-config.yaml
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: "Managed"
  replicas: 2
  rolloutStrategy: RollingUpdate
  storage:
    azure:
      networkAccess:
        internal:
          networkResourceGroupName: <vnet_resource_group> 1
          subnetName: <subnet_name> 2
          vnetName: <vnet_name> 3
        type: Internal
1 Optional. If you have an existing VNet and subnet setup, replace <vnet_resource_group> with the resource group name that contains the existing virtual network (VNet). 2 Optional. If you have an existing VNet and subnet setup, replace <subnet_name> with the name of the existing compute subnet within the specified resource group. 3 Optional. If you have an existing VNet and subnet setup, replace <vnet_name> with the name of the existing virtual network (VNet) in the specified resource group. Note The imageregistry-config.yaml file is consumed during the installation process. If you want to keep it, you must back it up before installation. Move the imageregistry-config.yaml file to the <installation_directory/manifests> folder by running the following command:
USD mv imageregistry-config.yaml <installation_directory/manifests/>
Next steps After you have moved the imageregistry-config.yaml file to the <installation_directory/manifests> folder and set the required permissions, proceed to "Deploying the cluster". Additional resources For the list of permissions needed to create a private storage endpoint, see Required Azure permissions for installer-provisioned infrastructure . 3.7.6. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID.
You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Change to the directory that contains the installation program and initialize the cluster deployment:
USD ./openshift-install create cluster --dir <installation_directory> \ 1
  --log-level=info 2
1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates.
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.7.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials:
USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration:
USD oc whoami
Example output
system:admin
Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.7.8. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 3.8. Installing a cluster on Azure into a government region In OpenShift Container Platform version 4.18, you can install a cluster on Microsoft Azure into a government region. To configure the government region, you modify parameters in the install-config.yaml file before you install the cluster. 3.8.1. Azure government regions OpenShift Container Platform supports deploying a cluster to Microsoft Azure Government (MAG) regions. MAG is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure. MAG is composed of government-only data center regions, all granted an Impact Level 5 Provisional Authorization . Installing to a MAG region requires manually configuring the Azure Government dedicated cloud instance and region in the install-config.yaml file. You must also update your service principal to reference the appropriate government environment. Note The Azure government region cannot be selected using the guided terminal prompts from the installation program. You must define the region manually in the install-config.yaml file. Remember to also set the dedicated cloud instance, like AzureUSGovernmentCloud , based on the region specified; a hedged sketch of these settings follows the private clusters overview below. 3.8.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.
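Returning to the government-region configuration noted above, the following install-config.yaml fragment is a minimal sketch of the MAG settings. The region name usgovvirginia is one example MAG region, and whether you also deploy privately depends on your own requirements:
platform:
  azure:
    region: usgovvirginia               # example MAG region; use the region for your environment
    cloudName: AzureUSGovernmentCloud   # dedicated cloud instance for Azure Government
publish: Internal                       # optional: combine with a private cluster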
Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 3.8.2.1. Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to the internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 3.8.2.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 3.8.2.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. A Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. 3.8.3. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.18, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules.
By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 3.8.3.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 3.8.3.1.1.
3.8.3.1.1. Network security group requirements
The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports; an illustrative Azure CLI sketch follows the tables in this section.
Important
The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.
Table 3.25. Required ports

| Port  | Description                                                                           | Control plane | Compute |
|-------|---------------------------------------------------------------------------------------|---------------|---------|
| 80    | Allows HTTP traffic                                                                   |               | x       |
| 443   | Allows HTTPS traffic                                                                  |               | x       |
| 6443  | Allows communication to the control plane machines                                    | x             |         |
| 22623 | Allows internal communication to the machine config server for provisioning machines  | x             |         |

If you are using Azure Firewall to restrict internet access, you can configure Azure Firewall to allow the Azure APIs. A network security group rule is not needed. For more information, see "Configuring your firewall" in "Additional resources".
Important
Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing request (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates.
To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies.
Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controllers to modify without impacting the rest of the environment.
Table 3.26. Ports used for all-machine to all-machine communications

| Protocol | Port        | Description |
|----------|-------------|-------------|
| ICMP     | N/A         | Network reachability tests |
| TCP      | 1936        | Metrics |
|          | 9000-9999   | Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099. |
|          | 10250-10259 | The default ports that Kubernetes reserves |
| UDP      | 4789        | VXLAN |
|          | 6081        | Geneve |
|          | 9000-9999   | Host level services, including the node exporter on ports 9100-9101. |
|          | 500         | IPsec IKE packets |
|          | 4500        | IPsec NAT-T packets |
|          | 123         | Network Time Protocol (NTP) on UDP port 123. If you configure an external NTP time server, you must open UDP port 123. |
| TCP/UDP  | 30000-32767 | Kubernetes node port |
| ESP      | N/A         | IPsec Encapsulating Security Payload (ESP) |

Table 3.27. Ports used for control plane machine to control plane machine communications

| Protocol | Port      | Description |
|----------|-----------|-------------|
| TCP      | 2379-2380 | etcd server and peer ports |
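As an illustration only, the following Azure CLI sketch creates a network security group and one of the required inbound rules from Table 3.25. The group name, rule name, and priority are hypothetical placeholders, and you would repeat the rule command for each required port:

# Create a network security group in the networking resource group.
$ az network nsg create \
    --resource-group example-vnet-rg \
    --name example-cluster-nsg

# Allow API traffic (port 6443) into the control plane subnet.
$ az network nsg rule create \
    --resource-group example-vnet-rg \
    --nsg-name example-cluster-nsg \
    --name allow-api-6443 \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 6443

# Associate the group with the control plane subnet.
$ az network vnet subnet update \
    --resource-group example-vnet-rg \
    --vnet-name example-vnet \
    --name example-control-plane-subnet \
    --network-security-group example-cluster-nsg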
3.8.3.2. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules.
The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.
3.8.3.3. Isolation between clusters
Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet.
Additional resources
About the OVN-Kubernetes network plugin
Configuring your firewall
3.8.4. Manually creating the installation configuration file
Installing the cluster requires that you manually create the installation configuration file.
Prerequisites
You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>
Important
You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>.
Note
You must name this configuration file install-config.yaml.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important
The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
Additional resources
Installation configuration parameters for Azure
3.8.4.1. Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
Table 3.28. Minimum resource requirements

| Machine       | Operating System              | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS) [2] |
|---------------|-------------------------------|----------|-------------|---------|------------------------------------|
| Bootstrap     | RHCOS                         | 4        | 16 GB       | 100 GB  | 300                                |
| Control plane | RHCOS                         | 4        | 16 GB       | 100 GB  | 300                                |
| Compute       | RHCOS, RHEL 8.6 and later [3] | 2        | 8 GB        | 100 GB  | 300                                |

1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. For example, an instance with two threads per core, four cores, and one socket provides (2 x 4) x 1 = 8 vCPUs.
2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which requires a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
3. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks.
Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
Note
For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:
x86-64 architecture requires x86-64-v2 ISA
ARM64 architecture requires ARMv8.0-A ISA
IBM Power architecture requires Power 9 ISA
s390x architecture requires z14 ISA
For more information, see Architectures (RHEL documentation).
Important
You are required to use Azure virtual machines that have the premiumIO parameter set to true.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.
Additional resources
Optimizing storage
3.8.4.2. Tested instance types for Azure
The following Microsoft Azure instance types have been tested with OpenShift Container Platform.
Example 3.11. Machine types based on 64-bit x86 architecture
standardBasv2Family
standardBSFamily
standardBsv2Family
standardDADSv5Family
standardDASv4Family
standardDASv5Family
standardDCACCV5Family
standardDCADCCV5Family
standardDCADSv5Family
standardDCASv5Family
standardDCSv3Family
standardDCSv2Family
standardDDCSv3Family
standardDDSv4Family
standardDDSv5Family
standardDLDSv5Family
standardDLSv5Family
standardDSFamily
standardDSv2Family
standardDSv2PromoFamily
standardDSv3Family
standardDSv4Family
standardDSv5Family
standardEADSv5Family
standardEASv4Family
standardEASv5Family
standardEBDSv5Family
standardEBSv5Family
standardECACCV5Family
standardECADCCV5Family
standardECADSv5Family
standardECASv5Family
standardEDSv4Family
standardEDSv5Family
standardEIADSv5Family
standardEIASv4Family
standardEIASv5Family
standardEIBDSv5Family
standardEIBSv5Family
standardEIDSv5Family
standardEISv3Family
standardEISv5Family
standardESv3Family
standardESv4Family
standardESv5Family
standardFXMDVSFamily
standardFSFamily
standardFSv2Family
standardGSFamily
standardHBrsv2Family
standardHBSFamily
standardHBv4Family
standardHCSFamily
standardHXFamily
standardLASv3Family
standardLSFamily
standardLSv2Family
standardLSv3Family
standardMDSHighMemoryv3Family
standardMDSMediumMemoryv2Family
standardMDSMediumMemoryv3Family
standardMIDSHighMemoryv3Family
standardMIDSMediumMemoryv2Family
standardMISHighMemoryv3Family
standardMISMediumMemoryv2Family
standardMSFamily
standardMSHighMemoryv3Family
standardMSMediumMemoryv2Family
standardMSMediumMemoryv3Family
StandardNCADSA100v4Family
Standard NCASv3_T4 Family
standardNCSv3Family
standardNDSv2Family
StandardNGADSV620v1Family
standardNPSFamily
StandardNVADSA10v5Family
standardNVSv3Family
standardXEISv4Family
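To check which of these instance types are actually offered in your target region, you can query Azure directly. This is an optional, illustrative command rather than a step in this procedure; the region and size filter are placeholders:

# List SKUs available in a region, filtered by a partial size name.
$ az vm list-skus --location centralus --size Standard_D8s --all --output table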
3.8.4.3. Enabling trusted launch for Azure VMs
You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules.
For more information about the sizes of virtual machines that support the trusted launch features, see Virtual machine sizes.
Important
Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
You have created an install-config.yaml file.
Procedure
Edit the install-config.yaml file before deploying your cluster:
Enable trusted launch only on control plane nodes by adding the following stanza:
controlPlane:
  platform:
    azure:
      settings:
        securityType: TrustedLaunch
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
Enable trusted launch only on compute nodes by adding the following stanza:
compute:
  platform:
    azure:
      settings:
        securityType: TrustedLaunch
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
Enable trusted launch on all nodes by adding the following stanza:
platform:
  azure:
    settings:
      securityType: TrustedLaunch
      trustedLaunch:
        uefiSettings:
          secureBoot: Enabled
          virtualizedTrustedPlatformModule: Enabled
3.8.4.4. Enabling confidential VMs
You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes.
Important
Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can use confidential VMs with the following VM sizes:
DCasv5-series
DCadsv5-series
ECasv5-series
ECadsv5-series
Important
Confidential VMs are currently not supported on 64-bit ARM architectures.
Prerequisites
You have created an install-config.yaml file.
Procedure
Edit the install-config.yaml file before deploying your cluster:
Enable confidential VMs only on control plane nodes by adding the following stanza:
controlPlane:
  platform:
    azure:
      settings:
        securityType: ConfidentialVM
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly
Enable confidential VMs only on compute nodes by adding the following stanza:
compute:
  platform:
    azure:
      settings:
        securityType: ConfidentialVM
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly
Enable confidential VMs on all nodes by adding the following stanza:
platform:
  azure:
    settings:
      securityType: ConfidentialVM
      confidentialVM:
        uefiSettings:
          secureBoot: Enabled
          virtualizedTrustedPlatformModule: Enabled
    osDisk:
      securityProfile:
        securityEncryptionType: VMGuestStateOnly
3.8.4.5. Sample customized install-config.yaml file for Azure
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
Important
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2
  hyperthreading: Enabled 3 4
  name: master
  platform:
    azure:
      encryptionAtHost: true
      ultraSSDCapability: Enabled
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      type: Standard_D8s_v3
  replicas: 3
compute: 6
- hyperthreading: Enabled 7 8
  name: worker
  platform:
    azure:
      ultraSSDCapability: Enabled
      type: Standard_D2s_v3
      encryptionAtHost: true
      osDisk:
        diskSizeGB: 512 9
        diskType: Standard_LRS
        diskEncryptionSet:
          resourceGroup: disk_encryption_set_resource_group
          name: disk_encryption_set_name
          subscriptionId: secondary_subscription_id
      osImage:
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      zones: 10
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 11
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 12
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    defaultMachinePlatform:
      osImage: 13
        publisher: example_publisher_name
        offer: example_image_offer
        sku: example_offer_sku
        version: example_image_version
      ultraSSDCapability: Enabled
    baseDomainResourceGroupName: resource_group 14
    region: usgovvirginia
    resourceGroupName: existing_resource_group 15
    networkResourceGroupName: vnet_resource_group 16
    virtualNetwork: vnet 17
    controlPlaneSubnet: control_plane_subnet 18
    computeSubnet: compute_subnet 19
    outboundType: UserDefinedRouting 20
    cloudName: AzureUSGovernmentCloud 21
pullSecret: '{"auths": ...}' 22
fips: false 23
sshKey: ssh-ed25519 AAAA... 24
publish: Internal 25
1 11 22 Required.
2 6 If you do not provide these parameters and values, the installation program provides the default value.
3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
4 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.
5 9 You can specify the size of the disk to use in GB. The minimum recommendation for control plane nodes is 1024 GB.
10 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
12 The cluster network plugin to install. The default value OVNKubernetes is the only supported value.
13 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher, offer, sku, and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters.
14 Specify the name of the resource group that contains the DNS zone for your base domain.
15 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
16 If you use an existing VNet, specify the name of the resource group that contains it.
17 If you use an existing VNet, specify its name.
18 If you use an existing VNet, specify the name of the subnet to host the control plane machines.
19 If you use an existing VNet, specify the name of the subnet to host the compute machines.
20 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet.
21 Specify the name of the Azure cloud environment to deploy your cluster to. Set AzureUSGovernmentCloud to deploy to a Microsoft Azure Government (MAG) region. The default value is AzurePublicCloud.
23 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important
To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode.
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.
24 You can optionally provide the sshKey value that you use to access the machines in your cluster; a sketch of loading such a key into your SSH agent follows these notes.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
25 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External.
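As context for callout 24, the following is a minimal sketch of loading an SSH private key into a running agent so that it is available for debugging access to cluster nodes; the key path is a placeholder:

# Start the ssh-agent process in the current shell session.
$ eval "$(ssh-agent -s)"

# Add your SSH private key to the agent.
$ ssh-add <path>/<file_name>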
3.8.4.6. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
You have an existing install-config.yaml file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when an http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
Note
The installation program does not support the proxy readinessEndpoints field.
Note
If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Note
Only the Proxy object named cluster is supported, and no additional proxies can be created.
Additional resources
For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs.
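As a supplementary check, not a step in the procedure above, you can inspect the resulting cluster-wide Proxy object once the cluster is running and your kubeconfig is exported:

# Inspect the Proxy object named cluster that the installation program created.
$ oc get proxy/cluster -o yaml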
3.8.5. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
Important
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
You have configured an account with the cloud platform that hosts your cluster.
You have the OpenShift Container Platform installation program and the pull secret for your cluster.
You have an Azure subscription ID and tenant ID.
If you are installing the cluster using a service principal, you have its application ID and password.
If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from.
If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites:
You have its client ID.
You have assigned it to the virtual machine that you will run the installation program from.
Procedure
Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file.
Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation.
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn, debug, or error instead of info.
If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values.
Enter the following Azure parameter values for your subscription:
azure subscription id: Enter the subscription ID to use for the cluster.
azure tenant id: Enter the tenant ID.
Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id:
If you are using a service principal, enter its application ID.
If you are using a system-assigned managed identity, leave this value blank.
If you are using a user-assigned managed identity, specify its client ID.
Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret:
If you are using a service principal, enter its password.
If you are using a system-assigned managed identity, leave this value blank.
If you are using a user-assigned managed identity, leave this value blank.
If the file was not previously detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform.
Verification
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
Credential information also outputs to <installation_directory>/.openshift_install.log.
Important
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
Important
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates; a sketch follows this note. See the documentation for Recovering from expired control plane certificates for more information.
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
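If you hit the node-bootstrapper CSR case described in the note above, the approval flow is typically along these lines. This is a hedged sketch rather than part of this installation procedure, and the CSR name is a placeholder:

# List certificate signing requests and look for entries in the Pending state.
$ oc get csr

# Approve an individual pending CSR by name.
$ oc adm certificate approve <csr_name>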
3.8.6. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
You deployed an OpenShift Container Platform cluster.
You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
Additional resources
See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
3.8.7. Next steps
Customize your cluster.
If necessary, you can opt out of remote health reporting.
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export RESOURCEGROUP=\"<resource_group>\" \\ 1 LOCATION=\"<location>\" 2", "export KEYVAULT_NAME=\"<keyvault_name>\" \\ 1 KEYVAULT_KEY_NAME=\"<keyvault_key_name>\" \\ 2 DISK_ENCRYPTION_SET_NAME=\"<disk_encryption_set_name>\" 3", "export CLUSTER_SP_ID=\"<service_principal_id>\" 1", "az feature register --namespace \"Microsoft.Compute\" --name \"EncryptionAtHost\"", "az feature show --namespace Microsoft.Compute --name EncryptionAtHost", "az provider register -n Microsoft.Compute", "az group create --name USDRESOURCEGROUP --location USDLOCATION", "az keyvault create -n USDKEYVAULT_NAME -g USDRESOURCEGROUP -l USDLOCATION --enable-purge-protection true", "az keyvault key create --vault-name USDKEYVAULT_NAME -n USDKEYVAULT_KEY_NAME --protection software", "KEYVAULT_ID=USD(az keyvault show --name USDKEYVAULT_NAME --query \"[id]\" -o tsv)", "KEYVAULT_KEY_URL=USD(az keyvault key show --vault-name USDKEYVAULT_NAME --name USDKEYVAULT_KEY_NAME --query \"[key.kid]\" -o tsv)", "az disk-encryption-set create -n USDDISK_ENCRYPTION_SET_NAME -l USDLOCATION -g USDRESOURCEGROUP --source-vault USDKEYVAULT_ID --key-url USDKEYVAULT_KEY_URL", "DES_IDENTITY=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[identity.principalId]\" -o tsv)", "az keyvault set-policy -n USDKEYVAULT_NAME -g USDRESOURCEGROUP --object-id USDDES_IDENTITY --key-permissions wrapkey unwrapkey get", "DES_RESOURCE_ID=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[id]\" -o tsv)", "az role assignment create --assignee USDCLUSTER_SP_ID --role \"<reader_role>\" \\ 1 --scope USDDES_RESOURCE_ID -o jsonc", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: azure: type: Standard_D4s_v5 osImage: publisher: redhat offer: rh-ocp-worker sku: rh-ocp-worker version: 413.92.2023101700 replicas: 3", "./openshift-install create install-config --dir <installation_directory> 1", "controlPlane: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "compute: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "controlPlane: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "compute: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS 
diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 8 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 9 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 10 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 11 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 13 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 14 region: centralus 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com # platform: azure: userTags: 1 <key>: <value> 2 #", "apiVersion: v1 baseDomain: example.com # platform: azure: userTags: createdBy: user environment: dev #", "oc get infrastructures.config.openshift.io cluster -o=jsonpath-as-json='{.status.platformStatus.azure.resourceTags}'", "[ [ { \"key\": \"createdBy\", \"value\": \"user\" }, { \"key\": \"environment\", \"value\": \"dev\" } ] ]", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: 
<base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "az login", "ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7", "ls <path_to_ccoctl_output_dir>/manifests", "azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "./openshift-install create install-config --dir <installation_directory> 1", "controlPlane: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "compute: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "controlPlane: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "compute: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 8 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 9 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 10 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 11 networking: 12 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 14 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 15 region: centralus 16 resourceGroupName: existing_resource_group 17 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 18 fips: false 19 sshKey: ssh-ed25519 AAAA... 
20", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool 
Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "az login", "ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7", "ls <path_to_ccoctl_output_dir>/manifests", "azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "networkResourceGroupName: <vnet_resource_group> 1 virtualNetwork: <vnet> 2 controlPlaneSubnet: <control_plane_subnet> 3 computeSubnet: <compute_subnet> 4", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "controlPlane: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "compute: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "controlPlane: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "compute: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 8 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 9 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 10 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 11 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 
platform: azure: defaultMachinePlatform: osImage: 13 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 14 region: centralus 15 resourceGroupName: existing_resource_group 16 networkResourceGroupName: vnet_resource_group 17 virtualNetwork: vnet 18 controlPlaneSubnet: control_plane_subnet 19 computeSubnet: compute_subnet 20 outboundType: UserDefinedRouting 21 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 22 fips: false 23 sshKey: ssh-ed25519 AAAA... 24 additionalTrustBundle: | 25 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 26 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev publish: Internal 27", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for 
Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "az login", "ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7", "ls <path_to_ccoctl_output_dir>/manifests", "azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "./openshift-install create install-config --dir <installation_directory> 1", "controlPlane: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "compute: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "controlPlane: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "compute: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 8 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 9 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 10 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 11 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 13 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 14 region: centralus 15 resourceGroupName: existing_resource_group 16 networkResourceGroupName: vnet_resource_group 17 virtualNetwork: vnet 18 controlPlaneSubnet: control_plane_subnet 19 computeSubnet: compute_subnet 20 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "az login", "ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7", "ls <path_to_ccoctl_output_dir>/manifests", 
"azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.", "mkdir <installation_directory>", "controlPlane: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "compute: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "controlPlane: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "compute: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 8 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 9 diskType: Standard_LRS diskEncryptionSet: resourceGroup: 
disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 10 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 11 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 13 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 14 region: centralus 15 resourceGroupName: existing_resource_group 16 networkResourceGroupName: vnet_resource_group 17 virtualNetwork: vnet 18 controlPlaneSubnet: control_plane_subnet 19 computeSubnet: compute_subnet 20 outboundType: UserDefinedRouting 21 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 22 fips: false 23 sshKey: ssh-ed25519 AAAA... 24 publish: Internal 25", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage 
credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "az login", "ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7", "ls <path_to_ccoctl_output_dir>/manifests", "azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create manifests --dir <installation_directory>", "INFO Consuming Install Config from target directory INFO Manifests created in: <installation_directory>/manifests and <installation_directory>/openshift", "touch imageregistry-config.yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: managementState: \"Managed\" replicas: 2 rolloutStrategy: RollingUpdate storage: azure: networkAccess: internal: networkResourceGroupName: <vnet_resource_group> 1 subnetName: <subnet_name> 2 vnetName: <vnet_name> 3 type: Internal", "mv imageregistry-config.yaml <installation_directory/manifests/>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.", "mkdir <installation_directory>", "controlPlane: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "compute: platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "platform: azure: settings: securityType: TrustedLaunch trustedLaunch: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled", "controlPlane: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "compute: platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "platform: azure: settings: securityType: ConfidentialVM confidentialVM: uefiSettings: secureBoot: Enabled virtualizedTrustedPlatformModule: Enabled osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 8 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 9 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 10 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 11 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 13 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 14 region: usgovvirginia resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzureUSGovernmentCloud 21 pullSecret: 
'{\"auths\": ...}' 22 fips: false 23 sshKey: ssh-ed25519 AAAA... 24 publish: Internal 25", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_azure/installer-provisioned-infrastructure
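The manual-credentials flow described above spans many separate commands; the following is a minimal end-to-end sketch of that sequence in shell, assuming a host with oc, az, and the installer binary in the current directory, an existing install-config.yaml with credentialsMode: Manual, and hypothetical placeholder values (in angle brackets) for the Azure infrastructure name, region, subscription, DNS zone resource group, and tenant:

# Sketch only: angle-bracket values are placeholders, not values from this document
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' "$RELEASE_IMAGE" -a ~/.pull-secret)
oc image extract "$CCO_IMAGE" --file="/usr/bin/ccoctl.rhel9" -a ~/.pull-secret
chmod 775 ccoctl.rhel9
# Extract the CredentialsRequest objects for this release and install configuration
oc adm release extract --from="$RELEASE_IMAGE" --credentials-requests --included \
  --install-config=./install-config.yaml --to=./credrequests
# Create the Azure identities and generate the installer manifests
az login
./ccoctl.rhel9 azure create-all --name=<azure_infra_name> --output-dir=./ccoctl-out \
  --region=<azure_region> --subscription-id=<azure_subscription_id> \
  --credentials-requests-dir=./credrequests \
  --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> --tenant-id=<azure_tenant_id>
# Copy the generated manifests and keys into the installation directory, then install
./openshift-install create manifests --dir ./install_dir
cp ./ccoctl-out/manifests/* ./install_dir/manifests/
cp -a ./ccoctl-out/tls ./install_dir/
./openshift-install create cluster --dir ./install_dir --log-level=info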
Chapter 3. Installation configuration parameters for IBM Z and IBM LinuxONE
Chapter 3. Installation configuration parameters for IBM Z and IBM LinuxONE Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. 3.1. Available installation configuration parameters for IBM Z The following tables specify the required, optional, and IBM Z-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 3.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 3.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 3.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Consider the following information before you configure network parameters for your cluster: If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you deployed nodes in an OpenShift Container Platform cluster with a network that supports both IPv4 and non-link-local IPv6 addresses, configure your cluster to use a dual-stack network. For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. This ensures that in a multiple network interface controller (NIC) environment, a cluster can detect what NIC to use based on the available network interface. For more information, see "OVN-Kubernetes IPv6 and dual-stack limitations" in About the OVN-Kubernetes network plugin . To prevent network connectivity issues, do not install a single-stack IPv4 cluster on a host that supports dual-stack networking. 
If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Table 3.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Configures the IPv4 join subnet that is used internally by ovn-kubernetes . This subnet must not overlap with any other subnet that OpenShift Container Platform is using, including the node network. The size of the subnet must be larger than the number of nodes. You cannot change the value after installation. An IP network block in CIDR notation. The default value is 100.64.0.0/16 . 3.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 3.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. 
String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are s390x (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines.
This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. .
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "networking: ovnKubernetesConfig: ipv4: internalJoinSubnet:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_z_and_ibm_linuxone/installation-config-parameters-ibm-z
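As a hedged illustration of how the required, network, and compute/control-plane parameters in the tables above combine, here is a minimal install-config.yaml sketch for IBM Z, written as a shell heredoc; the base domain, cluster name, pull secret, and SSH key are hypothetical placeholders, and the platform stanza is left empty as the required-parameters table permits:

cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com            # required: base domain for cluster routes (placeholder)
metadata:
  name: test-cluster               # required: lowercase letters, hyphens, periods (placeholder)
networking:
  networkType: OVNKubernetes       # the default network plugin
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23                 # /23 per node yields 510 pod IP addresses
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
compute:
- name: worker
  architecture: s390x              # the default (and only valid) value on IBM Z
  hyperthreading: Enabled
  replicas: 3                      # the default compute count
controlPlane:
  name: master
  architecture: s390x
  hyperthreading: Enabled
  replicas: 3
platform: {}
fips: false
pullSecret: '{"auths": ...}'       # placeholder; paste your real pull secret
sshKey: 'ssh-ed25519 AAAA...'      # placeholder public key
EOF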
function::task_ancestry
function::task_ancestry Name function::task_ancestry - The ancestry of the given task Synopsis Arguments task task_struct pointer with_time set to 1 to also print the start time of processes (given as a delta from boot time) Description Return the ancestry of the given task in the form of "grandparent_process=>parent_process=>process".
[ "task_ancestry:string(task:long,with_time:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-task-ancestry
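A hedged usage sketch: the one-liner below assumes a host with SystemTap installed and relies on the standard kprocess.exec probe point and task_current tapset function; it prints the ancestry, with start-time deltas, of every process that calls exec:

stap -e 'probe kprocess.exec { printf("%s\n", task_ancestry(task_current(), 1)) }'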
Chapter 2. Changes between OptaPlanner 8.2.0 and OptaPlanner 8.3.0
Chapter 2. Changes between OptaPlanner 8.2.0 and OptaPlanner 8.3.0 The changes listed in this section were made between OptaPlanner 8.2.0 and OptaPlanner 8.3.0. ConstraintMatch.compareTo() inconsistent with equals() Minor The equals() override in ConstraintMatch has been removed. As a result, two different ConstraintMatch instances are never considered equal. This contrasts with the compareTo() method, which continues to consider two instances equal if all their field values are equal.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_process_automation_manager/optaplanner-8-ref_developing-solvers
function::ftrace
function::ftrace Name function::ftrace - Send a message to the ftrace ring-buffer Synopsis Arguments msg The formatted message string Description If the ftrace ring buffer is configured and available, the message appears in /debugfs/tracing/trace. Otherwise, the message may be quietly dropped. An implicit end-of-line is added.
[ "ftrace(msg:string)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ftrace
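As a hedged example, and assuming SystemTap is installed, the kernel has ftrace enabled, and debugfs is mounted at the common /sys/kernel/debug path (the page above cites /debugfs/tracing/trace; the mount point varies by system), the following pair of commands emits a trace line on every exec and reads it back:

# Terminal 1: emit a line into the ftrace ring buffer for each exec (needs stap privileges)
stap -e 'probe kprocess.exec { ftrace(sprintf("exec: %s (pid %d)", execname(), pid())) }'
# Terminal 2: read the ring buffer
cat /sys/kernel/debug/tracing/trace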
Chapter 7. Pipelines as Code command reference
Chapter 7. Pipelines as Code command reference You can use the tkn pac CLI tool to control Pipelines as Code. You can also configure Pipelines as Code logging with the TektonConfig custom resource and use the oc command to view Pipelines as Code logs. 7.1. Pipelines as Code command reference The tkn pac CLI tool offers the following capabilities: Bootstrap Pipelines as Code installation and configuration. Create a new Pipelines as Code repository. List all Pipelines as Code repositories. Describe a Pipelines as Code repository and the associated runs. Generate a simple pipeline run to get started. Resolve a pipeline run as if it were executed by Pipelines as Code. Tip You can use the commands corresponding to the capabilities for testing and experimentation, so that you don't have to make changes to the Git repository containing the application source code. 7.1.1. Basic syntax USD tkn pac [command or options] [arguments] 7.1.2. Global options USD tkn pac --help 7.1.3. Utility commands 7.1.3.1. bootstrap Table 7.1. Bootstrapping Pipelines as Code installation and configuration Command Description tkn pac bootstrap Installs and configures Pipelines as Code for Git repository hosting service providers, such as GitHub and GitHub Enterprise. tkn pac bootstrap --nightly Installs the nightly build of Pipelines as Code. tkn pac bootstrap --route-url <public_url_to_ingress_spec> Overrides the OpenShift route URL. By default, tkn pac bootstrap detects the OpenShift route, which is automatically associated with the Pipelines as Code controller service. If you do not have an OpenShift Container Platform cluster, it asks you for the public URL that points to the ingress endpoint. tkn pac bootstrap github-app Creates a GitHub application and secrets in the openshift-pipelines namespace. 7.1.3.2. repository Table 7.2. Managing Pipelines as Code repositories Command Description tkn pac create repository Creates a new Pipelines as Code repository and a namespace based on the pipeline run template. tkn pac list Lists all the Pipelines as Code repositories and displays the last status of the associated runs. tkn pac repo describe Describes a Pipelines as Code repository and the associated runs. 7.1.3.3. generate Table 7.3. Generating pipeline runs using Pipelines as Code Command Description tkn pac generate Generates a simple pipeline run. When executed from the directory containing the source code, it automatically detects current Git information. In addition, it uses basic language detection capability and adds extra tasks depending on the language. For example, if it detects a setup.py file at the repository root, the pylint task is automatically added to the generated pipeline run. 7.1.3.4. resolve Table 7.4. Resolving and executing pipeline runs using Pipelines as Code Command Description tkn pac resolve Executes a pipeline run as if it were owned by the Pipelines as Code service. tkn pac resolve -f .tekton/pull-request.yaml | oc apply -f - Displays the status of a live pipeline run that uses the template in .tekton/pull-request.yaml . Combined with a Kubernetes installation running on your local machine, you can observe the pipeline run without generating a new commit. If you run the command from a source code repository, it attempts to detect the current Git information and automatically resolve parameters such as current revision or branch.
tkn pac resolve -f .tekton/pr.yaml -p revision=main -p repo_name=<repository_name> Executes a pipeline run by overriding default parameter values derived from the Git repository. The -f option can also accept a directory path and apply the tkn pac resolve command on all .yaml or .yml files in that directory. You can also use the -f flag multiple times in the same command. You can override the default information gathered from the Git repository by specifying parameter values using the -p option. For example, you can use a Git branch as a revision and a different repository name. 7.2. Configuring Pipelines as Code logging You can configure Pipelines as Code logging by editing the pac-config-logging config map in the TektonConfig custom resource (CR). Prerequisites You have Pipelines as Code installed on your cluster. Procedure In the Administrator perspective of the web console, go to Administration CustomResourceDefinitions . Use the Search by name field to search for the tektonconfigs.operator.tekton.dev custom resource definition (CRD) and click TektonConfig to view the CRD Details page. Click the Instances tab. Click the config instance to view the TektonConfig CR details. Click the YAML tab. Edit the loglevel. fields under the .options.configMaps.pac-config-logging.data parameter based on your requirements. Example TektonConfig CR with the Pipelines as Code log level fields set to warn apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: options: configMaps: pac-config-logging: data: loglevel.pac-watcher: warn 1 loglevel.pipelines-as-code-webhook: warn 2 loglevel.pipelinesascode: warn 3 zap-logger-config: | { "level": "info", "development": false, "sampling": { "initial": 100, "thereafter": 100 }, "outputPaths": ["stdout"], "errorOutputPaths": ["stderr"], "encoding": "json", "encoderConfig": { "timeKey": "ts", "levelKey": "level", "nameKey": "logger", "callerKey": "caller", "messageKey": "msg", "stacktraceKey": "stacktrace", "lineEnding": "", "levelEncoder": "", "timeEncoder": "iso8601", "durationEncoder": "", "callerEncoder": "" } } 1 The log level for the pipelines-as-code-watcher component. 2 The log level for the pipelines-as-code-webhook component. 3 The log level for the pipelines-as-code-controller component. Optional: Create a custom logging config map for the Pipelines as Code components by changing the .env.value for each component under the .options.deployments field. The example below shows the configuration with the custom config map called custom-pac-config-logging . 
Example TektonConfig CR with the Pipelines as Code custom logging config map apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: enable: true options: configMaps: custom-pac-config-logging: data: loglevel.pac-watcher: warn loglevel.pipelines-as-code-webhook: warn loglevel.pipelinesascode: warn zap-logger-config: | { "level": "info", "development": false, "sampling": { "initial": 100, "thereafter": 100 }, "outputPaths": ["stdout"], "errorOutputPaths": ["stderr"], "encoding": "json", "encoderConfig": { "timeKey": "ts", "levelKey": "level", "nameKey": "logger", "callerKey": "caller", "messageKey": "msg", "stacktraceKey": "stacktrace", "lineEnding": "", "levelEncoder": "", "timeEncoder": "iso8601", "durationEncoder": "", "callerEncoder": "" } } deployments: pipelines-as-code-controller: spec: template: spec: containers: - name: pac-controller env: - name: CONFIG_LOGGING_NAME value: custom-pac-config-logging pipelines-as-code-watcher: spec: template: spec: containers: - name: pac-watcher env: - name: CONFIG_LOGGING_NAME value: custom-pac-config-logging pipelines-as-code-webhook: spec: template: spec: containers: - name: pac-webhook env: - name: CONFIG_LOGGING_NAME value: custom-pac-config-logging 7.3. Splitting Pipelines as Code logs by namespace Pipelines as Code logs contain the namespace information to make it possible to filter logs or split the logs by a particular namespace. For example, to view the Pipelines as Code logs related to the mynamespace namespace, enter the following command: USD oc logs pipelines-as-code-controller-<unique-id> -n openshift-pipelines | grep mynamespace 1 1 Replace pipelines-as-code-controller-<unique-id> with the Pipelines as Code controller name. 7.4. Additional resources Installing OpenShift Pipelines Installing tkn
[ "tkn pac [command or options] [arguments]", "tkn pac --help", "apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: options: configMaps: pac-config-logging: data: loglevel.pac-watcher: warn 1 loglevel.pipelines-as-code-webhook: warn 2 loglevel.pipelinesascode: warn 3 zap-logger-config: | { \"level\": \"info\", \"development\": false, \"sampling\": { \"initial\": 100, \"thereafter\": 100 }, \"outputPaths\": [\"stdout\"], \"errorOutputPaths\": [\"stderr\"], \"encoding\": \"json\", \"encoderConfig\": { \"timeKey\": \"ts\", \"levelKey\": \"level\", \"nameKey\": \"logger\", \"callerKey\": \"caller\", \"messageKey\": \"msg\", \"stacktraceKey\": \"stacktrace\", \"lineEnding\": \"\", \"levelEncoder\": \"\", \"timeEncoder\": \"iso8601\", \"durationEncoder\": \"\", \"callerEncoder\": \"\" } }", "apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: enable: true options: configMaps: custom-pac-config-logging: data: loglevel.pac-watcher: warn loglevel.pipelines-as-code-webhook: warn loglevel.pipelinesascode: warn zap-logger-config: | { \"level\": \"info\", \"development\": false, \"sampling\": { \"initial\": 100, \"thereafter\": 100 }, \"outputPaths\": [\"stdout\"], \"errorOutputPaths\": [\"stderr\"], \"encoding\": \"json\", \"encoderConfig\": { \"timeKey\": \"ts\", \"levelKey\": \"level\", \"nameKey\": \"logger\", \"callerKey\": \"caller\", \"messageKey\": \"msg\", \"stacktraceKey\": \"stacktrace\", \"lineEnding\": \"\", \"levelEncoder\": \"\", \"timeEncoder\": \"iso8601\", \"durationEncoder\": \"\", \"callerEncoder\": \"\" } } deployments: pipelines-as-code-controller: spec: template: spec: containers: - name: pac-controller env: - name: CONFIG_LOGGING_NAME value: custom-pac-config-logging pipelines-as-code-watcher: spec: template: spec: containers: - name: pac-watcher env: - name: CONFIG_LOGGING_NAME value: custom-pac-config-logging pipelines-as-code-webhook: spec: template: spec: containers: - name: pac-webhook env: - name: CONFIG_LOGGING_NAME value: custom-pac-config-logging", "oc logs pipelines-as-code-controller-<unique-id> -n openshift-pipelines | grep mynamespace 1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/pipelines_as_code/pac-command-reference_managing-pipeline-runs-pac
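A hedged sketch that strings the commands above together, assuming a checked-out repository containing a .tekton directory, a logged-in oc session, and a controller Deployment named after the pod prefix shown earlier (an assumption); the revision, repository, and namespace values are placeholders:

# Resolve every .yaml/.yml template in .tekton/ with overridden parameters and
# apply the result to the cluster without pushing a new commit
tkn pac resolve -f .tekton/ -p revision=main -p repo_name=<repository_name> | oc apply -f -
# Filter the controller logs for a single namespace
oc logs -n openshift-pipelines deployment/pipelines-as-code-controller | grep <namespace>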