title | content | commands | url
---|---|---|---|
Chapter 93. ExternalConfiguration schema reference | Chapter 93. ExternalConfiguration schema reference The type ExternalConfiguration has been deprecated. Please use KafkaConnectTemplate instead. Used in: KafkaConnectSpec , KafkaMirrorMaker2Spec Full list of ExternalConfiguration schema properties Configures external storage properties that define configuration options for Kafka Connect connectors. You can mount ConfigMaps or Secrets into a Kafka Connect pod as environment variables or volumes. Volumes and environment variables are configured in the externalConfiguration property in KafkaConnect.spec or KafkaMirrorMaker2.spec . When applied, the environment variables and volumes are available for use when developing your connectors. For more information, see Loading configuration values from external sources . 93.1. ExternalConfiguration schema properties Property Property type Description env ExternalConfigurationEnv array The env property has been deprecated. The external configuration environment variables are deprecated and will be removed in the future. Please use the environment variables in a container template instead. Makes data from a Secret or ConfigMap available in the Kafka Connect pods as environment variables. volumes ExternalConfigurationVolumeSource array The volumes property has been deprecated. The external configuration volumes are deprecated and will be removed in the future. Please use the additional volumes and volume mounts in pod and container templates instead to mount additional secrets or config maps. Makes data from a Secret or ConfigMap available in the Kafka Connect pods as volumes. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-ExternalConfiguration-reference |
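As an illustration of the schema documented in this chapter, the sketch below creates a Secret and references it from a KafkaConnect resource through the deprecated externalConfiguration property. The namespace, resource names, key names, and bootstrap address are assumptions for the example, not values taken from the reference; new deployments should prefer the container template mechanism noted above.

```bash
# Credentials to expose to the Kafka Connect pods (illustrative values).
kubectl create secret generic my-connector-credentials -n kafka \
  --from-literal=accessKey=ACCESS_KEY \
  --from-literal=secretKey=SECRET_KEY

# KafkaConnect resource using the deprecated externalConfiguration block
# to surface the Secret as environment variables and a ConfigMap as a volume.
kubectl apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  namespace: kafka
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  externalConfiguration:
    env:
      - name: CONNECTOR_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            name: my-connector-credentials
            key: accessKey
    volumes:
      - name: connector-config
        configMap:
          name: my-connector-configmap
EOF
```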
7.129. mlocate | 7.129. mlocate 7.129.1. RHBA-2015:0676 - mlocate bug fix update Updated mlocate packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The mlocate packages provide a locate/updatedb implementation, and keep a database of all existing files. The database allows files to be looked up by names. Bug Fixes BZ# 1012534 Prior to this update, the cron script which is included in the mlocate packages had permissions which were too loose. Consequently, mlocate did not comply with the Operating System Security Requirements Guide. This update changes the permissions of the cron script to 0700, as required by the guide. BZ# 1023779 The updatedb utility automatically excludes file systems which are marked as "nodev" in the /proc/filesystems file. The ZFS file system is also marked this way despite the fact it actually stores data on a physical device. As a consequence, ZFS volumes were not previously indexed. This update adds an exception for ZFS, which allows updatedb to index files stored on this file system and the locate utility to find such files. BZ# 1182304 Previously, the /var/lib/mlocate/mlocate.db database file was declared in the mlocate package metadata as belonging to the "root" user and group, and having the "644" permissions. However, in reality, the file belonged to the "slocate" group and had the "640" permissions. This discrepancy caused problems reported by OpenSCAP compliance checking tools. With this update, the database file is declared correctly in the metadata, which allows the package in an unaltered state to pass OpenSCAP compliance checks. BZ# 1168301 The updatedb utility did not exclude GPFS cluster file systems, which can hold billions of files. As a consequence, updatedb caused very high I/O load on systems using GPFS. With this update, GPFS volumes are skipped by updatedb. As a result, files stored on this file system are no longer indexed, and running updatedb on systems with GPFS volumes does not cause too high I/O load. Users of mlocate are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-mlocate |
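To observe the behaviors these fixes describe on a running RHEL 6 system, commands along the following lines can be used; the file paths reflect the mlocate package layout and should be treated as assumptions rather than part of the advisory.

```bash
# Cron script permissions (set to 0700 by the fix for BZ#1012534).
ls -l /etc/cron.daily/mlocate.cron

# Ownership and mode of the locate database
# (root:slocate, mode 640, as declared after BZ#1182304).
stat -c '%a %U:%G %n' /var/lib/mlocate/mlocate.db

# File system types that updatedb is configured to skip.
grep '^PRUNEFS' /etc/updatedb.conf

# Rebuild the database and confirm lookups work.
updatedb
locate mlocate.db
```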
Chapter 5. Running the AMQ Broker examples | Chapter 5. Running the AMQ Broker examples AMQ Broker ships with many example programs that demonstrate basic and advanced features of the product. You can run these examples to become familiar with the capabilities of AMQ Broker. To run the AMQ Broker examples, you must first set up your machine by installing and configuring Apache Maven and the AMQ Maven repository. Then, you use Maven to run the AMQ Broker example programs. 5.1. Setting up your machine to run the AMQ Broker examples Before you can run the included AMQ Broker example programs, you must first download and install Maven and the AMQ Maven repository, and configure the Maven settings file. 5.1.1. Downloading and installing Maven Maven is required to run the AMQ Broker examples. Procedure Go to the Apache Maven Download page and download the latest distribution for your operating system. Install Maven for your operating system. For more information, see Installing Apache Maven . Additional resources For more information about Maven, see Introduction to Apache Maven . 5.1.2. Downloading and installing the AMQ Maven repository After Maven is installed on your machine, you download and install the AMQ Maven repository. This repository is available on the Red Hat Customer Portal. In a web browser, navigate to https://access.redhat.com/downloads/ and log in. The Product Downloads page is displayed. In the Integration and Automation section, click the Red Hat AMQ Broker link. The Software Downloads page is displayed. Select the desired AMQ Broker version from the Version drop-down menu. On the Releases tab, click the Download link for the AMQ Broker Maven Repository. The AMQ Maven repository file is downloaded as a zip file. On your machine, unzip the AMQ repository file into a directory of your choosing. A new directory is created on your machine, which contains the Maven repository in a subdirectory named maven-repository/ . 5.1.3. Configuring the Maven settings file After downloading and installing the AMQ Maven repository, you must add the repository to the Maven settings file. Procedure Open the Maven settings.xml file. The settings.xml file is typically located in the USD{user.home}/.m2/ directory. For Linux, this is ~/.m2/ For Windows, this is \Documents and Settings\.m2\ or \Users\.m2\ If you do not find a settings.xml file in USD{user.home}/.m2/ , there is a default version located in the conf/ directory of your Maven installation. Copy the default settings.xml file into the USD{user.home}/.m2/ directory. In the <profiles> element, add a profile for the AMQ Maven repository. <!-- Configure the JBoss AMQ Maven repository --> <profile> <id>jboss-amq-maven-repository</id> <repositories> <repository> <id>jboss-amq-maven-repository</id> <url>file:// <JBoss-AMQ-repository-path> </url> 1 <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss-amq-maven-repository</id> <url>file:// <JBoss-AMQ-repository-path> </url> 2 <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> 1 2 Replace <JBoss-AMQ-repository-path> with the location of the Maven repository that you installed. Typically, this location ends with /maven-repository . 
For example: <url>file:///path/to/repo/amq-broker-7.2.0-maven-repository/maven-repository</url> In the <activeProfiles> element, set the AMQ Maven repository to be active: <activeProfiles> <activeProfile>jboss-amq-maven-repository</activeProfile> ... </activeProfiles> If you copied the default settings.xml from your Maven installation, uncomment the <active-profiles> section if it was commented out by default. Save and close settings.xml . Remove the cached USD{user.home}/.m2/repository/ directory. If your Maven repository contains outdated artifacts, you may encounter one of the following Maven error messages when you build or deploy your project: Missing artifact <artifact-name> [ERROR] Failed to execute goal on project <project-name>; Could not resolve dependencies for <project-name> 5.2. AMQ Broker example programs AMQ Broker ships with more than 90 example programs that demonstrate how to use AMQ Broker features and the supported messaging protocols. The example programs are located in <install_dir> /examples , and include the following: Features Broker-specific features such as: Clustered - examples showing load balancing and distribution capabilities HA - examples showing failover and reconnection capabilities Perf - examples allowing you to run a few performance tests on the server Standard - examples demonstrating various broker features Sub-modules - examples of integrated external modules Protocols Examples for each of the supported messaging protocols: AMQP MQTT OpenWire STOMP Additional resources For a description of each example program, see Examples in the Apache Artemis documentation. 5.3. Running an AMQ Broker example program AMQ Broker ships with many example programs that demonstrate basic and advanced features of the product. You use Maven to run these programs. Prerequisites Your machine must be set up to run the AMQ Broker examples. For more information, see Section 5.1, "Setting up your machine to run the AMQ Broker examples" . Procedure Navigate to the directory of the example you want to run. The example programs are located in <install_dir> /examples . For example: USD cd <install_dir> /examples/features/standard/queue Use the mvn clean verify command to run the example program. Maven starts the broker and runs the example program. The first time you run the example program, Maven downloads any missing dependencies, which may take a while to run. In this case, the queue example program is run, which creates a producer, sends a test message, and then creates a consumer that receives the message: USD mvn clean verify [INFO] Scanning for projects... [INFO] [INFO] -------------< org.apache.activemq.examples.broker:queue >-------------- [INFO] Building ActiveMQ Artemis JMS Queue Example 2.6.1.amq-720004-redhat-1 [INFO] --------------------------------[ jar ]--------------------------------- ... 
server-out:2018-12-05 16:37:57,023 INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.6.1.amq-720004-redhat-1 [0.0.0.0, nodeID=06f529d3-f8d6-11e8-9bea-0800271b03bd] [INFO] Server started [INFO] [INFO] --- artemis-maven-plugin:2.6.1.amq-720004-redhat-1:runClient (runClient) @ queue --- Sent message: This is a text message Received message: This is a text message [INFO] [INFO] --- artemis-maven-plugin:2.6.1.amq-720004-redhat-1:cli (stop) @ queue --- server-out:2018-12-05 16:37:59,519 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.1.amq-720004-redhat-1 [06f529d3-f8d6-11e8-9bea-0800271b03bd] stopped, uptime 3.734 seconds server-out:Server stopped! [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 48.681 s [INFO] Finished at: 2018-12-05T16:37:59-05:00 [INFO] ------------------------------------------------------------------------ Note Some of the example programs use UDP clustering, and may not work in your environment by default. To run these examples successfully, redirect traffic directed to 224.0.0.0 to the loopback interface: USD sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev lo | [
"<!-- Configure the JBoss AMQ Maven repository --> <profile> <id>jboss-amq-maven-repository</id> <repositories> <repository> <id>jboss-amq-maven-repository</id> <url>file:// <JBoss-AMQ-repository-path> </url> 1 <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss-amq-maven-repository</id> <url>file:// <JBoss-AMQ-repository-path> </url> 2 <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>",
"<url>file:///path/to/repo/amq-broker-7.2.0-maven-repository/maven-repository</url>",
"<activeProfiles> <activeProfile>jboss-amq-maven-repository</activeProfile> </activeProfiles>",
"cd <install_dir> /examples/features/standard/queue",
"mvn clean verify [INFO] Scanning for projects [INFO] [INFO] -------------< org.apache.activemq.examples.broker:queue >-------------- [INFO] Building ActiveMQ Artemis JMS Queue Example 2.6.1.amq-720004-redhat-1 [INFO] --------------------------------[ jar ]--------------------------------- server-out:2018-12-05 16:37:57,023 INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.6.1.amq-720004-redhat-1 [0.0.0.0, nodeID=06f529d3-f8d6-11e8-9bea-0800271b03bd] [INFO] Server started [INFO] [INFO] --- artemis-maven-plugin:2.6.1.amq-720004-redhat-1:runClient (runClient) @ queue --- Sent message: This is a text message Received message: This is a text message [INFO] [INFO] --- artemis-maven-plugin:2.6.1.amq-720004-redhat-1:cli (stop) @ queue --- server-out:2018-12-05 16:37:59,519 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.1.amq-720004-redhat-1 [06f529d3-f8d6-11e8-9bea-0800271b03bd] stopped, uptime 3.734 seconds server-out:Server stopped! [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 48.681 s [INFO] Finished at: 2018-12-05T16:37:59-05:00 [INFO] ------------------------------------------------------------------------",
"sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev lo"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/getting_started_with_amq_broker/running-broker-examples-getting-started |
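The setup in this chapter can also be scripted. The sketch below unpacks the AMQ Maven repository, writes a throwaway settings file that activates the repository profile, and runs the queue example against it; the archive name, install directory, and repository path are placeholders for your own download.

```bash
#!/bin/bash
# Illustrative locations -- adjust for your environment.
REPO_ZIP=$HOME/Downloads/amq-broker-7.2.0-maven-repository.zip
REPO_DIR=$HOME/amq-repo
INSTALL_DIR=$HOME/amq-broker-7.2.0

# Unpack the AMQ Maven repository.
mkdir -p "$REPO_DIR"
unzip -q "$REPO_ZIP" -d "$REPO_DIR"

# Minimal settings file that activates the offline repository profile.
cat > "$HOME/amq-settings.xml" <<EOF
<settings>
  <profiles>
    <profile>
      <id>jboss-amq-maven-repository</id>
      <repositories>
        <repository>
          <id>jboss-amq-maven-repository</id>
          <url>file://$REPO_DIR/amq-broker-7.2.0-maven-repository/maven-repository</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>jboss-amq-maven-repository</activeProfile>
  </activeProfiles>
</settings>
EOF

# Run the queue example with that settings file instead of ~/.m2/settings.xml.
cd "$INSTALL_DIR/examples/features/standard/queue"
mvn -s "$HOME/amq-settings.xml" clean verify
```

The chapter's full profile also registers a pluginRepository entry, omitted here for brevity; editing the default settings.xml as described above works equally well.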
Chapter 10. Managing bare-metal hosts | Chapter 10. Managing bare-metal hosts When you install OpenShift Container Platform on a bare-metal cluster, you can provision and manage bare-metal nodes by using machine and machineset custom resources (CRs) for bare-metal hosts that exist in the cluster. 10.1. About bare metal hosts and nodes To provision a Red Hat Enterprise Linux CoreOS (RHCOS) bare metal host as a node in your cluster, first create a MachineSet custom resource (CR) object that corresponds to the bare metal host hardware. Bare metal host compute machine sets describe infrastructure components specific to your configuration. You apply specific Kubernetes labels to these compute machine sets and then update the infrastructure components to run on only those machines. Machine CR's are created automatically when you scale up the relevant MachineSet containing a metal3.io/autoscale-to-hosts annotation. OpenShift Container Platform uses Machine CR's to provision the bare metal node that corresponds to the host as specified in the MachineSet CR. 10.2. Maintaining bare metal hosts You can maintain the details of the bare metal hosts in your cluster from the OpenShift Container Platform web console. Navigate to Compute Bare Metal Hosts , and select a task from the Actions drop down menu. Here you can manage items such as BMC details, boot MAC address for the host, enable power management, and so on. You can also review the details of the network interfaces and drives for the host. You can move a bare metal host into maintenance mode. When you move a host into maintenance mode, the scheduler moves all managed workloads off the corresponding bare metal node. No new workloads are scheduled while in maintenance mode. You can deprovision a bare metal host in the web console. Deprovisioning a host does the following actions: Annotates the bare metal host CR with cluster.k8s.io/delete-machine: true Scales down the related compute machine set Note Powering off the host without first moving the daemon set and unmanaged static pods to another node can cause service disruption and loss of data. Additional resources Adding compute machines to bare metal 10.2.1. Adding a bare metal host to the cluster using the web console You can add bare metal hosts to the cluster in the web console. Prerequisites Install an RHCOS cluster on bare metal. Log in as a user with cluster-admin privileges. Procedure In the web console, navigate to Compute Bare Metal Hosts . Select Add Host New with Dialog . Specify a unique name for the new bare metal host. Set the Boot MAC address . Set the Baseboard Management Console (BMC) Address . Enter the user credentials for the host's baseboard management controller (BMC). Select to power on the host after creation, and select Create . Scale up the number of replicas to match the number of available bare metal hosts. Navigate to Compute MachineSets , and increase the number of machine replicas in the cluster by selecting Edit Machine count from the Actions drop-down menu. Note You can also manage the number of bare metal nodes using the oc scale command and the appropriate bare metal compute machine set. 10.2.2. Adding a bare metal host to the cluster using YAML in the web console You can add bare metal hosts to the cluster in the web console using a YAML file that describes the bare metal host. Prerequisites Install a RHCOS compute machine on bare metal infrastructure for use in the cluster. Log in as a user with cluster-admin privileges. 
Create a Secret CR for the bare metal host. Procedure In the web console, navigate to Compute Bare Metal Hosts . Select Add Host New from YAML . Copy and paste the below YAML, modifying the relevant fields with the details of your host: apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 disableCertificateVerification: True 2 bootMACAddress: <host_boot_mac_address> 1 credentialsName must reference a valid Secret CR. The baremetal-operator cannot manage the bare metal host without a valid Secret referenced in the credentialsName . For more information about secrets and how to create them, see Understanding secrets . 2 Setting disableCertificateVerification to true disables TLS host validation between the cluster and the baseboard management controller (BMC). Select Create to save the YAML and create the new bare metal host. Scale up the number of replicas to match the number of available bare metal hosts. Navigate to Compute MachineSets , and increase the number of machines in the cluster by selecting Edit Machine count from the Actions drop-down menu. Note You can also manage the number of bare metal nodes using the oc scale command and the appropriate bare metal compute machine set. 10.2.3. Automatically scaling machines to the number of available bare metal hosts To automatically create the number of Machine objects that matches the number of available BareMetalHost objects, add a metal3.io/autoscale-to-hosts annotation to the MachineSet object. Prerequisites Install RHCOS bare metal compute machines for use in the cluster, and create corresponding BareMetalHost objects. Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Annotate the compute machine set that you want to configure for automatic scaling by adding the metal3.io/autoscale-to-hosts annotation. Replace <machineset> with the name of the compute machine set. USD oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>' Wait for the new scaled machines to start. Note When you use a BareMetalHost object to create a machine in the cluster and labels or selectors are subsequently changed on the BareMetalHost , the BareMetalHost object continues be counted against the MachineSet that the Machine object was created from. 10.2.4. Removing bare metal hosts from the provisioner node In certain circumstances, you might want to temporarily remove bare metal hosts from the provisioner node. For example, during provisioning when a bare metal host reboot is triggered by using the OpenShift Container Platform administration console or as a result of a Machine Config Pool update, OpenShift Container Platform logs into the integrated Dell Remote Access Controller (iDrac) and issues a delete of the job queue. To prevent the management of the number of Machine objects that matches the number of available BareMetalHost objects, add a baremetalhost.metal3.io/detached annotation to the MachineSet object. Note This annotation has an effect for only BareMetalHost objects that are in either Provisioned , ExternallyProvisioned or Ready/Available state. Prerequisites Install RHCOS bare metal compute machines for use in the cluster and create corresponding BareMetalHost objects. Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Annotate the compute machine set that you want to remove from the provisioner node by adding the baremetalhost.metal3.io/detached annotation. USD oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached' Wait for the new machines to start. Note When you use a BareMetalHost object to create a machine in the cluster and labels or selectors are subsequently changed on the BareMetalHost , the BareMetalHost object continues be counted against the MachineSet that the Machine object was created from. In the provisioning use case, remove the annotation after the reboot is complete by using the following command: USD oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached-' Additional resources Expanding the cluster MachineHealthChecks on bare metal 10.2.5. Powering off bare-metal hosts You can power off bare-metal cluster hosts in the web console or by applying a patch in the cluster by using the OpenShift CLI ( oc ). Before you power off a host, you should mark the node as unschedulable and drain all pods and workloads from the node. Prerequisites You have installed a RHCOS compute machine on bare-metal infrastructure for use in the cluster. You have logged in as a user with cluster-admin privileges. You have configured the host to be managed and have added BMC credentials for the cluster host. You can add BMC credentials by applying a Secret custom resource (CR) in the cluster or by logging in to the web console and configuring the bare-metal host to be managed. Procedure In the web console, mark the node that you want to power off as unschedulable. Perform the following steps: Navigate to Nodes and select the node that you want to power off. Expand the Actions menu and select Mark as unschedulable . Manually delete or relocate running pods on the node by adjusting the pod deployments or scaling down workloads on the node to zero. Wait for the drain process to complete. Navigate to Compute Bare Metal Hosts . Expand the Options menu for the bare-metal host that you want to power off, and select Power Off . Select Immediate power off . Alternatively, you can patch the BareMetalHost resource for the host that you want to power off by using oc . Get the name of the managed bare-metal host. Run the following command: USD oc get baremetalhosts -n openshift-machine-api -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.provisioning.state}{"\n"}{end}' Example output master-0.example.com managed master-1.example.com managed master-2.example.com managed worker-0.example.com managed worker-1.example.com managed worker-2.example.com managed Mark the node as unschedulable: USD oc adm cordon <bare_metal_host> 1 1 <bare_metal_host> is the host that you want to shut down, for example, worker-2.example.com . Drain all pods on the node: USD oc adm drain <bare_metal_host> --force=true Pods that are backed by replication controllers are rescheduled to other available nodes in the cluster. Safely power off the bare-metal host. Run the following command: USD oc patch <bare_metal_host> --type json -p '[{"op": "replace", "path": "/spec/online", "value": false}]' After you power on the host, make the node schedulable for workloads. Run the following command: USD oc adm uncordon <bare_metal_host> | [
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 disableCertificateVerification: True 2 bootMACAddress: <host_boot_mac_address>",
"oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>'",
"oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached'",
"oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached-'",
"oc get baremetalhosts -n openshift-machine-api -o jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.status.provisioning.state}{\"\\n\"}{end}'",
"master-0.example.com managed master-1.example.com managed master-2.example.com managed worker-0.example.com managed worker-1.example.com managed worker-2.example.com managed",
"oc adm cordon <bare_metal_host> 1",
"oc adm drain <bare_metal_host> --force=true",
"oc patch <bare_metal_host> --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/online\", \"value\": false}]'",
"oc adm uncordon <bare_metal_host>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/scalability_and_performance/managing-bare-metal-hosts |
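The web-console and CLI steps in this chapter can be combined into a single script. The following sketch registers a host, enables autoscaling, and later drains and powers the host off; the host name, BMC address, MAC address, and machine set name are placeholders.

```bash
#!/bin/bash
# Illustrative values -- replace with your own.
HOST=worker-2
BMC_ADDR=ipmi://192.0.2.10
MACHINESET=my-bare-metal-machineset

# BMC credentials referenced by the BareMetalHost resource.
oc create secret generic "${HOST}-bmc-secret" -n openshift-machine-api \
  --from-literal=username=admin --from-literal=password=changeme

# Register the bare-metal host (mirrors the YAML shown in section 10.2.2).
oc apply -f - <<EOF
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: ${HOST}
  namespace: openshift-machine-api
spec:
  online: true
  bmc:
    address: ${BMC_ADDR}
    credentialsName: ${HOST}-bmc-secret
    disableCertificateVerification: true
  bootMACAddress: 52:54:00:00:00:01
EOF

# Scale machines to match the available hosts (section 10.2.3).
oc annotate machineset "${MACHINESET}" -n openshift-machine-api \
  'metal3.io/autoscale-to-hosts=enabled'

# Later, to power the host off safely (section 10.2.5):
oc adm cordon "${HOST}.example.com"
oc adm drain "${HOST}.example.com" --force=true
oc patch baremetalhost "${HOST}" -n openshift-machine-api --type json \
  -p '[{"op": "replace", "path": "/spec/online", "value": false}]'
```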
Appendix A. Google Cloud Storage configuration | Appendix A. Google Cloud Storage configuration To configure the Block Storage service (cinder) to use Google Cloud Storage as a backup back end, complete the following procedures: Create and download the service account credentials of your Google account: Section A.1, "Creating the GCS credentials file" Section A.2, "Creating cinder-backup-gcs.yaml " Create an environment file to map the Block Storage settings that you require: Section A.3, "Creating the environment file with your Google Cloud settings" Re-deploy the overcloud with the environment file that you created: Section A.4, "Deploying the overcloud" Prerequisites You have the username and password of an account with elevated privileges. You can use the stack user account that is created to deploy the overcloud. For more information, see the Director Installation and Usage guide. You have a Google account with access to Google Cloud Platform. The Block Storage service uses this account to access and use Google Cloud to store backups. A.1. Creating the GCS credentials file The Block Storage service (cinder) requires your Google credentials to access and use Google Cloud for backups. You can provide these credentials to the Block Storage service by creating a service account key. Procedure Log in to the Google developer console ( http://console.developers.google.com ) with your Google account. Click the Credentials tab and select Service account key from the Create credentials drop-down menu. In the Create service account key screen, select the service account that you want the Block Storage service to use from the Service account drop-down menu: In the same screen, select JSON from the Key type section and click Create . The browser will then download the key to its default download location: Open the file and note the value of the project_id parameter: Save a copy of the GCS JSON credentials to /home/stack/templates/Cloud-Backup.json Important Name the file Cloud-Backup.json and do not change the file name. This JSON file must be in the same directory location as the cinder-backup-gcs.yaml file that you create as part of the procedure in Section A.2, "Creating cinder-backup-gcs.yaml " . A.2. Creating cinder-backup-gcs.yaml Using the example file provided, create the cinder-backup-gcs.yaml file. Note The white space and format used in this example (and in your file) are critical. If the white space is changed, then the file might not function as expected. Procedure Copy the text below and paste it into a new file. Do not make any modifications to the file contents. Save the file as /home/stack/templates/cinder-backup-gcs.yaml . A.3. Creating the environment file with your Google Cloud settings Create the environment file to contain the settings that you want to apply to the Block Storage service (cinder). In this case, the environment file configures the Block Storage service to store volume backups to Google Cloud. For more information about environment files, see the Director Installation and Usage guide. Use the following example environment file and update the backup_gcs_project_id with the project ID that is listed in the Cloud-Backup.json file. You can also change the backup_gcs_bucket_location value from US to a location that is closer to you. For a list of configuration options for the Google Cloud Backup Storage backup back end, see Table A.1, "Google Cloud Storage backup back end configuration options" . Procedure Copy the environment file example below. 
Retain the white space usage. Paste the content into a new file: /home/stack/templates/cinder-backup-settings.yaml Change the value for backup_gcs_project_id from cloud-backup-1370 to the project ID listed in the Cloud-Backup.json file. Save the file. Environment file example Define each setting in the environment file. Use Table A.1, "Google Cloud Storage backup back end configuration options" to select the available configuration options. Table A.1. Google Cloud Storage backup back end configuration options PARAM Default CONFIG Description backup_gcs_project_id Required. The project ID of the service account that you are using and that is included in the project_id of the service account key from Section A.1, "Creating the GCS credentials file" . backup_gcs_credential_file The absolute path to the service account key file that you created in Section A.1, "Creating the GCS credentials file" . backup_gcs_bucket The GCS bucket, or object storage repository, that you want to use, which might or might not exist. If you specify a non-existent bucket, the Google Cloud Storage backup driver creates one and assigns it the name that you specify here. For more information, see Buckets and Bucket name requirements . backup_gcs_bucket_location us The location of the GCS bucket. This value is used only if you specify a non-existent bucket in backup_gcs_bucket ; in which case, the Google Cloud Storage backup driver specifies this as the GCS bucket location. backup_gcs_object_size 52428800 The size, in bytes, of GCS backup objects. backup_gcs_block_size 32768 The size, in bytes, that changes are tracked for incremental backups. This value must be a multiple of the backup_gcs_object_size value. backup_gcs_user_agent gcscinder The HTTP user-agent string for the GCS API. backup_gcs_reader_chunk_size 2097152 GCS objects are downloaded in chunks of this size, in bytes. backup_gcs_writer_chunk_size 2097152 GCS objects are uploaded in chunks of this size, in bytes. To upload files as a single chunk instead, use the value -1. backup_gcs_num_retries 3 Number of retries to attempt. backup_gcs_storage_class NEARLINE Storage class of the GCS bucket. This value is used only if you specify a non-existent bucket in backup_gcs_bucket ; in which case, the Google Cloud Storage backup driver specifies this as the GCS bucket storage class. For more information, see Storage Classes . backup_gcs_retry_error_codes 429 List of GCS error codes. backup_gcs_enable_progress_timer True Boolean to enable or disable the timer for sending periodic progress notifications to the Telemetry service (ceilometer) during volume backups. This is enabled by default (True). Warning When you create new buckets, Google Cloud Storage charges based on the storage class that you choose ( backup_gcs_storage_class ). The default NEARLINE class is appropriate for backup services. Warning You cannot edit the location or class of a bucket after you create it. For more information, see Managing a bucket's storage class or location . A.4. Deploying the overcloud When you have created the environment file file in /home/stack/templates/ , deploy the overcloud then restart the cinder-backup service: Procedure Log in as the stack user. Deploy the configuration: Important If you passed any extra environment files when you created the overcloud, pass them again here using the -e option to avoid making undesired changes to the overcloud. Restart the cinder-backup service after the deployment finishes. 
For more information, see the Including Environment Files in Overcloud Creation in the Director Installation and Usage Guide and the Environment Files section of the Advanced Overcloud Customization Guide . | [
"{ \"type\": \"service_account\", \"project_id\": \"*cloud-backup-1370*\",",
"heat_template_version: rocky description: > Post-deployment for configuration cinder-backup to GCS parameters: servers: type: json DeployIdentifier: type: string resources: CinderBackupGcsExtraConfig: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: | #!/bin/bash GCS_FILE=/var/lib/config-data/puppet-generated/cinder/etc/cinder/Cloud-Backup.json HOSTNAME=USD(hostname -s) for NODE in USD(hiera -c /etc/puppet/hiera.yaml cinder_backup_short_node_names | tr -d '[]\",'); do if [ USDNODE == USDHOSTNAME ]; then cat <<EOF > USDGCS_FILE GCS_JSON_DATA EOF chmod 0640 USDGCS_FILE chown root:42407 USDGCS_FILE fi done params: GCS_JSON_DATA: {get_file: Cloud-Backup.json} CinderBackupGcsDeployment: type: OS::Heat::SoftwareDeploymentGroup properties: servers: {get_param: servers} config: {get_resource: CinderBackupGcsExtraConfig} actions: ['CREATE','UPDATE'] input_values: deploy_identifier: {get_param: DeployIdentifier}",
"resource_registry: OS::TripleO::Services::CinderBackup: /usr/share/openstack-tripleo-heat-templates/deployment/cinder/cinder-backup-pacemaker-puppet.yaml # For non-pcmk managed implementation # OS::TripleO::Services::CinderBackup: /usr/share/openstack-tripleo-heat-templates/deployment/cinder/cinder-backup-container-puppet.yaml OS::TripleO::NodeExtraConfigPost: /home/stack/templates/cinder-backup-gcs.yaml parameter_defaults: CinderBackupBackend: swift ExtraConfig: cinder::backup::swift::backup_driver: cinder.backup.drivers.gcs.GoogleBackupDriver cinder::config::cinder_config: DEFAULT/backup_gcs_credential_file: value: /etc/cinder/Cloud-Backup.json DEFAULT/backup_gcs_project_id: value: cloud-backup-1370 DEFAULT/backup_gcs_bucket: value: cinder-backup-gcs DEFAULT/backup_gcs_bucket_location: value: us",
"openstack overcloud deploy --templates -e /home/stack/templates/cinder-backup-settings.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/block_storage_backup_guide/google-cloud-storage |
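The project ID substitution and redeployment described in sections A.3 and A.4 can be automated with a short script such as the one below. It assumes the jq tool is installed on the undercloud and that the file names match those used in this appendix; pass any additional environment files from your original deployment as well.

```bash
#!/bin/bash
set -e
TEMPLATES=/home/stack/templates
CREDS=$TEMPLATES/Cloud-Backup.json
ENV_FILE=$TEMPLATES/cinder-backup-settings.yaml

# Read the project ID from the downloaded service account key.
PROJECT_ID=$(jq -r .project_id "$CREDS")
echo "Using GCS project: $PROJECT_ID"

# Replace the example project ID in the environment file.
sed -i "s/cloud-backup-1370/$PROJECT_ID/" "$ENV_FILE"

# Re-deploy the overcloud with the backup settings applied.
source /home/stack/stackrc
openstack overcloud deploy --templates -e "$ENV_FILE"
```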
Chapter 114. Scheduler | Chapter 114. Scheduler Only consumer is supported The Scheduler component is used to generate message exchanges when a scheduler fires. This component is similar to the Timer component, but it offers more functionality in terms of scheduling. Also this component uses JDK ScheduledExecutorService . Where as the timer uses a JDK Timer . You can only consume events from this endpoint. 114.1. Dependencies When using scheduler with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-scheduler-starter</artifactId> </dependency> 114.2. URI format Where name is the name of the scheduler, which is created and shared across endpoints. So if you use the same name for all your scheduler endpoints, only one scheduler thread pool and thread will be used - but you can configure the thread pool to allow more concurrent threads. Note The IN body of the generated exchange is null . So exchange.getIn().getBody() returns null . 114.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 114.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 114.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 114.4. Component Options The Scheduler component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true boolean poolSize (scheduler) Number of core threads in the thread pool used by the scheduling thread pool. Is by default using a single thread. 1 int 114.5. Endpoint Options The Scheduler endpoint is configured using URI syntax: with the following path and query parameters: 114.5.1. Path Parameters (1 parameters) Name Description Default Type name (consumer) Required The name of the scheduler. String 114.5.2. Query Parameters (21 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long poolSize (scheduler) Number of core threads in the thread pool used by the scheduling thread pool. Is by default using a single thread. 1 int repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. 
Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 114.6. More information This component is a scheduler Polling Consumer where you can find more information about the options above, and examples at the Polling Consumer page. 114.7. Exchange Properties When the timer is fired, it adds the following information as properties to the Exchange : Name Type Description Exchange.TIMER_NAME String The value of the name option. Exchange.TIMER_FIRED_TIME Date The time when the consumer fired. 114.8. Sample To set up a route that generates an event every 60 seconds: from("scheduler://foo?delay=60000").to("bean:myBean?method=someMethodName"); The above route will generate an event and then invoke the someMethodName method on the bean called myBean in the Registry such as JNDI or Spring. And the route in Spring DSL: <route> <from uri="scheduler://foo?delay=60000"/> <to uri="bean:myBean?method=someMethodName"/> </route> 114.9. Forcing the scheduler to trigger immediately when completed To let the scheduler trigger as soon as the task is complete, you can set the option greedy=true . But beware then the scheduler will keep firing all the time. So use this with caution. 114.10. Forcing the scheduler to be idle There can be use cases where you want the scheduler to trigger and be greedy. But sometimes you want "tell the scheduler" that there was no task to poll, so the scheduler can change into idle mode using the backoff options. To do this you would need to set a property on the exchange with the key Exchange.SCHEDULER_POLLED_MESSAGES to a boolean value of false. This will cause the consumer to indicate that there was no messages polled. The consumer will otherwise as by default return 1 message polled to the scheduler, every time the consumer has completed processing the exchange. 114.11. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.scheduler.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.scheduler.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.scheduler.enabled Whether to enable auto configuration of the scheduler component. This is enabled by default. Boolean camel.component.scheduler.pool-size Number of core threads in the thread pool used by the scheduling thread pool. Is by default using a single thread. 1 Integer | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-scheduler-starter</artifactId> </dependency>",
"scheduler:name[?options]",
"scheduler:name",
"from(\"scheduler://foo?delay=60000\").to(\"bean:myBean?method=someMethodName\");",
"<route> <from uri=\"scheduler://foo?delay=60000\"/> <to uri=\"bean:myBean?method=someMethodName\"/> </route>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-scheduler-component-starter |
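With the camel-scheduler-starter dependency from section 114.1 on the classpath, the component can be exercised from a Spring Boot project by setting the auto-configuration properties listed above and dropping in the XML route from section 114.8. The script below writes both files; the project layout is an assumption, and loading routes from classpath:camel/ relies on the default routes-include-pattern.

```bash
# Scheduler auto-configuration: enable the component and allow
# up to 5 concurrent scheduling threads.
cat >> src/main/resources/application.properties <<'EOF'
camel.component.scheduler.enabled=true
camel.component.scheduler.pool-size=5
EOF

# XML route that fires every 60 seconds, as shown in section 114.8.
mkdir -p src/main/resources/camel
cat > src/main/resources/camel/scheduler-route.xml <<'EOF'
<routes xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="scheduler://foo?delay=60000"/>
    <to uri="bean:myBean?method=someMethodName"/>
  </route>
</routes>
EOF
```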
20.16.9.6. Direct attachment to physical interfaces | 20.16.9.6. Direct attachment to physical interfaces Using <interface type='direct'> attaches a virtual machine's NIC to a specified physical interface on the host. This set up requires the Linux macvtap driver to be available. One of the following modes can be chosen for the operation mode of the macvtap device: vepa ( 'Virtual Ethernet Port Aggregator'), which is the default mode, bridge or private . To set up direct attachment to physical interface, use the following parameters in the domain XML: ... <devices> ... <interface type='direct'> <source dev='eth0' mode='vepa'/> </interface> </devices> ... Figure 20.41. Devices - network interfaces- direct attachment to physical interfaces The individual modes cause the delivery of packets to behave as shown in Table 20.17, "Direct attachment to physical interface elements" : Table 20.17. Direct attachment to physical interface elements Element Description vepa All of the guest virtual machines' packets are sent to the external bridge. Packets whose destination is a guest virtual machine on the same host physical machine as where the packet originates from are sent back to the host physical machine by the VEPA capable bridge (today's bridges are typically not VEPA capable). bridge Packets whose destination is on the same host physical machine as where they originate from are directly delivered to the target macvtap device. Both origin and destination devices need to be in bridge mode for direct delivery. If either one of them is in vepa mode, a VEPA capable bridge is required. private All packets are sent to the external bridge and will only be delivered to a target VM on the same host physical machine if they are sent through an external router or gateway and that device sends them back to the host physical machine. This procedure is followed if either the source or destination device is in private mode. passthrough This feature attaches a virtual function of a SRIOV capable NIC directly to a guest virtual machine without losing the migration capability. All packets are sent to the VF/IF of the configured network device. Depending on the capabilities of the device additional prerequisites or limitations may apply; for example, this requires kernel 2.6.38 or newer. The network access of direct attached virtual machines can be managed by the hardware switch to which the physical interface of the host physical machine machine is connected to. The interface can have additional parameters as shown below, if the switch is conforming to the IEEE 802.1Qbg standard. The parameters of the virtualport element are documented in more detail in the IEEE 802.1Qbg standard. The values are network specific and should be provided by the network administrator. In 802.1Qbg terms, the Virtual Station Interface (VSI) represents the virtual interface of a virtual machine. Note that IEEE 802.1Qbg requires a non-zero value for the VLAN ID. Additional elements that can be manipulated are described in Table 20.18, "Direct attachment to physical interface additional elements" : Table 20.18. Direct attachment to physical interface additional elements Element Description managerid The VSI Manager ID identifies the database containing the VSI type and instance definitions. This is an integer value and the value 0 is reserved. typeid The VSI Type ID identifies a VSI type characterizing the network access. VSI types are typically managed by network administrator. This is an integer value. 
typeidversion The VSI Type Version allows multiple versions of a VSI Type. This is an integer value. instanceid The VSI Instance ID Identifier is generated when a VSI instance (that is a virtual interface of a virtual machine) is created. This is a globally unique identifier. profileid The profile ID contains the name of the port profile that is to be applied onto this interface. This name is resolved by the port profile database into the network parameters from the port profile, and those network parameters will be applied to this interface. Additional parameters in the domain XML include: ... <devices> ... <interface type='direct'> <source dev='eth0.2' mode='vepa'/> <virtualport type="802.1Qbg"> <parameters managerid="11" typeid="1193047" typeidversion="2" instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/> </virtualport> </interface> </devices> ... Figure 20.42. Devices - network interfaces- direct attachment to physical interfaces additional parameters The interface can have additional parameters as shown below if the switch is conforming to the IEEE 802.1Qbh standard. The values are network specific and should be provided by the network administrator. Additional parameters in the domain XML include: ... <devices> ... <interface type='direct'> <source dev='eth0' mode='private'/> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices> ... Figure 20.43. Devices - network interfaces- direct attachment to physical interfaces more additional parameters The profileid attribute, contains the name of the port profile that is to be applied to this interface. This name is resolved by the port profile database into the network parameters from the port profile, and those network parameters will be applied to this interface. | [
"<devices> <interface type='direct'> <source dev='eth0' mode='vepa'/> </interface> </devices>",
"<devices> <interface type='direct'> <source dev='eth0.2' mode='vepa'/> <virtualport type=\"802.1Qbg\"> <parameters managerid=\"11\" typeid=\"1193047\" typeidversion=\"2\" instanceid=\"09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f\"/> </virtualport> </interface> </devices>",
"<devices> <interface type='direct'> <source dev='eth0' mode='private'/> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-section-libvirt-dom-xml-devices-Network-interfaces-direct-attachment-to-physical-device |
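To apply one of the interface definitions above to an existing guest without editing the full domain XML, the fragment can be supplied to virsh as a standalone device file. The domain name and physical interface below are placeholders.

```bash
# Direct-attachment interface in vepa mode, as in Figure 20.41.
cat > direct-iface.xml <<'EOF'
<interface type='direct'>
  <source dev='eth0' mode='vepa'/>
  <model type='virtio'/>
</interface>
EOF

# Attach it to the guest. --config persists the change for the next boot;
# add --live to hot-plug it into a running guest as well.
virsh attach-device rhel6-guest direct-iface.xml --config
```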
8.60. flex | 8.60. flex 8.60.1. RHBA-2014:1402 - flex bug fix update Updated flex packages that fix one bug are now available for Red Hat Enterprise Linux 6. The flex packages provide a utility for generating scanners. The scanners are programs that can recognize lexical patterns in text. Bug Fix BZ# 570661 Previously, the flex static libraries for 32-bit and 64-bit architectures were included in the same package. Consequently, an attempt to compile an i386 code on an x86_64 system failed unless the 64-bit version of the flex utility had been removed. With this update, the libraries have been moved to separate packages, and flex works as expected in the described scenario. Users of flex are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/flex |
10.3. Installing Drivers during the Windows Installation | 10.3. Installing Drivers during the Windows Installation This procedure covers installing the virtio drivers during a Windows installation. This method allows a Windows guest virtual machine to use the virtio drivers for the default storage device. Procedure 10.3. Installing virtio drivers during the Windows installation Install the virtio-win package Use the following command to install the virtio-win package: Create the guest virtual machine Important Create the virtual machine, as normal, without starting the virtual machine. Follow one of the procedures below. Select one of the following guest-creation methods, and follow the instructions. Create the guest virtual machine with virsh This method attaches the virtio driver floppy disk to a Windows guest before the installation. If the virtual machine is created from an XML definition file with virsh , use the virsh define command not the virsh create command. Create, but do not start, the virtual machine. Refer to the Red Hat Enterprise Linux Virtualization Administration Guide for details on creating virtual machines with the virsh command. Add the driver disk as a virtualized floppy disk with the virsh command. This example can be copied and used if there are no other virtualized floppy devices attached to the guest virtual machine. Note that vm_name should be replaced with the name of the virtual machine. You can now continue with Step 3 . Create the guest virtual machine with virt-manager and changing the disk type At the final step of the virt-manager guest creation wizard, check the Customize configuration before install check box. Figure 10.16. The virt-manager guest creation wizard Click on the Finish button to continue. Open the Add Hardware wizard Click the Add Hardware button in the bottom left of the new panel. Select storage device Storage is the default selection in the Hardware type list. Figure 10.17. The Add new virtual hardware wizard Ensure the Select managed or other existing storage radio button is selected. Click Browse... . Figure 10.18. Select managed or existing storage In the new window that opens, click Browse Local . Navigate to /usr/share/virtio-win/virtio-win.vfd , and click Select to confirm. Change Device type to Floppy disk , and click Finish to continue. Figure 10.19. Change the Device type Confirm settings Review the device settings. Figure 10.20. The virtual machine hardware information window You have now created a removable device accessible by your virtual machine. Change the hard disk type To change the hard disk type from IDE Disk to Virtio Disk , we must first remove the existing hard disk, Disk 1. Select the disk and click on the Remove button. Figure 10.21. The virtual machine hardware information window Add a new virtual storage device by clicking Add Hardware . Then, change the Device type from IDE disk to Virtio Disk . Click Finish to confirm the operation. Figure 10.22. The virtual machine hardware information window Ensure settings are correct Review the settings for VirtIO Disk 1 . Figure 10.23. The virtual machine hardware information window When you are satisfied with the configuration details, click the Begin Installation button. You can now continue with Step 3 . 
Create the guest virtual machine with virt-install Append the following parameter exactly as listed below to add the driver disk to the installation with the virt-install command: Important If the device you wish to add is a disk (that is, not a floppy or a cdrom ), you will also need to add the bus=virtio option to the end of the --disk parameter, like so: Depending on the version of Windows you are installing, append one of the following options to the virt-install command: A combined invocation is sketched after the command list below. You can now continue with Step 3 . Additional steps for driver installation During the installation, additional steps are required to install drivers, depending on the type of Windows guest. Windows Server 2003 At the start of the installation, when the blue setup screen appears, repeatedly press F6 to load third-party drivers. Figure 10.24. The Windows Setup screen Press S to install additional device drivers. Figure 10.25. The Windows Setup screen Figure 10.26. The Windows Setup screen Press Enter to continue the installation. Windows Server 2008 Follow the same procedure as for Windows Server 2003, but when the installer prompts you for the driver, click Load Driver , point the installer to Drive A: and pick the driver that suits your guest operating system and architecture. | [
"yum install virtio-win",
"virsh attach-disk vm_name /usr/share/virtio-win/virtio-win.vfd fda --type floppy",
"--disk path=/usr/share/virtio-win/virtio-win.vfd,device=floppy",
"--disk path=/usr/share/virtio-win/virtio-win.vfd,device=disk,bus=virtio",
"--os-variant win2k3",
"--os-variant win7"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/form-virtualization_host_configuration_and_guest_installation_guide-para_virtualized_drivers-installing_with_a_virtualized_floppy_disk |
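A combined invocation that ties the options above together can be a useful reference. The following sketch is hypothetical and not part of the original procedure: the guest name, memory size, installation ISO path, and system disk path and size are placeholders; only the two --disk options and the --os-variant value come from the command list above.
# Hypothetical combined virt-install invocation for a Windows Server 2003 guest.
# Guest name, memory, ISO path, and system disk path/size are placeholders.
virt-install \
  --name win2k3-guest \
  --ram 2048 \
  --cdrom /var/lib/libvirt/images/win2k3-install.iso \
  --disk path=/var/lib/libvirt/images/win2k3.img,size=20,bus=virtio \
  --disk path=/usr/share/virtio-win/virtio-win.vfd,device=floppy \
  --os-variant win2k3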
Installing on Alibaba Cloud | Installing on Alibaba Cloud OpenShift Container Platform 4.16 Installing OpenShift Container Platform on Alibaba Cloud Red Hat OpenShift Documentation Team | [
"grep -e lm -e svm -e vmx /proc/cpuinfo",
"sudo dnf install -y qemu-img",
"qemu-img convert -O qcow2 USD{CLUSTER_NAME}.iso USD{CLUSTER_NAME}.qcow2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/installing_on_alibaba_cloud/index |
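As a quick, optional sanity check after the conversion shown in the command list above (not part of the original procedure), qemu-img can report the format and virtual size of the resulting image; the file name simply mirrors the ${CLUSTER_NAME} variable used there.
# Confirm the output is a qcow2 image of the expected size.
qemu-img info "${CLUSTER_NAME}.qcow2"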
Chapter 4. Using Kafka in KRaft mode | Chapter 4. Using Kafka in KRaft mode KRaft (Kafka Raft metadata) mode replaces Kafka's dependency on ZooKeeper for cluster management. KRaft mode simplifies the deployment and management of Kafka clusters by bringing metadata management and coordination of clusters into Kafka. Kafka in KRaft mode is designed to offer enhanced reliability, scalability, and throughput. Metadata operations become more efficient as they are directly integrated. And by removing the need to maintain a ZooKeeper cluster, there's also a reduction in the operational and security overhead. Through Kafka configuration, nodes are assigned the role of broker, controller, or both: Controller nodes operate in the control plane to manage cluster metadata and the state of the cluster using a Raft-based consensus protocol. Broker nodes operate in the data plane to manage the streaming of messages, receiving and storing data in topic partitions. Dual-role nodes fulfill the responsibilities of controllers and brokers. You can use a dynamic or static quorum of controllers. Dynamic is recommended as it supports dynamic scaling . Controllers use a metadata log, stored as a single-partition topic ( __cluster_metadata ) on every node, which records the state of the cluster. When requests are made to change the cluster configuration, an active (lead) controller manages updates to the metadata log, and follower controllers replicate these updates. The metadata log stores information on brokers, replicas, topics, and partitions, including the state of in-sync replicas and partition leadership. Kafka uses this metadata to coordinate changes and manage the cluster effectively. Broker nodes act as observers, storing the metadata log passively to stay up-to-date with the cluster's state. Each node fetches updates to the log independently. If you are using JBOD storage, you can change the directory that stores the metadata log . Note The KRaft metadata version used in the Kafka cluster must be supported by the Kafka version in use. In the following example, a Kafka cluster comprises a quorum of controller and broker nodes for fault tolerance and high availability. Figure 4.1. Example cluster with separate broker and controller nodes In a typical production environment, use dedicated broker and controller nodes. However, you might want to use nodes in a dual-role configuration for development or testing. You can use a combination of nodes that combine roles with nodes that perform a single role. In the following example, three nodes perform a dual role and two nodes act only as brokers. Figure 4.2. Example cluster with dual-role nodes and dedicated broker nodes 4.1. Migrating to KRaft mode If you are using ZooKeeper for metadata management of your Kafka cluster, you can migrate to using Kafka in KRaft mode. KRaft mode replaces ZooKeeper for distributed coordination, offering enhanced reliability, scalability, and throughput. To migrate your cluster, do as follows: Install a quorum of controller nodes to replace ZooKeeper for cluster management. Enable KRaft migration in the controller configuration by setting the zookeeper.metadata.migration.enable property to true . Start the controllers and enable KRaft migration on the current cluster brokers using the same configuration property. Perform a rolling restart of the brokers to apply the configuration changes. When migration is complete, switch the brokers to KRaft mode and disable migration on the controllers. 
Important Once KRaft mode has been finalized, rollback to ZooKeeper is not possible. Carefully consider this before proceeding with the migration. Before starting the migration, verify that your environment can support Kafka in KRaft mode: Migration is only supported on dedicated controller nodes, not on nodes with dual roles as brokers and controllers. Throughout the migration process, ZooKeeper and KRaft controller nodes operate in parallel, requiring sufficient compute resources in your cluster. Prerequisites You are logged in to Red Hat Enterprise Linux as the Kafka user. Streams for Apache Kafka is installed on each host , and the configuration files are available. You are using Streams for Apache Kafka 2.7 or newer with Kafka 3.7.0 or newer. If you are using an earlier version of Streams for Apache Kafka, upgrade before migrating to KRaft mode. Logging is enabled to check the migration process. Set DEBUG level in log4j.properties for the root logger on the controllers and brokers in the cluster. For detailed migration-specific logs, set TRACE for the migration logger: Controller logging configuration log4j.rootLogger=DEBUG log4j.logger.org.apache.kafka.metadata.migration=TRACE Procedure Retrieve the cluster ID of your Kafka cluster. Use the zookeeper-shell tool: ./bin/zookeeper-shell.sh localhost:2181 get /cluster/id The command returns the cluster ID. Install a KRaft controller quorum to the cluster. Configure a controller node on each host using the controller.properties file. At a minimum, each controller requires the following configuration: A unique node ID The migration enabled flag set to true ZooKeeper connection details Listener name used by the controller quorum A quorum of controllers (dynamic is recommended) Listener name for inter-broker communication Example controller configuration process.roles=controller node.id=1 zookeeper.metadata.migration.enable=true zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.bootstrap.servers=localhost:9090 inter.broker.listener.name=PLAINTEXT The format for the controller quorum is <node_id>@<hostname>:<port> in a comma-separated list. The inter-broker listener name is required for the KRaft controller to initiate the migration. Set up log directories for each controller node: ./bin/kafka-storage.sh format -t <uuid> -c ./config/kraft/controller.properties Returns: Formatting /tmp/kraft-controller-logs Replace <uuid> with the cluster ID you retrieved. Use the same cluster ID for each controller node in your cluster. By default, the log directory ( log.dirs ) specified in the controller.properties configuration file is set to /tmp/kraft-controller-logs . The /tmp directory is typically cleared on each system reboot, making it suitable for development environments only. Set multiple log directories using a comma-separated list, if needed. Start each controller. ./bin/kafka-server-start.sh -daemon ./config/kraft/controller.properties Check that Kafka is running: jcmd | grep kafka Returns: process ID kafka.Kafka ./config/kraft/controller.properties Check the logs of each controller to ensure that they have successfully joined the KRaft cluster: tail -f ./logs/controller.log Enable migration on each broker. If running, stop the Kafka broker running on the host. 
./bin/kafka-server-stop.sh jcmd | grep kafka If using a multi-node cluster, refer to Section 3.7, "Performing a graceful rolling restart of Kafka brokers" . Enable migration using the server.properties file. At a minimum, each broker requires the following additional configuration: Inter-broker protocol version set to version 3.9 The migration enabled flag Controller configuration that matches the controller nodes A quorum of controllers Example broker configuration broker.id=0 inter.broker.protocol.version=3.9 zookeeper.metadata.migration.enable=true zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.bootstrap.servers=localhost:9090 The ZooKeeper connection details should already be present. Restart the updated broker: ./bin/kafka-server-start.sh -daemon ./config/kraft/server.properties The migration starts automatically and can take some time depending on the number of topics and partitions in the cluster. Check that Kafka is running: jcmd | grep kafka Returns: process ID kafka.Kafka ./config/kraft/server.properties Check the log on the active controller to confirm that the migration is complete: ./bin/zookeeper-shell.sh localhost:2181 get /controller Look for an INFO log entry that says the following: Completed migration of metadata from ZooKeeper to KRaft. Switch each broker to KRaft mode. Stop the broker, as before. Update the broker configuration in the server.properties file: Replace the broker.id with a node.id using the same ID Add a broker KRaft role for the broker Remove the inter-broker protocol version ( inter.broker.protocol.version ) Remove the migration enabled flag ( zookeeper.metadata.migration.enable ) Remove ZooKeeper configuration Remove the listener for controller and broker communication ( control.plane.listener.name ) Example broker configuration for KRaft node.id=0 process.roles=broker listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.bootstrap.servers=localhost:9090 If you are using ACLS in your broker configuration, update the authorizer using the authorizer.class.name property to the KRaft-based standard authorizer. ZooKeeper-based brokers use authorizer.class.name=kafka.security.authorizer.AclAuthorizer . When migrating to KRaft-based brokers, specify authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer . Restart the broker, as before. Switch each controller out of migration mode. Stop the controller in the same way as the broker, as described previously. Update the controller configuration in the controller.properties file: Remove the ZooKeeper connection details Remove the zookeeper.metadata.migration.enable property Remove inter.broker.listener.name Example controller configuration following migration process.roles=controller node.id=1 listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.bootstrap.servers=localhost:9090 Restart the controller in the same way as the broker, as described previously. | [
"log4j.rootLogger=DEBUG log4j.logger.org.apache.kafka.metadata.migration=TRACE",
"./bin/zookeeper-shell.sh localhost:2181 get /cluster/id",
"process.roles=controller node.id=1 zookeeper.metadata.migration.enable=true zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.bootstrap.servers=localhost:9090 inter.broker.listener.name=PLAINTEXT",
"./bin/kafka-storage.sh format -t <uuid> -c ./config/kraft/controller.properties",
"Formatting /tmp/kraft-controller-logs",
"./bin/kafka-server-start.sh -daemon ./config/kraft/controller.properties",
"jcmd | grep kafka",
"process ID kafka.Kafka ./config/kraft/controller.properties",
"tail -f ./logs/controller.log",
"./bin/kafka-server-stop.sh jcmd | grep kafka",
"broker.id=0 inter.broker.protocol.version=3.9 zookeeper.metadata.migration.enable=true zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.bootstrap.servers=localhost:9090",
"./bin/kafka-server-start.sh -daemon ./config/kraft/server.properties",
"jcmd | grep kafka",
"process ID kafka.Kafka ./config/kraft/server.properties",
"./bin/zookeeper-shell.sh localhost:2181 get /controller",
"node.id=0 process.roles=broker listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.bootstrap.servers=localhost:9090",
"process.roles=controller node.id=1 listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT controller.quorum.bootstrap.servers=localhost:9090"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/assembly-kraft-mode-str |
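Once the brokers and controllers are running in KRaft mode, it can be helpful to confirm that the controller quorum is healthy. The following check is a hypothetical addition, not part of the documented migration steps; the kafka-metadata-quorum.sh tool ships with recent Kafka releases, and the bootstrap address is a placeholder.
# Query the KRaft quorum state from any broker after the migration is finalized.
./bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status
# List the current voters and observers (brokers) in the quorum.
./bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --replication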
Chapter 46. Networking | Chapter 46. Networking Cisco usNIC driver Cisco Unified Communication Manager (UCM) servers have an optional feature to provide a Cisco proprietary User Space Network Interface Controller (usNIC), which allows performing Remote Direct Memory Access (RDMA)-like operations for user-space applications. The libusnic_verbs driver, which is available as a Technology Preview, makes it possible to use usNIC devices via standard InfiniBand RDMA programming based on the Verbs API. (BZ#916384) Cisco VIC kernel driver The Cisco VIC Infiniband kernel driver, which is available as a Technology Preview, allows the use of Remote Directory Memory Access (RDMA)-like semantics on proprietary Cisco architectures. (BZ#916382) Trusted Network Connect Trusted Network Connect, available as a Technology Preview, is used with existing network access control (NAC) solutions, such as TLS, 802.1X, or IPsec to integrate endpoint posture assessment; that is, collecting an endpoint's system information (such as operating system configuration settings, installed packages, and others, termed as integrity measurements). Trusted Network Connect is used to verify these measurements against network access policies before allowing the endpoint to access the network. (BZ#755087) SR-IOV functionality in the qlcnic driver Support for Single-Root I/O virtualization (SR-IOV) has been added to the qlcnic driver as a Technology Preview. Support for this functionality will be provided directly by QLogic, and customers are encouraged to provide feedback to QLogic and Red Hat. Other functionality in the qlcnic driver remains fully supported. (BZ#1259547) The libnftnl and nftables packages The nftables and libnftl packages are available as a Technology Preview since Red Hat Enterprise Linux 7.3. The nftables packages provide a packet-filtering tool, with numerous improvements in convenience, features, and performance over packet-filtering tools. It is the designated successor to the iptables , ip6tables , arptables , and ebtables utilities. The libnftnl packages provide a library for low-level interaction with nftables Netlink's API over the libmnl library. (BZ#1332585) The flower classifier with off-loading support flower is a Traffic Control (TC) classifier intended to allow users to configure matching on well-known packet fields for various protocols. It is intended to make it easier to configure rules over the u32 classifier for complex filtering and classification tasks. flower also supports the ability to off-load classification and action rules to underlying hardware if the hardware supports it. The flower TC classifier is now provided as a Technology Preview. (BZ#1393375) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/technology_previews_networking |
4.3.4. Scripting the v2v process | 4.3.4. Scripting the v2v process The entire v2v process can be scripted, enabling the automated batch processing of a large number of virtual machines. The process is broken up into two steps, which must be run on separate hosts. Procedure 4.8. Scripting the v2v process Use virt-v2v to convert the virtual machines and copy them to the export storage domain. This step must be run on a Linux host. The process is detailed in Section 4.3.2, "Converting a virtual machine" . Once the conversion is complete, use the Red Hat Enterprise Virtualization Administration Portal to import the virtual machines from the export storage domain. This step must be run on the Red Hat Enterprise Virtualization Manager server. For more information on importing a virtual machine with the Red Hat Enterprise Virtualization Administration Portal, see the Red Hat Enterprise Virtualization Administration Guide . Figure 4.4. Importing a virtual machine with the Red Hat Enterprise Virtualization Administration Portal Alternatively, the Python SDK or the command line can also be used to import the virtual machines from the export storage domain: To import the virtual machines using the SDK, use the following: Example 4.3. Importing virtual machines from the export storage domain using the SDK Note When using the SDK method, entities can also be fetched and passed using name= . To import the virtual machines using the command line, connect to the Red Hat Enterprise Virtualization Manager shell and use the following command: Example 4.4. Importing virtual machines from the export storage domain using the command line Note When using the command line method, entities can also be fetched and passed using -name . | [
"api = API(url=\"http(s)://...:.../api\", username=\"...\", password=\"...\", filter=False, debug=True) sd = api.storagedomains.get(id=\"from-sd-id\") import_candidate = sd.vms.get(id=\"vm-to-import\") import_candidate.import_vm(action=params.Action( cluster=api.clusters.get(id=\"to-cluster-id\"), storage_domain=api.storagedomains.get(id=\"to-sd-id\")))",
"action vm \"vm-to-import\" import_vm --storagedomain-identifier \"from-sd-id\" --cluster-id \"to-cluster-id\" --storage_domain-id \"to-sd-id\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sect-scripting_the_v2v_process |
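Because step 1 is an ordinary command-line invocation, it can be wrapped in a simple loop to batch-convert several guests. The sketch below is illustrative only: the virt-v2v options, export storage domain, and guest names are placeholders, and the correct input/output options for your source hypervisor are the ones described in Section 4.3.2.
#!/bin/bash
# Hypothetical batch wrapper for step 1; adjust virt-v2v options per Section 4.3.2.
EXPORT_SD="nfs.example.com:/export_domain"   # placeholder export storage domain
for vm in vm01 vm02 vm03; do                 # placeholder guest names
    virt-v2v -ic qemu:///system -o rhev -os "$EXPORT_SD" "$vm" \
        || echo "conversion failed: $vm" >&2
done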
OpenShift Dedicated clusters on GCP | OpenShift Dedicated clusters on GCP OpenShift Dedicated 4 Installing OpenShift Dedicated clusters on GCP Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/openshift_dedicated_clusters_on_gcp/index |
E.2. A Deep-dive into IOMMU Groups | E.2. A Deep-dive into IOMMU Groups An IOMMU group is defined as the smallest set of devices that can be considered isolated from the IOMMU's perspective. The first step to achieve isolation is granularity. If the IOMMU cannot differentiate devices into separate IOVA spaces, they are not isolated. For example, if multiple devices attempt to alias to the same IOVA space, the IOMMU is not able to distinguish between them. This is the reason why a typical x86 PC will group all conventional-PCI devices together, with all of them aliased to the same requester ID, the PCIe-to-PCI bridge. Legacy KVM device assignment allows a user to assign these conventional-PCI devices separately, but the configuration fails because the IOMMU cannot distinguish between the devices. As VFIO is governed by IOMMU groups, it prevents any configuration that violates this most basic requirement of IOMMU granularity. The next step is to determine whether the transactions from the device actually reach the IOMMU. The PCIe specification allows for transactions to be re-routed within the interconnect fabric. A PCIe downstream port can re-route a transaction from one downstream device to another. The downstream ports of a PCIe switch may be interconnected to allow re-routing from one port to another. Even within a multifunction endpoint device, a transaction from one function may be delivered directly to another function. These transactions from one device to another are called peer-to-peer transactions and can destroy the isolation of devices operating in separate IOVA spaces. Imagine, for instance, that the network interface card assigned to a guest virtual machine attempts a DMA write operation to a virtual address within its own IOVA space. In the physical space, however, that same address belongs to a peer disk controller owned by the host. As the IOVA-to-physical translation for the device is only performed at the IOMMU, any interconnect attempting to optimize the data path of that transaction could mistakenly redirect the DMA write operation to the disk controller before it gets to the IOMMU for translation. To solve this problem, the PCI Express specification includes support for PCIe Access Control Services (ACS), which provides visibility and control of these redirects. This is an essential component for isolating devices from one another, but it is often missing in interconnects and multifunction endpoints. Without ACS support at every level from the device to the IOMMU, it must be assumed that redirection is possible. This will, therefore, break the isolation of all devices below the point lacking ACS support in the PCI topology. IOMMU groups in a PCI environment take this isolation into account, grouping together devices which are capable of untranslated peer-to-peer DMA. In summary, the IOMMU group represents the smallest set of devices for which the IOMMU has visibility and which is isolated from other groups. VFIO uses this information to enforce safe ownership of devices for user space. With the exception of bridges, root ports, and switches (all examples of interconnect fabric), all devices within an IOMMU group must be bound to a VFIO device driver or known safe stub driver. For PCI, these drivers are vfio-pci and pci-stub. pci-stub is allowed simply because it is known that the host does not interact with devices via this driver [2] .
If an error occurs indicating the group is not viable when using VFIO, it means that all of the devices in the group need to be bound to an appropriate host driver. Using virsh nodedev-dumpxml to explore the composition of an IOMMU group and virsh nodedev-detach to bind devices to VFIO compatible drivers, will help resolve such problems. [2] The exception is legacy KVM device assignment, which often interacts with the device while bound to the pci-stub driver. Red Hat Enterprise Linux 7 does not include legacy KVM device assignment, avoiding this interaction and potential conflict. Therefore, mixing the use of VFIO and legacy KVM device assignment within the same IOMMU group is not recommended. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-iommu-deep-dive |
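The composition of IOMMU groups on a given host can be inspected directly. The commands below are a hypothetical illustration rather than part of the original text; the PCI address is a placeholder that must be replaced with a device present on your system.
# List every device symlink under the IOMMU group assembled for it by the kernel.
find /sys/kernel/iommu_groups/ -type l | sort
# Show the capabilities (including the IOMMU group) of one device via libvirt.
virsh nodedev-dumpxml pci_0000_01_00_0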
Command Line Interface Reference | Command Line Interface Reference Red Hat OpenStack Platform 17.0 Command-line clients for Red Hat OpenStack Platform OpenStack Documentation Team [email protected] Abstract A reference to the commands available to the unified OpenStack command-line client. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/index |
24.5. Setting Up a Host Logging Server | 24.5. Setting Up a Host Logging Server Hosts generate and update log files, recording their actions and problems. Collecting these log files centrally simplifies debugging. This procedure should be used on your centralized log server. You can use a separate logging server, or use this procedure to enable host logging on the Red Hat Virtualization Manager. Setting up a Host Logging Server Check whether the firewall allows traffic on UDP port 514 and is open to syslog service traffic: If the output is no , allow traffic on UDP port 514 with: Create a new .conf file on the syslog server, for example, /etc/rsyslog.d/from_remote.conf , and add the following lines: Restart the rsyslog service: Log in to the hypervisor, and add the following line to the /etc/rsyslog.conf file: Restart the rsyslog service on the hypervisor. Your centralized log server is now configured to receive and store the messages and secure logs from your virtualization hosts. A quick end-to-end forwarding test is sketched after the command list below. | [
"firewall-cmd --query-service=syslog",
"firewall-cmd --add-service=syslog --permanent firewall-cmd --reload",
"template(name=\"DynFile\" type=\"string\" string=\"/var/log/%HOSTNAME%/%PROGRAMNAME%.log\") RuleSet(name=\"RemoteMachine\"){ action(type=\"omfile\" dynaFile=\"DynFile\") } Module(load=\"imudp\") Input(type=\"imudp\" port=\"514\" ruleset=\"RemoteMachine\")",
"systemctl restart rsyslog.service",
"*.info;mail.none;authpriv.none;cron.none @<syslog-FQDN>:514",
"systemctl restart rsyslog.service"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/setting_up_a_host_logging_server |
Chapter 9. Deprecated features | Chapter 9. Deprecated features Deprecated features that were supported in releases of Streams for Apache Kafka. 9.1. Streams for Apache Kafka 9.1.1. Schema property deprecations Schema Deprecated property Replacement property AclRule operation operation CruiseControlSpec tlsSidecar - CruiseControlTemplate tlsSidecarContainer - CruiseControlSpec.BrokerCapacity disk - CruiseControlSpec.BrokerCapacity cpuUtilization - EntityOperatorSpec tlsSidecar - EntityTopicOperatorSpec reconciliationIntervalSeconds reconciliationIntervalMs EntityTopicOperatorSpec zookeeperSessionTimeoutSeconds - EntityTopicOperatorSpec topicMetadataMaxAttempts - EntityUserOperator zookeeperSessionTimeoutSeconds - ExternalConfiguration env Replaced by template.connectContainer.env ExternalConfiguration volumes Replaced by template.pod.volumes and template.connectContainer.volumeMounts JaegerTracing type - KafkaConnectorSpec pause state KafkaConnectTemplate deployment Replaced by StrimziPodSet resource KafkaClusterTemplate statefulset Replaced by StrimziPodSet resource KafkaExporterTemplate service - KafkaMirrorMaker all properties - KafkaMirrorMaker2ConnectorSpec pause state KafkaMirrorMaker2MirrorSpec topicsBlacklistPattern topicsExcludePattern KafkaMirrorMaker2MirrorSpec groupsBlacklistPattern groupsExcludePattern ListenerStatus type name PersistentClaimStorage overrides - ZookeeperClusterTemplate statefulset Replaced by StrimziPodSet resource See the Streams for Apache Kafka Custom Resource API Reference . 9.1.2. Java 11 deprecated in Streams for Apache Kafka 2.7 Support for Java 11 is deprecated from Kafka 3.7.0 and Streams for Apache Kafka 2.7. Java 11 will be unsupported for all Streams for Apache Kafka components, including clients, in release 3.0. Streams for Apache Kafka supports Java 17. Use Java 17 when developing new applications. Plan to migrate any applications that currently use Java 11 to 17. If you want to continue using Java 11 for the time being, Streams for Apache Kafka 2.5 provides Long Term Support (LTS). For information on the LTS terms and dates, see the Streams for Apache Kafka LTS Support Policy . Note Support for Java 8 was removed in Streams for Apache Kafka 2.4.0. If you are currently using Java 8, plan to migrate to Java 17 in the same way. 9.1.3. Storage overrides The storage overrides ( *.storage.overrides ) for configuring per-broker storage are deprecated and will be removed in Streams for Apache Kafka 3.0. If you are using the storage overrides, migrate to KafkaNodePool resources and use multiple node pools with a different storage class each. For more information, see PersistentClaimStorage schema reference . 9.1.4. Environment variable configuration provider You can use configuration providers to load configuration data from external sources for all Kafka components, including producers and consumers. Previously, you could enable the io.strimzi.kafka.EnvVarConfigProvider environment variable configuration provider using the config.providers properties in the spec configuration of a component. However, this provider is now deprecated and will be removed in Streams for Apache Kafka 3.0. Therefore, it is recommended to update your implementation to use Kafka's own environment variable configuration provider ( org.apache.kafka.common.config.provider.EnvVarConfigProvider ) to provide configuration properties as environment variables. 
Example configuration to enable the environment variable configuration provider apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: "true" spec: # ... config: # ... config.providers: env config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider # ... 9.1.5. Kafka MirrorMaker 2 identity replication policy Identity replication policy is a feature used with MirrorMaker 2 to override the automatic renaming of remote topics. Instead of prepending the name with the source cluster's name, the topic retains its original name. This setting is particularly useful for active/passive backups and data migration scenarios. To implement an identity replication policy, you must specify a replication policy class ( replication.policy.class ) in the MirrorMaker 2 configuration. Previously, you could specify the io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy class included with the Streams for Apache Kafka mirror-maker-2-extensions component. However, this component is now deprecated and will be removed in Streams for Apache Kafka 3.0. Therefore, it is recommended to update your implementation to use Kafka's own replication policy class ( org.apache.kafka.connect.mirror.IdentityReplicationPolicy ). For more information, see Configuring Kafka MirrorMaker 2 . 9.1.6. Kafka MirrorMaker 1 Kafka MirrorMaker replicates data between two or more active Kafka clusters, within or across data centers. Kafka MirrorMaker 1 was deprecated in Kafka 3.0 and will be removed in Streams for Apache Kafka 3.0, including the KafkaMirrorMaker custom resource, and Kafka 4.0.0. MirrorMaker 2 will be the only version available. MirrorMaker 2 is based on the Kafka Connect framework, connectors managing the transfer of data between clusters. To avoid disruptions, please transition to MirrorMaker 2 before support ends. If you're using MirrorMaker 1, you can replicate its functionality in MirrorMaker 2 by using the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy class.. By default, MirrorMaker 2 renames topics replicated to a target cluster, but IdentityReplicationPolicy preserves the original topic names, enabling the same active/passive unidirectional replication as MirrorMaker 1. For more information, see Configuring Kafka MirrorMaker 2 . 9.2. Kafka Bridge 9.2.1. OpenAPI v2 (Swagger) Support for OpenAPI v2 is now deprecated and will be removed in Streams for Apache Kafka 3.0. OpenAPI v3 is now supported. Plan to move to using OpenAPI v3. During the transition to using OpenAPI v2, the /openapi endpoint returns the OpenAPI v2 specification using an additional /openapi/v2 endpoint. A new /openapi/v3 endpoint returns the OpenAPI v3 specification. 9.2.2. Kafka Bridge span attributes The following Kafka Bridge span attributes are deprecated with replacements shown where applicable: http.method replaced by http.request.method http.url replaced by url.scheme , url.path , and url.query messaging.destination replaced by messaging.destination.name http.status_code replaced by http.response.status_code messaging.destination.kind=topic without replacement Kafka Bridge uses OpenTelemetry for distributed tracing. The changes are inline with changes to OpenTelemetry semantic conventions. The attributes will be removed in a future release of the Kafka Bridge | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: env config.providers.env.class: org.apache.kafka.common.config.provider.EnvVarConfigProvider #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_openshift/deprecated-features-str |
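Once the env provider is enabled as shown in the example above, connector options can reference environment variables with the ${env:<VARIABLE>} placeholder syntax. The snippet below is only a sketch: the connector class, option name, and DB_PASSWORD variable are hypothetical and not taken from the original text.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-connector
  labels:
    strimzi.io/cluster: my-connect
spec:
  class: com.example.MyDatabaseConnector   # hypothetical connector class
  tasksMax: 1
  config:
    # Resolved at runtime by org.apache.kafka.common.config.provider.EnvVarConfigProvider
    connection.password: "${env:DB_PASSWORD}"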
Preface | Preface Preface | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/preface |
B.78. qemu-kvm | B.78. qemu-kvm B.78.1. RHSA-2011:0345 - Moderate: qemu-kvm security update Updated qemu-kvm packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on AMD64 and Intel 64 systems. qemu-kvm is the user-space component for running virtual machines using KVM. Virtual Network Computing (VNC) is a remote display system. CVE-2011-0011 A flaw was found in the way the VNC "password" option was handled. Clearing a password disabled VNC authentication, allowing a remote user able to connect to the virtual machines' VNC ports to open a VNC session without authentication. All users of qemu-kvm should upgrade to these updated packages, which contain a backported patch to resolve this issue. After installing this update, shut down all running virtual machines. Once all virtual machines have shut down, start them again for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/qemu-kvm |
Chapter 20. Configuring System Purpose using the subscription-manager command-line tool | Chapter 20. Configuring System Purpose using the subscription-manager command-line tool System purpose is a feature of the Red Hat Enterprise Linux installation to help RHEL customers get the benefit of our subscription experience and services offered in the Red Hat Hybrid Cloud Console, a dashboard-based, Software-as-a-Service (SaaS) application that enables you to view subscription usage in your Red Hat account. You can configure system purpose attributes either on the activation keys or by using the subscription manager tool. Prerequisites You have installed and registered your Red Hat Enterprise Linux 8 system, but system purpose is not configured. You are logged in as a root user. Note In the entitlement mode, if your system is registered but has subscriptions that do not satisfy the required purpose, you can run the subscription-manager remove --all command to remove attached subscriptions. You can then use the command-line subscription-manager syspurpose {role, usage, service-level} tools to set the required purpose attributes, and lastly run subscription-manager attach --auto to re-entitle the system with considerations for the updated attributes. Whereas, in the SCA enabled account, you can directly update the system purpose details post registration without making an update to the subscriptions in the system. Procedure From a terminal window, run the following command to set the intended role of the system: Replace VALUE with the role that you want to assign: Red Hat Enterprise Linux Server Red Hat Enterprise Linux Workstation Red Hat Enterprise Linux Compute Node For example: Optional: Before setting a value, see the available roles supported by the subscriptions for your organization: Optional: Run the following command to unset the role: Run the following command to set the intended Service Level Agreement (SLA) of the system: Replace VALUE with the SLA that you want to assign: Premium Standard Self-Support For example: Optional: Before setting a value, see the available service-levels supported by the subscriptions for your organization: Optional: Run the following command to unset the SLA: Run the following command to set the intended usage of the system: Replace VALUE with the usage that you want to assign: Production Disaster Recovery Development/Test For example: Optional: Before setting a value, see the available usages supported by the subscriptions for your organization: Optional: Run the following command to unset the usage: Run the following command to show the current system purpose properties: Optional: For more detailed syntax information run the following command to access the subscription-manager man page and browse to the SYSPURPOSE OPTIONS: Verification To verify the system's subscription status in a system registered with an account having entitlement mode enabled: An overall status Current means that all of the installed products are covered by the subscription(s) attached and entitlements to access their content set repositories has been granted. A system purpose status Matched means that all of the system purpose attributes (role, usage, service-level) that were set on the system are satisfied by the subscription(s) attached. When the status information is not ideal, additional information is displayed to help the system administrator decide what corrections to make to the attached subscriptions to cover the installed products and intended system purpose. 
To verify the system's subscription status in a system registered with an account having SCA mode enabled: In SCA mode, subscriptions are no longer required to be attached to individual systems. Hence, both the overall status and system purpose status are displayed as Disabled . However, the technical, business, and operational use cases supplied by system purpose attributes are important to the subscriptions service. Without these attributes, the subscriptions service data is less accurate. Additional resources To learn more about the subscriptions service, see the Getting Started with the Subscriptions Service guide . | [
"subscription-manager syspurpose role --set \"VALUE\"",
"subscription-manager syspurpose role --set \"Red Hat Enterprise Linux Server\"",
"subscription-manager syspurpose role --list",
"subscription-manager syspurpose role --unset",
"subscription-manager syspurpose service-level --set \"VALUE\"",
"subscription-manager syspurpose service-level --set \"Standard\"",
"subscription-manager syspurpose service-level --list",
"subscription-manager syspurpose service-level --unset",
"subscription-manager syspurpose usage --set \"VALUE\"",
"subscription-manager syspurpose usage --set \"Production\"",
"subscription-manager syspurpose usage --list",
"subscription-manager syspurpose usage --unset",
"subscription-manager syspurpose --show",
"man subscription-manager",
"subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Current System Purpose Status: Matched",
"subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Disabled Content Access Mode is set to Simple Content Access. This host has access to content, regardless of subscription status. System Purpose Status: Disabled"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/proc_configuring-system-purpose-using-the-subscription-manager-command-line-tool_rhel-installer |
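Because each attribute is set by its own subcommand, the calls are easy to combine into a single pass, for example when preparing several systems. The values below are examples only; pick the role, service level, and usage that match your subscription terms.
# Set all three system purpose attributes, then display the result.
subscription-manager syspurpose role --set "Red Hat Enterprise Linux Server"
subscription-manager syspurpose service-level --set "Premium"
subscription-manager syspurpose usage --set "Production"
subscription-manager syspurpose --show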
Chapter 31. JmxPrometheusExporterMetrics schema reference | Chapter 31. JmxPrometheusExporterMetrics schema reference Used in: CruiseControlSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the JmxPrometheusExporterMetrics type from other subtypes which may be added in the future. It must have the value jmxPrometheusExporter for the type JmxPrometheusExporterMetrics . Property Property type Description type string Must be jmxPrometheusExporter . valueFrom ExternalConfigurationReference ConfigMap entry where the Prometheus JMX Exporter configuration is stored. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-jmxprometheusexportermetrics-reference |
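As a sketch of how this type is commonly used (not shown in the reference above), the metrics configuration is placed under the relevant component's spec and points at a ConfigMap entry holding the Prometheus JMX Exporter rules. The ConfigMap name and key below are placeholders.
spec:
  kafka:
    # ...
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: my-metrics-config        # placeholder ConfigMap name
          key: kafka-metrics-config.yml  # placeholder key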
Chapter 4. Adding packages during image creation by using Insights image builder | Chapter 4. Adding packages during image creation by using Insights image builder You can customize your images during the creation process by adding additional packages from the BaseOS and AppStream RHEL repositories, through the UI. With that, you do not need to install the desired packages on first boot, which can be error-prone. 4.1. Adding additional packages during the image creation When creating a customized image using Insights image builder, you can add additional packages from the BaseOS and AppStream repositories. For that, follow the steps: Prerequisites You have an account on Red Hat Customer Portal with an Insights subscription. Access to the Insights image builder dashboard. You have already completed the following steps: Image output Target cloud environment Optionally, Registration Procedure On the Packages page: Type the name of the package you want to add to your image in the Available packages search bar. Optionally, you can enter the first two letters of the package name to see the available package options. The packages are listed on the Available packages dual list box. Click the package or packages you want to add. Click the >> button to add all packages shown in the package search results to the Chosen packages dual list box. Optionally, you can click the > button to add only the selected packages. After you have finished adding the additional packages, click . On the Name image page, enter a name for your image and click . If you do not enter a name, you can find the image you created by its UUID. On the Review page, review the details about the image creation and click Create image . After you complete the steps in the Create image wizard, the Image Builder dashboard is displayed. Insights image builder starts the compose of a RHEL image for the x86_64 architecture. The Insights image builder Images dashboard opens. You can see details such as the Image UUID, the cloud target environment, the image operating system release and the status of the image creation. Note The image build, upload and cloud registration processes can take up to ten minutes to complete. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/creating_customized_images_by_using_insights_image_builder/adding-packages-during-image-creation |
Chapter 5. Upgrading the Red Hat build of Keycloak Client Libraries | Chapter 5. Upgrading the Red Hat build of Keycloak Client Libraries The client libraries are those artifacts: Java admin client - Maven artifact org.keycloak:keycloak-admin-client Java authorization client - Maven artifact org.keycloak:keycloak-authz-client Java policy enforcer - Maven artifact org.keycloak:keycloak-policy-enforcer The client libraries are supported with all the supported Red Hat build of Keycloak server versions. The fact that client libraries are supported with more server versions makes the update easier, so you may not need to update the server at the same time when you update client libraries of your application. It is possible that client libraries may work even with the older releases of the Red Hat build of Keycloak server, but it is not guaranteed and officially supported. It may be needed to consult the javadoc of the client libraries like Java admin-client to see what endpoints and parameters are supported with which Red Hat build of Keycloak server version. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/upgrading_guide/upgrading_the_red_hat_build_of_keycloak_client_libraries |
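For reference, a Maven dependency declaration for the Java admin client might look like the following sketch; the keycloak.version property is an assumption and must be aligned with your Red Hat build of Keycloak release and the compatibility notes above.
<!-- Hypothetical dependency; set keycloak.version to match your release. -->
<dependency>
  <groupId>org.keycloak</groupId>
  <artifactId>keycloak-admin-client</artifactId>
  <version>${keycloak.version}</version>
</dependency>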
Chapter 50. Handling Exceptions | Chapter 50. Handling Exceptions Abstract When possible, exceptions caught by a resource method should cause a useful error to be returned to the requesting consumer. JAX-RS resource methods can throw a WebApplicationException exception. You can also provide ExceptionMapper<E> implementations to map exceptions to appropriate responses. 50.1. Overview of JAX-RS Exception Classes Overview In JAX-RS 1.x, the only available exception class is WebApplicationException . Since JAX-WS 2.0, however, a number of additional JAX-RS exception classes have been defined. JAX-RS runtime level exceptions The following exceptions are meant to be thrown by the JAX-RS runtime only (that is, you must not throw these exceptions from your application level code): ProcessingException (JAX-RS 2.0 only) The javax.ws.rs.ProcessingException can be thrown during request processing or during response processing in the JAX-RS runtime. For example, this error could be thrown due to errors in the filter chain or interceptor chain processing. ResponseProcessingException (JAX-RS 2.0 only) The javax.ws.rs.client.ResponseProcessingException is a subclass of ProcessingException , which can be thrown when errors occur in the JAX-RS runtime on the client side . JAX-RS application level exceptions The following exceptions are intended to be thrown (and caught) in your application level code: WebApplicationException The javax.ws.rs.WebApplicationException is a generic application level JAX-RS exception, which can be thrown in application code on the server side. This exception type can encapsulate a HTTP status code, an error message, and (optionally) a response message. For details, see Section 50.2, "Using WebApplicationException exceptions to report" . ClientErrorException (JAX-RS 2.0 only) The javax.ws.rs.ClientErrorException exception class inherits from WebApplicationException and is used to encapsulate HTTP 4xx status codes. ServerErrorException (JAX-RS 2.0 only) The javax.ws.rs.ServerErrorException exception class inherits from WebApplicationException and is used to encapsulate HTTP 5xx status codes. RedirectionException (JAX-RS 2.0 only) The javax.ws.rs.RedirectionException exception class inherits from WebApplicationException and is used to encapsulate HTTP 3xx status codes. 50.2. Using WebApplicationException exceptions to report Overview The JAX-RS API introduced the WebApplicationException runtime exception to provide an easy way for resource methods to create exceptions that are appropriate for RESTful clients to consume. WebApplicationException exceptions can include a Response object that defines the entity body to return to the originator of the request. It also provides a mechanism for specifying the HTTP status code to be returned to the client if no entity body is provided. Creating a simple exception The easiest means of creating a WebApplicationException exception is to use either the no argument constructor or the constructor that wraps the original exception in a WebApplicationException exception. Both constructors create a WebApplicationException with an empty response. When an exception created by either of these constructors is thrown, the runtime returns a response with an empty entity body and a status code of 500 Server Error . Setting the status code returned to the client When you want to return an error code other than 500 , you can use one of the four WebApplicationException constructors that allow you to specify the status. 
Two of these constructors, shown in Example 50.1, "Creating a WebApplicationException with a status code" , take the return status as an integer. Example 50.1. Creating a WebApplicationException with a status code WebApplicationException int status WebApplicationException java.lang.Throwable cause int status The other two, shown in Example 50.2, "Creating a WebApplicationException with a status code" take the response status as an instance of Response.Status . Example 50.2. Creating a WebApplicationException with a status code WebApplicationException javax.ws.rs.core.Response.Status status WebApplicationException java.lang.Throwable cause javax.ws.rs.core.Response.Status status When an exception created by either of these constructors is thrown, the runtime returns a response with an empty entity body and the specified status code. Providing an entity body If you want a message to be sent along with the exception, you can use one of the WebApplicationException constructors that takes a Response object. The runtime uses the Response object to create the response sent to the client. The entity stored in the response is mapped to the entity body of the message and the status field of the response is mapped to the HTTP status of the message. Example 50.3, "Sending a message with an exception" shows code for returning a text message to a client containing the reason for the exception and sets the HTTP message status to 409 Conflict . Example 50.3. Sending a message with an exception Extending the generic exception It is possible to extend the WebApplicationException exception. This would allow you to create custom exceptions and eliminate some boiler plate code. Example 50.4, "Extending WebApplicationException" shows a new exception that creates a similar response to the code in Example 50.3, "Sending a message with an exception" . Example 50.4. Extending WebApplicationException 50.3. JAX-RS 2.0 Exception Types Overview JAX-RS 2.0 introduces a number of specific HTTP exception types that you can throw (and catch) in your application code (in addition to the existing WebApplicationException exception type). These exception types can be used to wrap standard HTTP status codes, either for HTTP client errors (HTTP 4xx status codes), or HTTP server errors (HTTP 5xx status codes). Exception hierarchy Figure 50.1, "JAX-RS 2.0 Application Exception Hierarchy" shows the hierarchy of application level exceptions supported in JAX-RS 2.0. Figure 50.1. JAX-RS 2.0 Application Exception Hierarchy WebApplicationException class The javax.ws.rs.WebApplicationException exception class (which has been available since JAX-RS 1.x) is at the base of the JAX-RS 2.0 exception hierarchy, and is described in detail in Section 50.2, "Using WebApplicationException exceptions to report" . ClientErrorException class The javax.ws.rs.ClientErrorException exception class is used to encapsulate HTTP client errors (HTTP 4xx status codes). In your application code, you can throw this exception or one of its subclasses. ServerErrorException class The javax.ws.rs.ServerErrorException exception class is used to encapsulate HTTP server errors (HTTP 5xx status codes). In your application code, you can throw this exception or one of its subclasses. RedirectionException class The javax.ws.rs.RedirectionException exception class is used to encapsulate HTTP request redirection (HTTP 3xx status codes). The constructors of this class take a URI argument, which specifies the redirect location. 
The redirect URI is accessible through the getLocation() method. Normally, HTTP redirection is transparent on the client side. Client exception subclasses You can raise the following HTTP client exceptions (HTTP 4xx status codes) in a JAX-RS 2.0 application: BadRequestException Encapsulates the 400 Bad Request HTTP error status. ForbiddenException Encapsulates the 403 Forbidden HTTP error status. NotAcceptableException Encapsulates the 406 Not Acceptable HTTP error status. NotAllowedException Encapsulates the 405 Method Not Allowed HTTP error status. NotAuthorizedException Encapsulates the 401 Unauthorized HTTP error status. This exception could be raised in either of the following cases: The client did not send the required credentials (in a HTTP Authorization header), or The client presented the credentials, but the credentials were not valid. NotFoundException Encapsulates the 404 Not Found HTTP error status. NotSupportedException Encapsulates the 415 Unsupported Media Type HTTP error status. Server exception subclasses You can raise the following HTTP server exceptions (HTTP 5xx status codes) in a JAX-RS 2.0 application: InternalServerErrorException Encapsulates the 500 Internal Server Error HTTP error status. ServiceUnavailableException Encapsulates the 503 Service Unavailable HTTP error status. 50.4. Mapping Exceptions to Responses Overview There are instances where throwing a WebApplicationException exception is impractical or impossible. For example, you may not want to catch all possible exceptions and then create a WebApplicationException for them. You may also want to use custom exceptions that make working with your application code easier. To handle these cases the JAX-RS API allows you to implement a custom exception provider that generates a Response object to send to a client. Custom exception providers are created by implementing the ExceptionMapper<E> interface. When registered with the Apache CXF runtime, the custom provider will be used whenever an exception of type E is thrown. How exception mappers are selected Exception mappers are used in two cases: When any exception or one of its subclasses, is thrown, the runtime will check for an appropriate exception mapper. An exception mapper is selected if it handles the specific exception thrown. If there is not an exception mapper for the specific exception that was thrown, the exception mapper for the nearest superclass of the exception is selected. By default, a WebApplicationException will be handled by the default mapper, WebApplicationExceptionMapper . Even if an additional custom mapper is registered, which could potentially handle a WebApplicationException exception (for example, a custom RuntimeException mapper), the custom mapper will not be used and the WebApplicationExceptionMapper will be used instead. This behaviour can be changed, however, by setting the default.wae.mapper.least.specific property to true on a Message object. When this option is enabled, the default WebApplicationExceptionMapper is relegated to the lowest priority, so that it becomes possible to handle a WebApplicationException exception with a custom exception mapper. For example, if this option is enabled, it would be possible to catch a WebApplicationException exception by registering a custom RuntimeException mapper. See the section called "Registering an exception mapper for WebApplicationException" . 
If an exception mapper is not found for an exception, the exception is wrapped in a ServletException exception and passed on to the container runtime. The container runtime will then determine how to handle the exception. Implementing an exception mapper Exception mappers are created by implementing the javax.ws.rs.ext.ExceptionMapper<E> interface. As shown in Example 50.5, "Exception mapper interface" , the interface has a single method, toResponse() , that takes the original exception as a parameter and returns a Response object. Example 50.5. Exception mapper interface The Response object created by the exception mapper is processed by the runtime just like any other Response object. The resulting response to the consumer will contain the status, headers, and entity body encapsulated in the Response object. Exception mapper implementations are considered providers by the runtime. Therefore, they must be decorated with the @Provider annotation. If an exception occurs while the exception mapper is building the Response object, the runtime will send a response with a status of 500 Server Error to the consumer. Example 50.6, "Mapping an exception to a response" shows an exception mapper that intercepts Spring AccessDeniedException exceptions and generates a response with a 403 Forbidden status and an empty entity body. Example 50.6. Mapping an exception to a response The runtime will catch any AccessDeniedException exceptions and create a Response object with no entity body and a status of 403 . The runtime will then process the Response object as it would for a normal response. The result is that the consumer will receive an HTTP response with a status of 403 . Registering an exception mapper Before a JAX-RS application can use an exception mapper, the exception mapper must be registered with the runtime. Exception mappers are registered with the runtime using the jaxrs:providers element in the application's configuration file. The jaxrs:providers element is a child of the jaxrs:server element and contains a list of bean elements. Each bean element defines one exception mapper. Example 50.7, "Registering exception mappers with the runtime" shows a JAX-RS server configured to use a custom exception mapper, SecurityExceptionMapper . Example 50.7. Registering exception mappers with the runtime Registering an exception mapper for WebApplicationException Registering an exception mapper for a WebApplicationException exception is a special case, because this exception type is automatically handled by the default WebApplicationExceptionMapper . Normally, even when you register a custom mapper that you would expect to handle WebApplicationException , it will continue to be handled by the default WebApplicationExceptionMapper . To change this default behaviour, you need to set the default.wae.mapper.least.specific property to true . For example, the following XML code shows how to enable the default.wae.mapper.least.specific property on a JAX-RS endpoint: You can also set the default.wae.mapper.least.specific property in an interceptor, as shown in the following example: | [
"import javax.ws.rs.core.Response; import javax.ws.rs.WebApplicationException; import org.apache.cxf.jaxrs.impl.ResponseBuilderImpl; ResponseBuilderImpl builder = new ResponseBuilderImpl(); builder.status(Response.Status.CONFLICT); builder.entity(\"The requested resource is conflicted.\"); Response response = builder.build(); throw new WebApplicationException(response);",
"public class ConflictedException extends WebApplicationException { public ConflictedException(String message) { super(buildResponse(message)); } private static Response buildResponse(String message) { ResponseBuilderImpl builder = new ResponseBuilderImpl(); builder.status(Response.Status.CONFLICT); builder.entity(message); return builder.build(); } } throw new ConflictedException(\"The requested resource is conflicted.\");",
"public interface ExceptionMapper<E extends java.lang.Throwable> { public Response toResponse(E exception); }",
"import javax.ws.rs.core.Response; import javax.ws.rs.ext.ExceptionMapper; import javax.ws.rs.ext.Provider; import org.springframework.security.AccessDeniedException; @Provider public class SecurityExceptionMapper implements ExceptionMapper<AccessDeniedException> { public Response toResponse(AccessDeniedException exception) { return Response.status(Response.Status.FORBIDDEN).build(); } }",
"<beans ...> <jaxrs:server id=\"customerService\" address=\"/\"> <jaxrs:providers> <bean id=\"securityException\" class=\"com.bar.providers.SecurityExceptionMapper\"/> </jaxrs:providers> </jaxrs:server> </beans>",
"<beans ...> <jaxrs:server id=\"customerService\" address=\"/\"> <jaxrs:providers> <bean id=\"securityException\" class=\"com.bar.providers.SecurityExceptionMapper\"/> </jaxrs:providers> <jaxrs:properties> <entry key=\"default.wae.mapper.least.specific\" value=\"true\"/> </jaxrs:properties> </jaxrs:server> </beans>",
"// Java public void handleMessage(Message message) { message.put(\"default.wae.mapper.least.specific\", true); }"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/RESTExceptions |
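As a complementary usage sketch, a JAX-RS 2.0 client can catch the client exception subclasses listed in this section directly. The endpoint URL and resource path below are placeholders and are not part of the documented service.

    import javax.ws.rs.NotFoundException;
    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;
    import javax.ws.rs.core.MediaType;

    public class ClientExceptionDemo {
        public static void main(String[] args) {
            Client client = ClientBuilder.newClient();
            try {
                // A 404 response from the server surfaces as a NotFoundException.
                String customer = client.target("http://localhost:8080/services/customers/123")
                                        .request(MediaType.APPLICATION_XML)
                                        .get(String.class);
                System.out.println(customer);
            } catch (NotFoundException e) {
                System.err.println("Customer not found: HTTP " + e.getResponse().getStatus());
            } finally {
                client.close();
            }
        }
    }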
Chapter 1. Project APIs | Chapter 1. Project APIs 1.1. Project [project.openshift.io/v1] Description Projects are the unit of isolation and collaboration in OpenShift. A project has one or more members, a quota on the resources that the project may consume, and the security controls on the resources in the project. Within a project, members may have different roles - project administrators can set membership, editors can create and manage the resources, and viewers can see but not access running containers. In a normal cluster project administrators are not able to alter their quotas - that is restricted to cluster administrators. Listing or watching projects will return only projects the user has the reader role on. An OpenShift project is an alternative representation of a Kubernetes namespace. Projects are exposed as editable to end users while namespaces are not. Direct creation of a project is typically restricted to administrators, while end users should use the requestproject resource. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. ProjectRequest [project.openshift.io/v1] Description ProjectRequest is the set of options necessary to fully qualify a project request Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/project_apis/project-apis |
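As a usage sketch for the ProjectRequest resource described above, an end user would typically submit a minimal request such as the following; the project name, display name, and description values are placeholders.

    apiVersion: project.openshift.io/v1
    kind: ProjectRequest
    metadata:
      name: example-project        # placeholder project name
    displayName: Example Project   # optional human-readable name
    description: Sketch of a project request submitted by an end user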
Visual Studio Code Extension Guide | Visual Studio Code Extension Guide Migration Toolkit for Applications 7.1 Identify and resolve migration issues by analyzing your applications with the Migration Toolkit for Applications extension for Visual Studio Code. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/visual_studio_code_extension_guide/index |
Chapter 5. Configuring Streams for Apache Kafka | Chapter 5. Configuring Streams for Apache Kafka Use the Kafka and ZooKeeper properties files to configure Streams for Apache Kafka. ZooKeeper /kafka/config/zookeeper.properties Kafka /kafka/config/server.properties The properties files are in the Java format, with each property on a separate line in the following format: Lines starting with # or ! will be treated as comments and will be ignored by Streams for Apache Kafka components. Values can be split into multiple lines by using \ directly before the newline / carriage return. After you save the changes in the properties files, you need to restart the Kafka broker or ZooKeeper. In a multi-node environment, you will need to repeat the process on each node in the cluster. 5.1. Using standard Kafka configuration properties Use standard Kafka configuration properties to configure Kafka components. The properties provide options to control and tune the configuration of the following Kafka components: Brokers Topics Producer, consumer, and management clients Kafka Connect Kafka Streams Broker and client parameters include options to configure authorization, authentication, and encryption. For further information on Kafka configuration properties and how to use the properties to tune your deployment, see the following guides: Kafka configuration properties Kafka configuration tuning 5.2. Loading configuration values from environment variables Use the Environment Variables Configuration Provider plugin to load configuration data from environment variables. You can use the Environment Variables Configuration Provider, for example, to load certificates or JAAS configuration from environment variables. You can use the provider to load configuration data for all Kafka components, including producers and consumers. Use the provider, for example, to provide the credentials for Kafka Connect connector configuration. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. The Environment Variables Configuration Provider JAR file. The JAR file is available from the Streams for Apache Kafka archive . Procedure Add the Environment Variables Configuration Provider JAR file to the Kafka libs directory. Initialize the Environment Variables Configuration Provider in the configuration properties file of the Kafka component. For example, to initialize the provider for Kafka, add the configuration to the server.properties file. Configuration to enable the Environment Variables Configuration Provider config.providers.env.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider Add configuration to the properties file to load data from environment variables. Configuration to load data from an environment variable option=${env: <MY_ENV_VAR_NAME> } Use capitalized or upper-case environment variable naming conventions, such as MY_ENV_VAR_NAME . Save the changes. Restart the Kafka component. For information on restarting brokers in a multi-node cluster, see Section 4.3, "Performing a graceful rolling restart of Kafka brokers" . 5.3. Configuring ZooKeeper Kafka uses ZooKeeper to store configuration data and for cluster coordination. It is strongly recommended to run a cluster of replicated ZooKeeper instances. 5.3.1. Basic configuration The most important ZooKeeper configuration options are: tickTime ZooKeeper's basic time unit in milliseconds. It is used for heartbeats and session timeouts. For example, the minimum session timeout will be two ticks.
dataDir The directory where ZooKeeper stores its transaction logs and snapshots of its in-memory database. This should be set to the /var/lib/zookeeper/ directory that was created during installation. clientPort Port number where clients can connect. Defaults to 2181 . An example ZooKeeper configuration file named config/zookeeper.properties is located in the Streams for Apache Kafka installation directory. It is recommended to place the dataDir directory on a separate disk device to minimize the latency in ZooKeeper. The ZooKeeper configuration file should be located in ./config/zookeeper.properties . A basic example of the configuration file can be found below. The configuration file has to be readable by the Kafka user. tickTime=2000 dataDir=/var/lib/zookeeper/ clientPort=2181 5.3.2. ZooKeeper cluster configuration In most production environments, we recommend you deploy a cluster of replicated ZooKeeper instances. A stable and highly available ZooKeeper cluster is important for running a reliable ZooKeeper service. ZooKeeper clusters are also referred to as ensembles . ZooKeeper clusters usually consist of an odd number of nodes. ZooKeeper requires that a majority of the nodes in the cluster are up and running. For example: In a cluster with three nodes, at least two of the nodes must be up and running. This means it can tolerate one node being down. In a cluster consisting of five nodes, at least three nodes must be available. This means it can tolerate two nodes being down. In a cluster consisting of seven nodes, at least four nodes must be available. This means it can tolerate three nodes being down. Having more nodes in the ZooKeeper cluster delivers better resiliency and reliability of the whole cluster. ZooKeeper can run in clusters with an even number of nodes. The additional node, however, does not increase the resiliency of the cluster. A cluster with four nodes requires at least three nodes to be available and can tolerate only one node being down. Therefore, it has exactly the same resiliency as a cluster with only three nodes. Ideally, the different ZooKeeper nodes should be located in different data centers or network segments. Increasing the number of ZooKeeper nodes increases the workload spent on cluster synchronization. For most Kafka use cases, a ZooKeeper cluster with 3, 5, or 7 nodes should be sufficient. Warning A ZooKeeper cluster with 3 nodes can tolerate only 1 unavailable node. This means that if a cluster node crashes while you are doing maintenance on another node, your ZooKeeper cluster will be unavailable. Replicated ZooKeeper configuration supports all configuration options supported by the standalone configuration. Additional options are added for the clustering configuration: initLimit Amount of time to allow followers to connect and sync to the cluster leader. The time is specified as a number of ticks (see the tickTime option for more details). syncLimit Amount of time for which followers can be behind the leader. The time is specified as a number of ticks (see the tickTime option for more details). reconfigEnabled Enables or disables dynamic reconfiguration. Must be enabled in order to add or remove servers in a ZooKeeper cluster. standaloneEnabled Enables or disables standalone mode, where ZooKeeper runs with only one server. In addition to the options above, every configuration file should contain a list of servers which should be members of the ZooKeeper cluster.
The server records should be specified in the format server.id=hostname:port1:port2 , where: id The ID of the ZooKeeper cluster node. hostname The hostname or IP address where the node listens for connections. port1 The port number used for intra-cluster communication. port2 The port number used for leader election. The following is an example configuration file of a ZooKeeper cluster with three nodes: tickTime=2000 dataDir=/var/lib/zookeeper/ initLimit=5 syncLimit=2 reconfigEnabled=true standaloneEnabled=false server.1=172.17.0.1:2888:3888:participant;172.17.0.1:2181 server.2=172.17.0.2:2888:3888:participant;172.17.0.2:2181 server.3=172.17.0.3:2888:3888:participant;172.17.0.3:2181 Tip To use four letter word commands, specify 4lw.commands.whitelist=* in zookeeper.properties . myid files Each node in the ZooKeeper cluster must be assigned a unique ID . Each node's ID must be configured in a myid file and stored in the dataDir folder, like /var/lib/zookeeper/ . The myid files should contain only a single line with the written ID as text. The ID can be any integer from 1 to 255. You must manually create this file on each cluster node. Using this file, each ZooKeeper instance will use the configuration from the corresponding server. line in the configuration file to configure its listeners. It will also use all other server. lines to identify other cluster members. In the above example, there are three nodes, so each one will have a different myid with values 1 , 2 , and 3 respectively. 5.3.3. Authentication By default, ZooKeeper does not use any form of authentication and allows anonymous connections. However, it supports Java Authentication and Authorization Service (JAAS) which can be used to set up authentication using Simple Authentication and Security Layer (SASL). ZooKeeper supports authentication using the DIGEST-MD5 SASL mechanism with locally stored credentials. 5.3.3.1. Authentication with SASL JAAS is configured using a separate configuration file. It is recommended to place the JAAS configuration file in the same directory as the ZooKeeper configuration ( ./config/ ). The recommended file name is zookeeper-jaas.conf . When using a ZooKeeper cluster with multiple nodes, the JAAS configuration file has to be created on all cluster nodes. JAAS is configured using contexts. Separate parts such as the server and client are always configured with a separate context . The context is a configuration option and has the following format: SASL Authentication is configured separately for server-to-server communication (communication between ZooKeeper instances) and client-to-server communication (communication between Kafka and ZooKeeper). Server-to-server authentication is relevant only for ZooKeeper clusters with multiple nodes. Server-to-Server authentication For server-to-server authentication, the JAAS configuration file contains two parts: The server configuration The client configuration When using DIGEST-MD5 SASL mechanism, the QuorumServer context is used to configure the authentication server. It must contain all the usernames to be allowed to connect together with their passwords in an unencrypted form. The second context, QuorumLearner , has to be configured for the client which is built into ZooKeeper. It also contains the password in an unencrypted form. 
An example of the JAAS configuration file for the DIGEST-MD5 mechanism can be found below: In addition to the JAAS configuration file, you must enable the server-to-server authentication in the regular ZooKeeper configuration file by specifying the following options: Use the KAFKA_OPTS environment variable to pass the JAAS configuration file to the ZooKeeper server as a Java property: For more information about server-to-server authentication, see ZooKeeper wiki . Client-to-Server authentication Client-to-server authentication is configured in the same JAAS file as the server-to-server authentication. However, unlike the server-to-server authentication, it contains only the server configuration. The client part of the configuration has to be done in the client. For information on how to configure a Kafka broker to connect to ZooKeeper using authentication, see the Kafka installation section. Add the Server context to the JAAS configuration file to configure client-to-server authentication. For the DIGEST-MD5 mechanism, it configures all usernames and passwords: After configuring the JAAS context, enable the client-to-server authentication in the ZooKeeper configuration file by adding the following line: You must add the authProvider. <ID> property for every server that is part of the ZooKeeper cluster. Use the KAFKA_OPTS environment variable to pass the JAAS configuration file to the ZooKeeper server as a Java property: For more information about configuring ZooKeeper authentication in Kafka brokers, see Section 6.5, "ZooKeeper authentication" . 5.3.3.2. Enabling server-to-server authentication using DIGEST-MD5 This procedure describes how to enable authentication using the SASL DIGEST-MD5 mechanism between the nodes of the ZooKeeper cluster. Prerequisites Streams for Apache Kafka is installed on the host ZooKeeper cluster is configured with multiple nodes. Enabling SASL DIGEST-MD5 authentication On all ZooKeeper nodes, create or edit the ./config/zookeeper-jaas.conf JAAS configuration file and add the following contexts: The username and password must be the same in both JAAS contexts. For example: On all ZooKeeper nodes, edit the ./config/zookeeper.properties ZooKeeper configuration file and set the following options: quorum.auth.enableSasl=true quorum.auth.learnerRequireSasl=true quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner quorum.auth.server.loginContext=QuorumServer quorum.cnxn.threads.size=20 Restart all ZooKeeper nodes one by one. To pass the JAAS configuration to ZooKeeper, use the KAFKA_OPTS environment variable. 5.3.3.3. Enabling Client-to-server authentication using DIGEST-MD5 This procedure describes how to enable authentication using the SASL DIGEST-MD5 mechanism between ZooKeeper clients and ZooKeeper. Prerequisites Streams for Apache Kafka is installed on the host ZooKeeper cluster is configured and running . Enabling SASL DIGEST-MD5 authentication On all ZooKeeper nodes, create or edit the ./config/zookeeper-jaas.conf JAAS configuration file and add the following context: The super user automatically has administrator privileges. The file can contain multiple users, but only one additional user is required by the Kafka brokers. The recommended name for the Kafka user is kafka . The following example shows the Server context for client-to-server authentication: On all ZooKeeper nodes, edit the ./config/zookeeper.properties ZooKeeper configuration file and set the following options: requireClientAuthScheme=sasl authProvider.
<IdOfBroker1> =org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider. <IdOfBroker2> =org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider. <IdOfBroker3> =org.apache.zookeeper.server.auth.SASLAuthenticationProvider The authProvider. <ID> property has to be added for every node which is part of the ZooKeeper cluster. An example three-node ZooKeeper cluster configuration must look like the following: requireClientAuthScheme=sasl authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider.2=org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider.3=org.apache.zookeeper.server.auth.SASLAuthenticationProvider Restart all ZooKeeper nodes one by one. To pass the JAAS configuration to ZooKeeper, use the KAFKA_OPTS environment variable. 5.3.4. Authorization ZooKeeper supports access control lists (ACLs) to protect data stored inside it. Kafka brokers can automatically configure the ACL rights for all ZooKeeper records they create so no other ZooKeeper user can modify them. For more information about enabling ZooKeeper ACLs in Kafka brokers, see Section 6.6, "ZooKeeper authorization" . 5.3.5. TLS ZooKeeper supports TLS for encryption or authentication. 5.3.6. Additional configuration options You can set the following additional ZooKeeper configuration options based on your use case: maxClientCnxns The maximum number of concurrent client connections to a single member of the ZooKeeper cluster. autopurge.snapRetainCount Number of snapshots of ZooKeeper's in-memory database which will be retained. Default value is 3 . autopurge.purgeInterval The time interval in hours for purging snapshots. The default value is 0 and this option is disabled. All available configuration options can be found in the ZooKeeper documentation . 5.4. Configuring Kafka Kafka uses a properties file to store static configuration. The recommended location for the configuration file is ./config/server.properties . The configuration file has to be readable by the Kafka user. Streams for Apache Kafka ships an example configuration file that highlights various basic and advanced features of the product. It can be found under config/server.properties in the Streams for Apache Kafka installation directory. This chapter explains the most important configuration options. 5.4.1. ZooKeeper Kafka brokers need ZooKeeper to store some parts of their configuration as well as to coordinate the cluster (for example to decide which node is a leader for which partition). Connection details for the ZooKeeper cluster are stored in the configuration file. The field zookeeper.connect contains a comma-separated list of hostnames and ports of members of the zookeeper cluster. For example: zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 Kafka will use these addresses to connect to the ZooKeeper cluster. With this configuration, all Kafka znodes will be created directly in the root of ZooKeeper database. Therefore, such a ZooKeeper cluster could be used only for a single Kafka cluster. To configure multiple Kafka clusters to use single ZooKeeper cluster, specify a base (prefix) path at the end of the ZooKeeper connection string in the Kafka configuration file: zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181/my-cluster-1 5.4.2. Listeners Listeners are used to connect to Kafka brokers. Each Kafka broker can be configured to use multiple listeners. 
Each listener requires a different configuration so it can listen on a different port or network interface. To configure listeners, edit the listeners property in the Kafka configuration properties file. Add listeners to the listeners property as a comma-separated list. Configure each property as follows: If <hostname> is empty, Kafka uses the java.net.InetAddress.getCanonicalHostName() class as the hostname. Example configuration for multiple listeners listeners=internal-1://:9092,internal-2://:9093,replication://:9094 When a Kafka client wants to connect to a Kafka cluster, it first connects to the bootstrap server , which is one of the cluster nodes. The bootstrap server provides the client with a list of all the brokers in the cluster, and the client connects to each one individually. The list of brokers is based on the configured listeners . Advertised listeners Optionally, you can use the advertised.listeners property to provide the client with a different set of listener addresses than those given in the listeners property. This is useful if additional network infrastructure, such as a proxy, is between the client and the broker, or an external DNS name is being used instead of an IP address. The advertised.listeners property is formatted in the same way as the listeners property. Example configuration for advertised listeners listeners=internal-1://:9092,internal-2://:9093 advertised.listeners=internal-1://my-broker-1.my-domain.com:1234,internal-2://my-broker-1.my-domain.com:1235 Note The names of the advertised listeners must match those listed in the listeners property. Inter-broker listeners Inter-broker listeners are used for communication between Kafka brokers. Inter-broker communication is required for: Coordinating workloads between different brokers Replicating messages between partitions stored on different brokers The inter-broker listener can be assigned to a port of your choice. When multiple listeners are configured, you can define the name of the inter-broker listener in the inter.broker.listener.name property of your broker configuration. Here, the inter-broker listener is named as REPLICATION : listeners=REPLICATION://0.0.0.0:9091 inter.broker.listener.name=REPLICATION Controller listeners Controller configuration is used to connect and communicate with the controller that coordinates the cluster and manages the metadata used to track the status of brokers and partitions. By default, communication between the controllers and brokers uses a dedicated controller listener. Controllers are responsible for coordinating administrative tasks, such as partition leadership changes, so one or more of these listeners is required. Specify listeners to use for controllers using the controller.listener.names property. You can specify a dynamic quorum of controllers using the controller.quorum.bootstrap.servers property. The quorum enables a leader-follower structure for administrative tasks, with the leader actively managing operations and followers as hot standbys, ensuring metadata consistency in memory and facilitating failover. listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER controller.quorum.bootstrap.servers=localhost:9090 The format for the controller quorum is <hostname>:<port> . 5.4.3. Data logs Apache Kafka stores all records it receives from producers in logs. The logs contain the actual data, in the form of records, that Kafka needs to deliver. Note that these records differ from application log files, which detail the broker's activities. 
Log directories You can configure log directories using the log.dirs property in the server configuration properties file to store logs in one or multiple log directories. It should be set to the /var/lib/kafka directory created during installation: Data log configuration For performance reasons, you can configure log.dirs with multiple directories and place each of them on a different physical device to improve disk I/O performance. For example: Configuration for multiple directories 5.4.4. Broker ID Broker ID is a unique identifier for each broker in the cluster. You can assign an integer greater than or equal to 0 as the broker ID. The broker ID is used to identify the brokers after restarts or crashes, and it is therefore important that the ID is stable and does not change over time. The broker ID is configured in the broker properties file: | [
"<option> = <value>",
"# This is a comment",
"sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"bob\" password=\"bobs-password\";",
"config.providers.env.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider",
"option=${env: <MY_ENV_VAR_NAME> }",
"tickTime=2000 dataDir=/var/lib/zookeeper/ clientPort=2181",
"tickTime=2000 dataDir=/var/lib/zookeeper/ initLimit=5 syncLimit=2 reconfigEnabled=true standaloneEnabled=false server.1=172.17.0.1:2888:3888:participant;172.17.0.1:2181 server.2=172.17.0.2:2888:3888:participant;172.17.0.2:2181 server.3=172.17.0.3:2888:3888:participant;172.17.0.3:2181",
"ContextName { param1 param2; };",
"QuorumServer { org.apache.zookeeper.server.auth.DigestLoginModule required user_zookeeper=\"123456\"; }; QuorumLearner { org.apache.zookeeper.server.auth.DigestLoginModule required username=\"zookeeper\" password=\"123456\"; };",
"quorum.auth.enableSasl=true quorum.auth.learnerRequireSasl=true quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner quorum.auth.server.loginContext=QuorumServer quorum.cnxn.threads.size=20",
"export KAFKA_OPTS=\"-Djava.security.auth.login.config=./config/zookeeper-jaas.conf\"; ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties",
"Server { org.apache.zookeeper.server.auth.DigestLoginModule required user_super=\"123456\" user_kafka=\"123456\" user_someoneelse=\"123456\"; };",
"requireClientAuthScheme=sasl authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider.2=org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider.3=org.apache.zookeeper.server.auth.SASLAuthenticationProvider",
"export KAFKA_OPTS=\"-Djava.security.auth.login.config=./config/zookeeper-jaas.conf\"; ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties",
"QuorumServer { org.apache.zookeeper.server.auth.DigestLoginModule required user_ <Username> =\" <Password> \"; }; QuorumLearner { org.apache.zookeeper.server.auth.DigestLoginModule required username=\" <Username> \" password=\" <Password> \"; };",
"QuorumServer { org.apache.zookeeper.server.auth.DigestLoginModule required user_zookeeper=\"123456\"; }; QuorumLearner { org.apache.zookeeper.server.auth.DigestLoginModule required username=\"zookeeper\" password=\"123456\"; };",
"quorum.auth.enableSasl=true quorum.auth.learnerRequireSasl=true quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner quorum.auth.server.loginContext=QuorumServer quorum.cnxn.threads.size=20",
"export KAFKA_OPTS=\"-Djava.security.auth.login.config=./config/zookeeper-jaas.conf\"; ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties",
"Server { org.apache.zookeeper.server.auth.DigestLoginModule required user_super=\" <SuperUserPassword> \" user <Username1>_=\" <Password1> \" user <USername2>_=\" <Password2> \"; };",
"Server { org.apache.zookeeper.server.auth.DigestLoginModule required user_super=\"123456\" user_kafka=\"123456\"; };",
"requireClientAuthScheme=sasl authProvider. <IdOfBroker1> =org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider. <IdOfBroker2> =org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider. <IdOfBroker3> =org.apache.zookeeper.server.auth.SASLAuthenticationProvider",
"requireClientAuthScheme=sasl authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider.2=org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider.3=org.apache.zookeeper.server.auth.SASLAuthenticationProvider",
"export KAFKA_OPTS=\"-Djava.security.auth.login.config=./config/zookeeper-jaas.conf\"; ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties",
"zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181",
"zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181/my-cluster-1",
"<listener_name>://<hostname>:<port>",
"listeners=internal-1://:9092,internal-2://:9093,replication://:9094",
"listeners=internal-1://:9092,internal-2://:9093 advertised.listeners=internal-1://my-broker-1.my-domain.com:1234,internal-2://my-broker-1.my-domain.com:1235",
"listeners=REPLICATION://0.0.0.0:9091 inter.broker.listener.name=REPLICATION",
"listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER controller.quorum.bootstrap.servers=localhost:9090",
"log.dirs=/var/lib/kafka",
"log.dirs=/var/lib/kafka1,/var/lib/kafka2,/var/lib/kafka3",
"broker.id=1"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/assembly-configuring-amq-streams-str |
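Building on Section 5.2, the following server.properties sketch shows the Environment Variables Configuration Provider supplying a sensitive broker setting. The env alias, the keystore path, and the KAFKA_KEYSTORE_PASSWORD variable name are illustrative assumptions rather than values required by Streams for Apache Kafka.

    # Register the provider under the alias "env" and point it at the plugin class
    config.providers=env
    config.providers.env.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider
    # Resolve the keystore password from an environment variable when the broker starts
    ssl.keystore.location=/var/lib/kafka/broker.keystore.p12
    ssl.keystore.password=${env:KAFKA_KEYSTORE_PASSWORD}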
Preface | Preface The Red Hat Virtualization Manager includes a data warehouse that collects monitoring data about hosts, virtual machines, and storage. Data Warehouse, which includes a database and a service, must be installed and configured along with the Manager setup, either on the same machine or on a separate server. The Red Hat Virtualization installation creates two databases: The Manager database ( engine ) is the primary data store used by the Red Hat Virtualization Manager. Information about the virtualization environment like its state, configuration, and performance are stored in this database. The Data Warehouse database ( ovirt_engine_history ) contains configuration information and statistical data which is collated over time from the Manager database. The configuration data in the Manager database is examined every minute, and changes are replicated to the Data Warehouse database. Tracking the changes to the database provides information on the objects in the database. This enables you to analyze and enhance the performance of your Red Hat Virtualization environment and resolve difficulties. To calculate an estimate of the space and resources the ovirt_engine_history database will use, use the RHV Manager History Database Size Calculator tool. The estimate is based on the number of entities and the length of time you have chosen to retain the history records. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/data_warehouse_guide/pr01 |
Chapter 120. KafkaMirrorMakerConsumerSpec schema reference | Chapter 120. KafkaMirrorMakerConsumerSpec schema reference Used in: KafkaMirrorMakerSpec Full list of KafkaMirrorMakerConsumerSpec schema properties Configures a MirrorMaker consumer. 120.1. numStreams Use the consumer.numStreams property to configure the number of streams for the consumer. You can increase the throughput in mirroring topics by increasing the number of consumer threads. Consumer threads belong to the consumer group specified for Kafka MirrorMaker. Topic partitions are assigned across the consumer threads, which consume messages in parallel. 120.2. offsetCommitInterval Use the consumer.offsetCommitInterval property to configure an offset auto-commit interval for the consumer. You can specify the regular time interval at which an offset is committed after Kafka MirrorMaker has consumed data from the source Kafka cluster. The time interval is set in milliseconds, with a default value of 60,000. 120.3. config Use the consumer.config properties to configure Kafka options for the consumer as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Interceptors Properties with the following prefixes cannot be set: bootstrap.servers group.id interceptor.classes sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to MirrorMaker, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Important The Cluster Operator does not validate keys or values in the config object provided. If an invalid configuration is provided, the MirrorMaker cluster might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all MirrorMaker nodes. 120.4. groupId Use the consumer.groupId property to configure a consumer group identifier for the consumer. Kafka MirrorMaker uses a Kafka consumer to consume messages, behaving like any other Kafka consumer client. Messages consumed from the source Kafka cluster are mirrored to a target Kafka cluster. A group identifier is required, as the consumer needs to be part of a consumer group for the assignment of partitions. 120.5. KafkaMirrorMakerConsumerSpec schema properties Property Property type Description numStreams integer Specifies the number of consumer stream threads to create. offsetCommitInterval integer Specifies the offset auto-commit interval in ms. Default value is 60000. bootstrapServers string A list of host:port pairs for establishing the initial connection to the Kafka cluster. groupId string A unique string that identifies the consumer group this consumer belongs to. authentication KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth Authentication configuration for connecting to the cluster. config map The MirrorMaker consumer config. 
Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). tls ClientTls TLS configuration for connecting MirrorMaker to the cluster. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaMirrorMakerConsumerSpec-reference |
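The consumer options described above might be combined as in the following partial KafkaMirrorMaker sketch. The API version shown is the one commonly used by recent Streams for Apache Kafka releases, and the resource name, bootstrap address, group ID, and config values are placeholders; required sections such as the producer configuration are omitted for brevity.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaMirrorMaker
    metadata:
      name: my-mirror-maker
    spec:
      consumer:
        bootstrapServers: my-source-cluster-kafka-bootstrap:9092
        groupId: my-source-group-id
        numStreams: 2                  # two consumer stream threads
        offsetCommitInterval: 120000   # commit offsets every two minutes
        config:
          max.poll.records: 100
      # producer and other required fields omitted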
Chapter 10. IdM integration with Red Hat products | Chapter 10. IdM integration with Red Hat products Find documentation for other Red Hat products that integrate with IdM. You can configure these products to allow your IdM users to access their services. Ansible Automation Platform OpenShift Container Platform Red Hat OpenStack Platform Red Hat Satellite Red Hat Single Sign-On Red Hat Virtualization | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/planning_identity_management/ref_idm-integration-with-other-red-hat-products_planning-identity-management |
About | About Red Hat Advanced Cluster Security for Kubernetes 4.7 Welcome to Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/about/index |
Chapter 1. Hosted control planes overview | Chapter 1. Hosted control planes overview You can deploy OpenShift Container Platform clusters by using two different control plane configurations: standalone or hosted control planes. The standalone configuration uses dedicated virtual machines or physical machines to host the control plane. With hosted control planes for OpenShift Container Platform, you create control planes as pods on a hosting cluster without the need for dedicated virtual or physical machines for each control plane. 1.1. Glossary of common concepts and personas for hosted control planes When you use hosted control planes for OpenShift Container Platform, it is important to understand its key concepts and the personas that are involved. 1.1.1. Concepts hosted cluster An OpenShift Container Platform cluster with its control plane and API endpoint hosted on a management cluster. The hosted cluster includes the control plane and its corresponding data plane. hosted cluster infrastructure Network, compute, and storage resources that exist in the tenant or end-user cloud account. hosted control plane An OpenShift Container Platform control plane that runs on the management cluster, which is exposed by the API endpoint of a hosted cluster. The components of a control plane include etcd, the Kubernetes API server, the Kubernetes controller manager, and a VPN. hosting cluster See management cluster . managed cluster A cluster that the hub cluster manages. This term is specific to the cluster lifecycle that the multicluster engine for Kubernetes Operator manages in Red Hat Advanced Cluster Management. A managed cluster is not the same thing as a management cluster . For more information, see Managed cluster . management cluster An OpenShift Container Platform cluster where the HyperShift Operator is deployed and where the control planes for hosted clusters are hosted. The management cluster is synonymous with the hosting cluster . management cluster infrastructure Network, compute, and storage resources of the management cluster. node pool A resource that contains the compute nodes. The control plane contains node pools. The compute nodes run applications and workloads. 1.1.2. Personas cluster instance administrator Users who assume this role are the equivalent of administrators in standalone OpenShift Container Platform. This user has the cluster-admin role in the provisioned cluster, but might not have power over when or how the cluster is updated or configured. This user might have read-only access to see some configuration projected into the cluster. cluster instance user Users who assume this role are the equivalent of developers in standalone OpenShift Container Platform. This user does not have a view into OperatorHub or machines. cluster service consumer Users who assume this role can request control planes and worker nodes, drive updates, or modify externalized configurations. Typically, this user does not manage or access cloud credentials or infrastructure encryption keys. The cluster service consumer persona can request hosted clusters and interact with node pools. Users who assume this role have RBAC to create, read, update, or delete hosted clusters and node pools within a logical boundary. cluster service provider Users who assume this role typically have the cluster-admin role on the management cluster and have RBAC to monitor and own the availability of the HyperShift Operator as well as the control planes for the tenant's hosted clusters. 
The cluster service provider persona is responsible for several activities, including the following examples: Owning service-level objects for control plane availability, uptime, and stability Configuring the cloud account for the management cluster to host control planes Configuring the user-provisioned infrastructure, which includes the host awareness of available compute resources 1.2. Introduction to hosted control planes You can use hosted control planes for Red Hat OpenShift Container Platform to reduce management costs, optimize cluster deployment time, and separate management and workload concerns so that you can focus on your applications. Hosted control planes is available by using the multicluster engine for Kubernetes Operator version 2.0 or later on the following platforms: Bare metal by using the Agent provider OpenShift Virtualization, as a Generally Available feature in connected environments and a Technology Preview feature in disconnected environments Amazon Web Services (AWS), as a Technology Preview feature IBM Z, as a Technology Preview feature IBM Power, as a Technology Preview feature 1.2.1. Architecture of hosted control planes OpenShift Container Platform is often deployed in a coupled, or standalone, model, where a cluster consists of a control plane and a data plane. The control plane includes an API endpoint, a storage endpoint, a workload scheduler, and an actuator that ensures state. The data plane includes compute, storage, and networking where workloads and applications run. The standalone control plane is hosted by a dedicated group of nodes, which can be physical or virtual, with a minimum number to ensure quorum. The network stack is shared. Administrator access to a cluster offers visibility into the cluster's control plane, machine management APIs, and other components that contribute to the state of a cluster. Although the standalone model works well, some situations require an architecture where the control plane and data plane are decoupled. In those cases, the data plane is on a separate network domain with a dedicated physical hosting environment. The control plane is hosted by using high-level primitives such as deployments and stateful sets that are native to Kubernetes. The control plane is treated as any other workload. 1.2.2. Benefits of hosted control planes With hosted control planes for OpenShift Container Platform, you can pave the way for a true hybrid-cloud approach and enjoy several other benefits. The security boundaries between management and workloads are stronger because the control plane is decoupled and hosted on a dedicated hosting service cluster. As a result, you are less likely to leak credentials for clusters to other users. Because infrastructure secret account management is also decoupled, cluster infrastructure administrators cannot accidentally delete control plane infrastructure. With hosted control planes, you can run many control planes on fewer nodes. As a result, clusters are more affordable. Because the control planes consist of pods that are launched on OpenShift Container Platform, control planes start quickly. The same principles apply to control planes and workloads, such as monitoring, logging, and auto-scaling. From an infrastructure perspective, you can push registries, HAProxy, cluster monitoring, storage nodes, and other infrastructure components to the tenant's cloud provider account, isolating usage to the tenant. 
From an operational perspective, multicluster management is more centralized, which results in fewer external factors that affect the cluster status and consistency. Site reliability engineers have a central place to debug issues and navigate to the cluster data plane, which can lead to shorter Time to Resolution (TTR) and greater productivity. 1.3. Differences between hosted control planes and OpenShift Container Platform Hosted control planes is a form factor of OpenShift Container Platform. Hosted clusters and the stand-alone OpenShift Container Platform clusters are configured and managed differently. See the following tables to understand the differences between OpenShift Container Platform and hosted control planes: 1.3.1. Cluster creation and lifecycle OpenShift Container Platform Hosted control planes You install a standalone OpenShift Container Platform cluster by using the openshift-install binary or the Assisted Installer. You install a hosted cluster by using the hypershift.openshift.io API resources such as HostedCluster and NodePool , on an existing OpenShift Container Platform cluster. 1.3.2. Cluster configuration OpenShift Container Platform Hosted control planes You configure cluster-scoped resources such as authentication, API server, and proxy by using the config.openshift.io API group. You configure resources that impact the control plane in the HostedCluster resource. 1.3.3. etcd encryption OpenShift Container Platform Hosted control planes You configure etcd encryption by using the APIServer resource with AES-GCM or AES-CBC. For more information, see "Enabling etcd encryption". You configure etcd encryption by using the HostedCluster resource in the SecretEncryption field with AES-CBC or KMS for Amazon Web Services. 1.3.4. Operators and control plane OpenShift Container Platform Hosted control planes A standalone OpenShift Container Platform cluster contains separate Operators for each control plane component. A hosted cluster contains a single Operator named Control Plane Operator that runs in the hosted control plane namespace on the management cluster. etcd uses storage that is mounted on the control plane nodes. The etcd cluster Operator manages etcd. etcd uses a persistent volume claim for storage and is managed by the Control Plane Operator. The Ingress Operator, network-related Operators, and Operator Lifecycle Manager (OLM) run on the cluster. The Ingress Operator, network-related Operators, and Operator Lifecycle Manager (OLM) run in the hosted control plane namespace on the management cluster. The OAuth server runs inside the cluster and is exposed through a route in the cluster. The OAuth server runs inside the control plane and is exposed through a route, node port, or load balancer on the management cluster. 1.3.5. Updates OpenShift Container Platform Hosted control planes The Cluster Version Operator (CVO) orchestrates the update process and monitors the ClusterVersion resource. Administrators and OpenShift components can interact with the CVO through the ClusterVersion resource. The oc adm upgrade command results in a change to the ClusterVersion.Spec.DesiredUpdate field in the ClusterVersion resource. The hosted control planes update results in a change to the .spec.release.image field in the HostedCluster and NodePools resources. Any changes to the ClusterVersion resource are ignored. After you update an OpenShift Container Platform cluster, both the control plane and compute machines are updated. After you update the hosted cluster, only the control plane is updated.
You perform node pool updates separately. 1.3.6. Machine configuration and management OpenShift Container Platform Hosted control planes The MachineSets resource manages machines in the openshift-machine-api namespace. The NodePool resource manages machines on the management cluster. A set of control plane machines are available. A set of control plane machines do not exist. You enable a machine health check by using the MachineHealthCheck resource. You enable a machine health check through the .spec.management.autoRepair field in the NodePool resource. You enable autoscaling by using the ClusterAutoscaler and MachineAutoscaler resources. You enable autoscaling through the spec.autoScaling field in the NodePool resource. Machines and machine sets are exposed in the cluster. Machines, machine sets, and machine deployments from upstream Cluster CAPI Operator are used to manage machines but are not exposed to the user. All machine sets are upgraded automatically when you update the cluster. You update your node pools independently from the hosted cluster updates. Only an in-place upgrade is supported in the cluster. Both replace and in-place upgrades are supported in the hosted cluster. The Machine Config Operator manages configurations for machines. The Machine Config Operator does not exist in hosted control planes. You configure machine Ignition by using the MachineConfig , KubeletConfig , and ContainerRuntimeConfig resources that are selected from a MachineConfigPool selector. You configure the MachineConfig , KubeletConfig , and ContainerRuntimeConfig resources through the config map referenced in the spec.config field of the NodePool resource. The Machine Config Daemon (MCD) manages configuration changes and updates on each of the nodes. For an in-place upgrade, the node pool controller creates a run-once pod that updates a machine based on your configuration. You can modify the machine configuration resources such as the SR-IOV Operator. You cannot modify the machine configuration resources. 1.3.7. Networking OpenShift Container Platform Hosted control planes The Kube API server communicates with nodes directly, because the Kube API server and nodes exist in the same Virtual Private Cloud (VPC). The Kube API server communicates with nodes through Konnectivity. The Kube API server and nodes exist in a different Virtual Private Cloud (VPC). Nodes communicate with the Kube API server through the internal load balancer. Nodes communicate with the Kube API server through an external load balancer or a node port. 1.3.8. Web console OpenShift Container Platform Hosted control planes The web console shows the status of a control plane. The web console does not show the status of a control plane. You can update your cluster by using the web console. You cannot update the hosted cluster by using the web console. The web console displays the infrastructure resources such as machines. The web console does not display the infrastructure resources. You can configure machines through the MachineConfig resource by using the web console. You cannot configure machines by using the web console. Additional resources Enabling etcd encryption 1.4. Relationship between hosted control planes, multicluster engine Operator, and RHACM You can configure hosted control planes by using the multicluster engine for Kubernetes Operator. The multicluster engine is an integral part of Red Hat Advanced Cluster Management (RHACM) and is enabled by default with RHACM. 
The multicluster engine Operator cluster lifecycle defines the process of creating, importing, managing, and destroying Kubernetes clusters across various infrastructure cloud providers, private clouds, and on-premises data centers. The multicluster engine Operator is the cluster lifecycle Operator that provides cluster management capabilities for OpenShift Container Platform and RHACM hub clusters. The multicluster engine Operator enhances cluster fleet management and supports OpenShift Container Platform cluster lifecycle management across clouds and data centers. Figure 1.1. Cluster life cycle and foundation You can use the multicluster engine Operator with OpenShift Container Platform as a standalone cluster manager or as part of a RHACM hub cluster. Tip A management cluster is also known as the hosting cluster. You can deploy OpenShift Container Platform clusters by using two different control plane configurations: standalone or hosted control planes. The standalone configuration uses dedicated virtual machines or physical machines to host the control plane. With hosted control planes for OpenShift Container Platform, you create control planes as pods on a management cluster without the need for dedicated virtual or physical machines for each control plane. Figure 1.2. RHACM and the multicluster engine Operator introduction diagram 1.5. Versioning for hosted control planes With each major, minor, or patch version release of OpenShift Container Platform, two components of hosted control planes are released: The HyperShift Operator The hcp command-line interface (CLI) The HyperShift Operator manages the lifecycle of hosted clusters that are represented by the HostedCluster API resources. The HyperShift Operator is released with each OpenShift Container Platform release. The HyperShift Operator creates the supported-versions config map in the hypershift namespace. The config map contains the supported hosted cluster versions. You can host different versions of control planes on the same management cluster. Example supported-versions config map object apiVersion: v1 data: supported-versions: '{"versions":["4.15"]}' kind: ConfigMap metadata: labels: hypershift.openshift.io/supported-versions: "true" name: supported-versions namespace: hypershift You can use the hcp CLI to create hosted clusters. You can use the hypershift.openshift.io API resources, such as, HostedCluster and NodePool , to create and manage OpenShift Container Platform clusters at scale. A HostedCluster resource contains the control plane and common data plane configuration. When you create a HostedCluster resource, you have a fully functional control plane with no attached nodes. A NodePool resource is a scalable set of worker nodes that is attached to a HostedCluster resource. The API version policy generally aligns with the policy for Kubernetes API versioning . Additional resources Configuring node tuning in a hosted cluster Advanced node tuning for hosted clusters by setting kernel boot parameters | [
"apiVersion: v1 data: supported-versions: '{\"versions\":[\"4.15\"]}' kind: ConfigMap metadata: labels: hypershift.openshift.io/supported-versions: \"true\" name: supported-versions namespace: hypershift"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/hosted_control_planes/hosted-control-planes-overview |
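To illustrate the NodePool fields referenced above ( spec.release.image , spec.management.autoRepair , and spec.autoScaling ), a partial manifest might look like the following. The API version, namespace, cluster name, and image reference are assumptions for this sketch, and a real node pool additionally needs platform-specific configuration.

    apiVersion: hypershift.openshift.io/v1beta1   # assumed API version
    kind: NodePool
    metadata:
      name: example-nodepool
      namespace: clusters
    spec:
      clusterName: example-hosted-cluster
      release:
        image: quay.io/openshift-release-dev/ocp-release:4.15.0-x86_64   # assumed release image
      management:
        autoRepair: true    # enables the machine health check behavior described above
      autoScaling:
        min: 2
        max: 5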
Chapter 13. SelfSubjectRulesReview [authorization.k8s.io/v1] | Chapter 13. SelfSubjectRulesReview [authorization.k8s.io/v1] Description SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace. The returned list of actions may be incomplete depending on the server's authorization mode, and any errors experienced during the evaluation. SelfSubjectRulesReview should be used by UIs to show/hide actions, or to quickly let an end user reason about their permissions. It should NOT Be used by external systems to drive authorization decisions as this raises confused deputy, cache lifetime/revocation, and correctness concerns. SubjectAccessReview, and LocalAccessReview are the correct way to defer authorization decisions to the API server. Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SelfSubjectRulesReviewSpec defines the specification for SelfSubjectRulesReview. status object SubjectRulesReviewStatus contains the result of a rules check. This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation. Because authorization rules are additive, if a rule appears in a list it's safe to assume the subject has that permission, even if that list is incomplete. 13.1.1. .spec Description SelfSubjectRulesReviewSpec defines the specification for SelfSubjectRulesReview. Type object Property Type Description namespace string Namespace to evaluate rules for. Required. 13.1.2. .status Description SubjectRulesReviewStatus contains the result of a rules check. This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation. Because authorization rules are additive, if a rule appears in a list it's safe to assume the subject has that permission, even if that list is incomplete. Type object Required resourceRules nonResourceRules incomplete Property Type Description evaluationError string EvaluationError can appear in combination with Rules. It indicates an error occurred during rule evaluation, such as an authorizer that doesn't support rule evaluation, and that ResourceRules and/or NonResourceRules may be incomplete. incomplete boolean Incomplete is true when the rules returned by this call are incomplete. This is most commonly encountered when an authorizer, such as an external authorizer, doesn't support rules evaluation. nonResourceRules array NonResourceRules is the list of actions the subject is allowed to perform on non-resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. 
nonResourceRules[] object NonResourceRule holds information that describes a rule for the non-resource resourceRules array ResourceRules is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. resourceRules[] object ResourceRule is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. 13.1.3. .status.nonResourceRules Description NonResourceRules is the list of actions the subject is allowed to perform on non-resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type array 13.1.4. .status.nonResourceRules[] Description NonResourceRule holds information that describes a rule for the non-resource Type object Required verbs Property Type Description nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. "*"s are allowed, but only as the full, final step in the path. "*" means all. verbs array (string) Verb is a list of kubernetes non-resource API verbs, like: get, post, put, delete, patch, head, options. "*" means all. 13.1.5. .status.resourceRules Description ResourceRules is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type array 13.1.6. .status.resourceRules[] Description ResourceRule is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "*" means all. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. "*" means all. resources array (string) Resources is a list of resources this rule applies to. "*" means all in the specified apiGroups. "*/foo" represents the subresource 'foo' for all resources in the specified apiGroups. verbs array (string) Verb is a list of kubernetes resource API verbs, like: get, list, watch, create, update, delete, proxy. "*" means all. 13.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/selfsubjectrulesreviews POST : create a SelfSubjectRulesReview 13.2.1. /apis/authorization.k8s.io/v1/selfsubjectrulesreviews Table 13.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a SelfSubjectRulesReview Table 13.2. Body parameters Parameter Type Description body SelfSubjectRulesReview schema Table 13.3. HTTP responses HTTP code Response body 200 - OK SelfSubjectRulesReview schema 201 - Created SelfSubjectRulesReview schema 202 - Accepted SelfSubjectRulesReview schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authorization_apis/selfsubjectrulesreview-authorization-k8s-io-v1 |
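To make the endpoint description above concrete, a minimal SelfSubjectRulesReview request body could look like the following sketch; the namespace value my-project is illustrative, and the API server fills in the status field (resourceRules, nonResourceRules, incomplete) for the authenticated user.

# Minimal SelfSubjectRulesReview request body (sketch); POST it to
# /apis/authorization.k8s.io/v1/selfsubjectrulesreviews
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectRulesReview
spec:
  # Namespace to evaluate rules for (required); "my-project" is a placeholder
  namespace: my-project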
Chapter 13. Configuring a high-availability cluster by using RHEL system roles | Chapter 13. Configuring a high-availability cluster by using RHEL system roles With the ha_cluster system role, you can configure and manage a high-availability cluster that uses the Pacemaker high availability cluster resource manager. 13.1. Variables of the ha_cluster RHEL system role In an ha_cluster RHEL system role playbook, you define the variables for a high availability cluster according to the requirements of your cluster deployment. The variables you can set for an ha_cluster RHEL system role are as follows: ha_cluster_enable_repos A boolean flag that enables the repositories containing the packages that are needed by the ha_cluster RHEL system role. When this variable is set to true , the default value, you must have active subscription coverage for RHEL and the RHEL High Availability Add-On on the systems that you will use as your cluster members or the system role will fail. ha_cluster_enable_repos_resilient_storage (RHEL 8.10 and later) A boolean flag that enables the repositories containing resilient storage packages, such as dlm or gfs2 . For this option to take effect, ha_cluster_enable_repos must be set to true . The default value of this variable is false . ha_cluster_manage_firewall (RHEL 8.8 and later) A boolean flag that determines whether the ha_cluster RHEL system role manages the firewall. When ha_cluster_manage_firewall is set to true , the firewall high availability service and the fence-virt port are enabled. When ha_cluster_manage_firewall is set to false , the ha_cluster RHEL system role does not manage the firewall. If your system is running the firewalld service, you must set the parameter to true in your playbook. You can use the ha_cluster_manage_firewall parameter to add ports, but you cannot use the parameter to remove ports. To remove ports, use the firewall system role directly. As of RHEL 8.8, the firewall is no longer configured by default, because it is configured only when ha_cluster_manage_firewall is set to true . ha_cluster_manage_selinux (RHEL 8.8 and later) A boolean flag that determines whether the ha_cluster RHEL system role manages the ports belonging to the firewall high availability service using the selinux RHEL system role. When ha_cluster_manage_selinux is set to true , the ports belonging to the firewall high availability service are associated with the SELinux port type cluster_port_t . When ha_cluster_manage_selinux is set to false , the ha_cluster RHEL system role does not manage SELinux. If your system is running the selinux service, you must set this parameter to true in your playbook. Firewall configuration is a prerequisite for managing SELinux. If the firewall is not installed, the managing SELinux policy is skipped. You can use the ha_cluster_manage_selinux parameter to add policy, but you cannot use the parameter to remove policy. To remove policy, use the selinux RHEL system role directly. ha_cluster_cluster_present A boolean flag which, if set to true , determines that HA cluster will be configured on the hosts according to the variables passed to the role. Any cluster configuration not specified in the playbook and not supported by the role will be lost. If ha_cluster_cluster_present is set to false , all HA cluster configuration will be removed from the target hosts. The default value of this variable is true . 
The following example playbook removes all cluster configuration on node1 and node2 ha_cluster_start_on_boot A boolean flag that determines whether cluster services will be configured to start on boot. The default value of this variable is true . ha_cluster_fence_agent_packages List of fence agent packages to install. The default value of this variable is fence-agents-all , fence-virt . ha_cluster_extra_packages List of additional packages to be installed. The default value of this variable is no packages. This variable can be used to install additional packages not installed automatically by the role, for example custom resource agents. It is possible to specify fence agents as members of this list. However, ha_cluster_fence_agent_packages is the recommended role variable to use for specifying fence agents, so that its default value is overridden. ha_cluster_hacluster_password A string value that specifies the password of the hacluster user. The hacluster user has full access to a cluster. To protect sensitive data, vault encrypt the password, as described in Encrypting content with Ansible Vault . There is no default password value, and this variable must be specified. ha_cluster_hacluster_qdevice_password (RHEL 8.9 and later) A string value that specifies the password of the hacluster user for a quorum device. This parameter is needed only if the ha_cluster_quorum parameter is configured to use a quorum device of type net and the password of the hacluster user on the quorum device is different from the password of the hacluster user specified with the ha_cluster_hacluster_password parameter. The hacluster user has full access to a cluster. To protect sensitive data, vault encrypt the password, as described in Encrypting content with Ansible Vault . There is no default value for this password. ha_cluster_corosync_key_src The path to Corosync authkey file, which is the authentication and encryption key for Corosync communication. It is highly recommended that you have a unique authkey value for each cluster. The key should be 256 bytes of random data. If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault . If no key is specified, a key already present on the nodes will be used. If nodes do not have the same key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no node has a key, a new key will be generated and distributed to the nodes. If this variable is set, ha_cluster_regenerate_keys is ignored for this key. The default value of this variable is null. ha_cluster_pacemaker_key_src The path to the Pacemaker authkey file, which is the authentication and encryption key for Pacemaker communication. It is highly recommended that you have a unique authkey value for each cluster. The key should be 256 bytes of random data. If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault . If no key is specified, a key already present on the nodes will be used. If nodes do not have the same key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no node has a key, a new key will be generated and distributed to the nodes. If this variable is set, ha_cluster_regenerate_keys is ignored for this key. The default value of this variable is null. 
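The removal playbook referred to above under ha_cluster_cluster_present is not reproduced in this extract. A minimal sketch of such a playbook, assuming an inventory that defines node1 and node2 and reusing the include_role style of the later examples, could look like the following; only the ha_cluster_cluster_present value is essential, the rest is illustrative.

---
- name: Remove HA cluster configuration
  hosts: node1 node2
  tasks:
    - name: Remove all cluster configuration from the nodes
      ansible.builtin.include_role:
        name: rhel-system-roles.ha_cluster
      vars:
        # Setting this to false removes any existing HA cluster
        # configuration from the target hosts
        ha_cluster_cluster_present: false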
ha_cluster_fence_virt_key_src The path to the fence-virt or fence-xvm pre-shared key file, which is the location of the authentication key for the fence-virt or fence-xvm fence agent. If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault . If no key is specified, a key already present on the nodes will be used. If nodes do not have the same key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no node has a key, a new key will be generated and distributed to the nodes. If the ha_cluster RHEL system role generates a new key in this fashion, you should copy the key to your nodes' hypervisor to ensure that fencing works. If this variable is set, ha_cluster_regenerate_keys is ignored for this key. The default value of this variable is null. ha_cluster_pcsd_public_key_src , ha_cluster_pcsd_private_key_src The path to the pcsd TLS certificate and private key. If this is not specified, a certificate-key pair already present on the nodes will be used. If a certificate-key pair is not present, a random new one will be generated. If you specify a private key value for this variable, it is recommended that you vault encrypt the key, as described in Encrypting content with Ansible Vault . If these variables are set, ha_cluster_regenerate_keys is ignored for this certificate-key pair. The default value of these variables is null. ha_cluster_pcsd_certificates (RHEL 8.8 and later) Creates a pcsd private key and certificate using the certificate RHEL system role. If your system is not configured with a pcsd private key and certificate, you can create them in one of two ways: Set the ha_cluster_pcsd_certificates variable. When you set the ha_cluster_pcsd_certificates variable, the certificate RHEL system role is used internally and it creates the private key and certificate for pcsd as defined. Do not set the ha_cluster_pcsd_public_key_src , ha_cluster_pcsd_private_key_src , or the ha_cluster_pcsd_certificates variables. If you do not set any of these variables, the ha_cluster RHEL system role will create pcsd certificates by means of pcsd itself. The value of ha_cluster_pcsd_certificates is set to the value of the variable certificate_requests as specified in the certificate RHEL system role. For more information about the certificate RHEL system role, see Requesting certificates using RHEL system roles . The following operational considerations apply to the use of the ha_cluster_pcsd_certificates variable: Unless you are using IPA and joining the systems to an IPA domain, the certificate RHEL system role creates self-signed certificates. In this case, you must explicitly configure trust settings outside of the context of RHEL system roles. System roles do not support configuring trust settings. When you set the ha_cluster_pcsd_certificates variable, do not set the ha_cluster_pcsd_public_key_src and ha_cluster_pcsd_private_key_src variables. When you set the ha_cluster_pcsd_certificates variable, ha_cluster_regenerate_keys is ignored for this certificate-key pair. The default value of this variable is [] . For an example ha_cluster RHEL system role playbook that creates TLS certificates and key files in a high availability cluster, see Creating pcsd TLS certificates and key files for a high availability cluster . ha_cluster_regenerate_keys A boolean flag which, when set to true , determines that pre-shared keys and TLS certificates will be regenerated.
For more information about when keys and certificates will be regenerated, see the descriptions of the ha_cluster_corosync_key_src , ha_cluster_pacemaker_key_src , ha_cluster_fence_virt_key_src , ha_cluster_pcsd_public_key_src , and ha_cluster_pcsd_private_key_src variables. The default value of this variable is false . ha_cluster_pcs_permission_list Configures permissions to manage a cluster using pcsd . The items you configure with this variable are as follows: type - user or group name - user or group name allow_list - Allowed actions for the specified user or group: read - View cluster status and settings write - Modify cluster settings except permissions and ACLs grant - Modify cluster permissions and ACLs full - Unrestricted access to a cluster including adding and removing nodes and access to keys and certificates The structure of the ha_cluster_pcs_permission_list variable and its default values are as follows: ha_cluster_cluster_name The name of the cluster. This is a string value with a default of my-cluster . ha_cluster_transport (RHEL 8.7 and later) Sets the cluster transport method. The items you configure with this variable are as follows: type (optional) - Transport type: knet , udp , or udpu . The udp and udpu transport types support only one link. Encryption is always disabled for udp and udpu . Defaults to knet if not specified. options (optional) - List of name-value dictionaries with transport options. links (optional) - List of list of name-value dictionaries. Each list of name-value dictionaries holds options for one Corosync link. It is recommended that you set the linknumber value for each link. Otherwise, the first list of dictionaries is assigned by default to the first link, the second one to the second link, and so on. compression (optional) - List of name-value dictionaries configuring transport compression. Supported only with the knet transport type. crypto (optional) - List of name-value dictionaries configuring transport encryption. By default, encryption is enabled. Supported only with the knet transport type. For a list of allowed options, see the pcs -h cluster setup help page or the setup description in the cluster section of the pcs (8) man page. For more detailed descriptions, see the corosync.conf (5) man page. The structure of the ha_cluster_transport variable is as follows: For an example ha_cluster RHEL system role playbook that configures a transport method, see Configuring Corosync values in a high availability cluster . ha_cluster_totem (RHEL 8.7 and later) Configures Corosync totem. For a list of allowed options, see the pcs -h cluster setup help page or the setup description in the cluster section of the pcs (8) man page. For a more detailed description, see the corosync.conf (5) man page. The structure of the ha_cluster_totem variable is as follows: For an example ha_cluster RHEL system role playbook that configures a Corosync totem, see Configuring Corosync values in a high availability cluster . ha_cluster_quorum (RHEL 8.7 and later) Configures cluster quorum. You can configure the following items for cluster quorum: options (optional) - List of name-value dictionaries configuring quorum. Allowed options are: auto_tie_breaker , last_man_standing , last_man_standing_window , and wait_for_all . For information about quorum options, see the votequorum (5) man page. device (optional) - (RHEL 8.8 and later) Configures the cluster to use a quorum device. By default, no quorum device is used. model (mandatory) - Specifies a quorum device model. 
Only net is supported model_options (optional) - List of name-value dictionaries configuring the specified quorum device model. For model net , you must specify host and algorithm options. Use the pcs-address option to set a custom pcsd address and port to connect to the qnetd host. If you do not specify this option, the role connects to the default pcsd port on the host . generic_options (optional) - List of name-value dictionaries setting quorum device options that are not model specific. heuristics_options (optional) - List of name-value dictionaries configuring quorum device heuristics. For information about quorum device options, see the corosync-qdevice (8) man page. The generic options are sync_timeout and timeout . For model net options see the quorum.device.net section. For heuristics options, see the quorum.device.heuristics section. To regenerate a quorum device TLS certificate, set the ha_cluster_regenerate_keys variable to true . The structure of the ha_cluster_quorum variable is as follows: For an example ha_cluster RHEL system role playbook that configures cluster quorum, see Configuring Corosync values in a high availability cluster . For an example ha_cluster RHEL system role playbook that configures a cluster using a quorum device, see Configuring a high availability cluster using a quorum device . ha_cluster_sbd_enabled (RHEL 8.7 and later) A boolean flag which determines whether the cluster can use the SBD node fencing mechanism. The default value of this variable is false . For an example ha_cluster system role playbook that enables SBD, see Configuring a high availability cluster with SBD node fencing . ha_cluster_sbd_options (RHEL 8.7 and later) List of name-value dictionaries specifying SBD options. For information about these options, see the Configuration via environment section of the sbd (8) man page. Supported options are: delay-start - defaults to false , documented as SBD_DELAY_START startmode - defaults to always , documented as SBD_START_MODE timeout-action - defaults to flush,reboot , documented as SBD_TIMEOUT_ACTION watchdog-timeout - defaults to 5 , documented as SBD_WATCHDOG_TIMEOUT For an example ha_cluster system role playbook that configures SBD options, see Configuring a high availability cluster with SBD node fencing . When using SBD, you can optionally configure watchdog and SBD devices for each node in an inventory. For information about configuring watchdog and SBD devices in an inventory file, see Specifying an inventory for the ha_cluster system role . ha_cluster_cluster_properties List of sets of cluster properties for Pacemaker cluster-wide configuration. Only one set of cluster properties is supported. The structure of a set of cluster properties is as follows: By default, no properties are set. The following example playbook configures a cluster consisting of node1 and node2 and sets the stonith-enabled and no-quorum-policy cluster properties. ha_cluster_node_options (RHEL 8. 10 and later) This variable defines settings which vary from one cluster node to another. It sets the options for the specified nodes, but does not specify which nodes form the cluster. You specify which nodes form the cluster with the hosts parameter in an inventory or a playbook. The items you configure with this variable are as follows: node_name (mandatory) - Name of the node for which to define Pacemaker node attributes. It must match a name defined for a node. attributes (optional) - List of sets of Pacemaker node attributes for the node. 
Currently, only one set is supported. The first set is used and the rest are ignored. The structure of the ha_cluster_node_options variable is as follows: By default, no node options are defined. For an example ha_cluster RHEL system role playbook that includes node options configuration, see Configuring a high availability cluster with node attributes . ha_cluster_resource_primitives This variable defines pacemaker resources configured by the RHEL system role, including fencing resources. You can configure the following items for each resource: id (mandatory) - ID of a resource. agent (mandatory) - Name of a resource or fencing agent, for example ocf:pacemaker:Dummy or stonith:fence_xvm . It is mandatory to specify stonith: for STONITH agents. For resource agents, it is possible to use a short name, such as Dummy , instead of ocf:pacemaker:Dummy . However, if several agents with the same short name are installed, the role will fail as it will be unable to decide which agent should be used. Therefore, it is recommended that you use full names when specifying a resource agent. instance_attrs (optional) - List of sets of the resource's instance attributes. Currently, only one set is supported. The exact names and values of attributes, as well as whether they are mandatory or not, depend on the resource or fencing agent. meta_attrs (optional) - List of sets of the resource's meta attributes. Currently, only one set is supported. copy_operations_from_agent (optional) - (RHEL 8.9 and later) Resource agents usually define default settings for resource operations, such as interval and timeout , optimized for the specific agent. If this variable is set to true , then those settings are copied to the resource configuration. Otherwise, clusterwide defaults apply to the resource. If you also define resource operation defaults for the resource with the ha_cluster_resource_operation_defaults role variable, you can set this to false . The default value of this variable is true . operations (optional) - List of the resource's operations. action (mandatory) - Operation action as defined by pacemaker and the resource or fencing agent. attrs (mandatory) - Operation options, at least one option must be specified. The structure of the resource definition that you configure with the ha_cluster RHEL system role is as follows: By default, no resources are defined. For an example ha_cluster RHEL system role playbook that includes resource configuration, see Configuring a high availability cluster with fencing and resources . ha_cluster_resource_groups This variable defines pacemaker resource groups configured by the system role. You can configure the following items for each resource group: id (mandatory) - ID of a group. resources (mandatory) - List of the group's resources. Each resource is referenced by its ID and the resources must be defined in the ha_cluster_resource_primitives variable. At least one resource must be listed. meta_attrs (optional) - List of sets of the group's meta attributes. Currently, only one set is supported. The structure of the resource group definition that you configure with the ha_cluster RHEL system role is as follows: By default, no resource groups are defined. For an example ha_cluster RHEL system role playbook that includes resource group configuration, see Configuring a high availability cluster with fencing and resources . ha_cluster_resource_clones This variable defines pacemaker resource clones configured by the system role. 
You can configure the following items for a resource clone: resource_id (mandatory) - Resource to be cloned. The resource must be defined in the ha_cluster_resource_primitives variable or the ha_cluster_resource_groups variable. promotable (optional) - Indicates whether the resource clone to be created is a promotable clone, indicated as true or false . id (optional) - Custom ID of the clone. If no ID is specified, it will be generated. A warning will be displayed if this option is not supported by the cluster. meta_attrs (optional) - List of sets of the clone's meta attributes. Currently, only one set is supported. The structure of the resource clone definition that you configure with the ha_cluster RHEL system role is as follows: By default, no resource clones are defined. For an example ha_cluster RHEL system role playbook that includes resource clone configuration, see Configuring a high availability cluster with fencing and resources . ha_cluster_resource_defaults (RHEL 8.9 and later) This variable defines sets of resource defaults. You can define multiple sets of defaults and apply them to resources of specific agents using rules. The defaults you specify with the ha_cluster_resource_defaults variable do not apply to resources which override them with their own defined values. Only meta attributes can be specified as defaults. You can configure the following items for each defaults set: id (optional) - ID of the defaults set. If not specified, it is autogenerated. rule (optional) - Rule written using pcs syntax defining when and for which resources the set applies. For information on specifying a rule, see the resource defaults set create section of the pcs (8) man page. score (optional) - Weight of the defaults set. attrs (optional) - Meta attributes applied to resources as defaults. The structure of the ha_cluster_resource_defaults variable is as follows: For an example ha_cluster RHEL system role playbook that configures resource defaults, see Configuring a high availability cluster with resource and resource operation defaults . ha_cluster_resource_operation_defaults (RHEL 8.9 and later) This variable defines sets of resource operation defaults. You can define multiple sets of defaults and apply them to resources of specific agents and specific resource operations using rules. The defaults you specify with the ha_cluster_resource_operation_defaults variable do not apply to resource operations which override them with their own defined values. By default, the ha_cluster RHEL system role configures resources to define their own values for resource operations. For information about overriding these defaults with the ha_cluster_resource_operations_defaults variable, see the description of the copy_operations_from_agent item in ha_cluster_resource_primitives . Only meta attributes can be specified as defaults. The structure of the ha_cluster_resource_operations_defaults variable is the same as the structure for the ha_cluster_resource_defaults variable, with the exception of how you specify a rule. For information about specifying a rule to describe the resource operation to which a set applies, see the resource op defaults set create section of the pcs (8) man page. ha_cluster_stonith_levels (RHEL 8.10 and later) This variable defines STONITH levels, also known as fencing topology. Fencing levels configure a cluster to use multiple devices to fence nodes. 
You can define alternative devices in case one device fails and you can require multiple devices to all be executed successfully to consider a node successfully fenced. For more information on fencing levels, see Configuring fencing levels in Configuring and managing high availability clusters . You can configure the following items when defining fencing levels: level (mandatory) - Order in which to attempt the fencing level. Pacemaker attempts levels in ascending order until one succeeds. target (optional) - Name of a node this level applies to. You must specify one of the following three selections: target_pattern - POSIX extended regular expression matching the names of the nodes this level applies to. target_attribute - Name of a node attribute that is set for the node this level applies to. target_attribute and target_value - Name and value of a node attribute that is set for the node this level applies to. resouce_ids (mandatory) - List of fencing resources that must all be tried for this level. By default, no fencing levels are defined. The structure of the fencing levels definition that you configure with the ha_cluster RHEL system role is as follows: For an example ha_cluster RHEL system role playbook that configures fencing defaults, see Configuring a high availability cluster with fencing levels . ha_cluster_constraints_location This variable defines resource location constraints. Resource location constraints indicate which nodes a resource can run on. You can specify a resources specified by a resource ID or by a pattern, which can match more than one resource. You can specify a node by a node name or by a rule. You can configure the following items for a resource location constraint: resource (mandatory) - Specification of a resource the constraint applies to. node (mandatory) - Name of a node the resource should prefer or avoid. id (optional) - ID of the constraint. If not specified, it will be autogenerated. options (optional) - List of name-value dictionaries. score - Sets the weight of the constraint. A positive score value means the resource prefers running on the node. A negative score value means the resource should avoid running on the node. A score value of -INFINITY means the resource must avoid running on the node. If score is not specified, the score value defaults to INFINITY . By default no resource location constraints are defined. The structure of a resource location constraint specifying a resource ID and node name is as follows: The items that you configure for a resource location constraint that specifies a resource pattern are the same items that you configure for a resource location constraint that specifies a resource ID, with the exception of the resource specification itself. The item that you specify for the resource specification is as follows: pattern (mandatory) - POSIX extended regular expression resource IDs are matched against. The structure of a resource location constraint specifying a resource pattern and node name is as follows: You can configure the following items for a resource location constraint that specifies a resource ID and a rule: resource (mandatory) - Specification of a resource the constraint applies to. id (mandatory) - Resource ID. role (optional) - The resource role to which the constraint is limited: Started , Unpromoted , Promoted . rule (mandatory) - Constraint rule written using pcs syntax. For further information, see the constraint location section of the pcs (8) man page. 
Other items to specify have the same meaning as for a resource constraint that does not specify a rule. The structure of a resource location constraint that specifies a resource ID and a rule is as follows: The items that you configure for a resource location constraint that specifies a resource pattern and a rule are the same items that you configure for a resource location constraint that specifies a resource ID and a rule, with the exception of the resource specification itself. The item that you specify for the resource specification is as follows: pattern (mandatory) - POSIX extended regular expression resource IDs are matched against. The structure of a resource location constraint that specifies a resource pattern and a rule is as follows: For an example ha_cluster RHEL system role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints . ha_cluster_constraints_colocation This variable defines resource colocation constraints. Resource colocation constraints indicate that the location of one resource depends on the location of another one. There are two types of colocation constraints: a simple colocation constraint for two resources, and a set colocation constraint for multiple resources. You can configure the following items for a simple resource colocation constraint: resource_follower (mandatory) - A resource that should be located relative to resource_leader . id (mandatory) - Resource ID. role (optional) - The resource role to which the constraint is limited: Started , Unpromoted , Promoted . resource_leader (mandatory) - The cluster will decide where to put this resource first and then decide where to put resource_follower . id (mandatory) - Resource ID. role (optional) - The resource role to which the constraint is limited: Started , Unpromoted , Promoted . id (optional) - ID of the constraint. If not specified, it will be autogenerated. options (optional) - List of name-value dictionaries. score - Sets the weight of the constraint. Positive score values indicate the resources should run on the same node. Negative score values indicate the resources should run on different nodes. A score value of +INFINITY indicates the resources must run on the same node. A score value of -INFINITY indicates the resources must run on different nodes. If score is not specified, the score value defaults to INFINITY . By default no resource colocation constraints are defined. The structure of a simple resource colocation constraint is as follows: You can configure the following items for a resource set colocation constraint: resource_sets (mandatory) - List of resource sets. resource_ids (mandatory) - List of resources in a set. options (optional) - List of name-value dictionaries fine-tuning how resources in the sets are treated by the constraint. id (optional) - Same values as for a simple colocation constraint. options (optional) - Same values as for a simple colocation constraint. The structure of a resource set colocation constraint is as follows: For an example ha_cluster RHEL system role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints . ha_cluster_constraints_order This variable defines resource order constraints. Resource order constraints indicate the order in which certain resource actions should occur. 
There are two types of resource order constraints: a simple order constraint for two resources, and a set order constraint for multiple resources. You can configure the following items for a simple resource order constraint: resource_first (mandatory) - Resource that the resource_then resource depends on. id (mandatory) - Resource ID. action (optional) - The action that must complete before an action can be initiated for the resource_then resource. Allowed values: start , stop , promote , demote . resource_then (mandatory) - The dependent resource. id (mandatory) - Resource ID. action (optional) - The action that the resource can execute only after the action on the resource_first resource has completed. Allowed values: start , stop , promote , demote . id (optional) - ID of the constraint. If not specified, it will be autogenerated. options (optional) - List of name-value dictionaries. By default no resource order constraints are defined. The structure of a simple resource order constraint is as follows: You can configure the following items for a resource set order constraint: resource_sets (mandatory) - List of resource sets. resource_ids (mandatory) - List of resources in a set. options (optional) - List of name-value dictionaries fine-tuning how resources in the sets are treated by the constraint. id (optional) - Same values as for a simple order constraint. options (optional) - Same values as for a simple order constraint. The structure of a resource set order constraint is as follows: For an example ha_cluster RHEL system role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints . ha_cluster_constraints_ticket This variable defines resource ticket constraints. Resource ticket constraints indicate the resources that depend on a certain ticket. There are two types of resource ticket constraints: a simple ticket constraint for one resource, and a ticket order constraint for multiple resources. You can configure the following items for a simple resource ticket constraint: resource (mandatory) - Specification of a resource the constraint applies to. id (mandatory) - Resource ID. role (optional) - The resource role to which the constraint is limited: Started , Unpromoted , Promoted . ticket (mandatory) - Name of a ticket the resource depends on. id (optional) - ID of the constraint. If not specified, it will be autogenerated. options (optional) - List of name-value dictionaries. loss-policy (optional) - Action to perform on the resource if the ticket is revoked. By default no resource ticket constraints are defined. The structure of a simple resource ticket constraint is as follows: You can configure the following items for a resource set ticket constraint: resource_sets (mandatory) - List of resource sets. resource_ids (mandatory) - List of resources in a set. options (optional) - List of name-value dictionaries fine-tuning how resources in the sets are treated by the constraint. ticket (mandatory) - Same value as for a simple ticket constraint. id (optional) - Same value as for a simple ticket constraint. options (optional) - Same values as for a simple ticket constraint. The structure of a resource set ticket constraint is as follows: For an example ha_cluster RHEL system role playbook that creates a cluster with resource constraints, see Configuring a high availability cluster with resource constraints . 
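Because the structure listings for the constraint variables do not appear in this extract, the following sketch, assembled only from the item descriptions above, shows how a simple order constraint and a simple ticket constraint might be expressed in a playbook's vars section; the resource IDs, ticket name, and option values are placeholders.

# Sketch of constraint variables built from the item descriptions above
ha_cluster_constraints_order:
  - resource_first:
      id: resource-A        # placeholder resource ID
      action: start
    resource_then:
      id: resource-B        # placeholder resource ID
      action: start
    options:
      - name: symmetrical
        value: "false"
ha_cluster_constraints_ticket:
  - resource:
      id: resource-A        # placeholder resource ID
    ticket: ticket-A        # placeholder ticket name
    options:
      - name: loss-policy
        value: stop

The location and colocation constraint variables follow the same pattern of nested id dictionaries and name-value option lists described above.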
ha_cluster_qnetd (RHEL 8.8 and later) This variable configures a qnetd host which can then serve as an external quorum device for clusters. You can configure the following items for a qnetd host: present (optional) - If true , configure a qnetd instance on the host. If false , remove qnetd configuration from the host. The default value is false . If you set this true , you must set ha_cluster_cluster_present to false . start_on_boot (optional) - Configures whether the qnetd instance should start automatically on boot. The default value is true . regenerate_keys (optional) - Set this variable to true to regenerate the qnetd TLS certificate. If you regenerate the certificate, you must either re-run the role for each cluster to connect it to the qnetd host again or run pcs manually. You cannot run qnetd on a cluster node because fencing would disrupt qnetd operation. For an example ha_cluster RHEL system role playbook that configures a cluster using a quorum device, see Configuring a cluster using a quorum device . Additional resources /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file /usr/share/doc/rhel-system-roles/ha_cluster/ directory 13.2. Specifying an inventory for the ha_cluster RHEL system role When configuring an HA cluster using the ha_cluster RHEL system role playbook, you configure the names and addresses of the nodes for the cluster in an inventory. 13.2.1. Configuring node names and addresses in an inventory For each node in an inventory, you can optionally specify the following items: node_name - the name of a node in a cluster. pcs_address - an address used by pcs to communicate with the node. It can be a name, FQDN or an IP address and it can include a port number. corosync_addresses - list of addresses used by Corosync. All nodes which form a particular cluster must have the same number of addresses. The order of the addresses must be the same for all nodes, so that the addresses belonging to a particular link are specified in the same position for all nodes. The following example shows an inventory with targets node1 and node2 . node1 and node2 must be either fully qualified domain names or must otherwise be able to connect to the nodes as when, for example, the names are resolvable through the /etc/hosts file. Additional resources /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file /usr/share/doc/rhel-system-roles/ha_cluster/ directory 13.2.2. Configuring watchdog and SBD devices in an inventory (RHEL 8.7 and later) When using SBD, you can optionally configure watchdog and SBD devices for each node in an inventory. Even though all SBD devices must be shared to and accessible from all nodes, each node can use different names for the devices. Watchdog devices can be different for each node as well. For information about the SBD variables you can set in a system role playbook, see the entries for ha_cluster_sbd_enabled and ha_cluster_sbd_options in Variables of the ha_cluster RHEL system role . For each node in an inventory, you can optionally specify the following items: sbd_watchdog_modules (optional) - (RHEL 8.9 and later) Watchdog kernel modules to be loaded, which create /dev/watchdog* devices. Defaults to empty list if not set. sbd_watchdog_modules_blocklist (optional) - (RHEL 8.9 and later) Watchdog kernel modules to be unloaded and blocked. Defaults to empty list if not set. sbd_watchdog - Watchdog device to be used by SBD. Defaults to /dev/watchdog if not set. 
sbd_devices - Devices to use for exchanging SBD messages and for monitoring. Defaults to empty list if not set. Always refer to the devices using the long, stable device name (/dev/disk/by-id/). The following example shows an inventory that configures watchdog and SBD devices for targets node1 and node2 . For an example procedure that creates high availability cluster that uses SBD fencing, see Configuring a high availability cluster with SBD node fencing . Additional resources /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file /usr/share/doc/rhel-system-roles/ha_cluster/ directory 13.3. Creating pcsd TLS certificates and key files for a high availability cluster (RHEL 8.8 and later) The connection between cluster nodes is secured using Transport Layer Security (TLS) encryption. By default, the pcsd daemon generates self-signed certificates. For many deployments, however, you may want to replace the default certificates with certificates issued by a certificate authority of your company and apply your company certificate policies for pcsd . You can use the ha_cluster RHEL system role to create TLS certificates and key files in a high availability cluster. When you run this playbook, the ha_cluster RHEL system role uses the certificate RHEL system role internally to manage TLS certificates. Warning The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On. The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role . For general information about creating an inventory file, see Preparing a control node on RHEL 8 . Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: cluster_password: <cluster_password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create TLS certificates and key files in a high availability cluster ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: "{{ cluster_password }}" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_pcsd_certificates: - name: FILENAME common_name: "{{ ansible_hostname }}" ca: self-sign The settings specified in the example playbook include the following: ha_cluster_cluster_name: <cluster_name> The name of the cluster you are creating. ha_cluster_hacluster_password: <password> The password of the hacluster user. The hacluster user has full access to a cluster. ha_cluster_manage_firewall: true A variable that determines whether the ha_cluster RHEL system role manages the firewall. 
ha_cluster_manage_selinux: true A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role. ha_cluster_pcsd_certificates: <certificate_properties> A variable that creates a self-signed pcsd certificate and private key files in /var/lib/pcsd . In this example, the pcsd certificate has the file name FILENAME.crt and the key file is named FILENAME.key . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file /usr/share/doc/rhel-system-roles/ha_cluster/ directory Requesting certificates using RHEL system roles 13.4. Configuring a high availability cluster running no resources You can use the ha_cluster system role to configure a basic cluster in a simple, automatic way. Once you have created a basic cluster, you can use the pcs command-line interface to configure the other cluster components and behaviors on a resource-by-resource basis. The following example procedure configures a basic two-node cluster with no fencing configured using the minimum required parameters. Warning The ha_cluster system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On. The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster system role . For general information about creating an inventory file, see Preparing a control node on RHEL 8 . Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: cluster_password: <cluster_password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster with minimum required parameters and no fencing ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: "{{ cluster_password }}" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true The settings specified in the example playbook include the following: ha_cluster_cluster_name: <cluster_name> The name of the cluster you are creating. ha_cluster_hacluster_password: <password> The password of the hacluster user. The hacluster user has full access to a cluster. ha_cluster_manage_firewall: true A variable that determines whether the ha_cluster RHEL system role manages the firewall. 
ha_cluster_manage_selinux: true A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file /usr/share/doc/rhel-system-roles/ha_cluster/ directory 13.5. Configuring a high availability cluster with fencing and resources The specific components of a cluster configuration depend on your individual needs, which vary between sites. The following example procedure shows the formats for configuring different cluster components by using the ha_cluster RHEL system role. The configured cluster includes a fencing device, cluster resources, resource groups, and a cloned resource. Warning The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On. The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role . For general information about creating an inventory file, see Preparing a control node on RHEL 8 . Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: cluster_password: <cluster_password> Save the changes, and close the editor. Ansible encrypts the data in the vault. 
Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster with fencing and resources ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: "{{ cluster_password }}" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_resource_primitives: - id: xvm-fencing agent: 'stonith:fence_xvm' instance_attrs: - attrs: - name: pcmk_host_list value: node1 node2 - id: simple-resource agent: 'ocf:pacemaker:Dummy' - id: resource-with-options agent: 'ocf:pacemaker:Dummy' instance_attrs: - attrs: - name: fake value: fake-value - name: passwd value: passwd-value meta_attrs: - attrs: - name: target-role value: Started - name: is-managed value: 'true' operations: - action: start attrs: - name: timeout value: '30s' - action: monitor attrs: - name: timeout value: '5' - name: interval value: '1min' - id: dummy-1 agent: 'ocf:pacemaker:Dummy' - id: dummy-2 agent: 'ocf:pacemaker:Dummy' - id: dummy-3 agent: 'ocf:pacemaker:Dummy' - id: simple-clone agent: 'ocf:pacemaker:Dummy' - id: clone-with-options agent: 'ocf:pacemaker:Dummy' ha_cluster_resource_groups: - id: simple-group resource_ids: - dummy-1 - dummy-2 meta_attrs: - attrs: - name: target-role value: Started - name: is-managed value: 'true' - id: cloned-group resource_ids: - dummy-3 ha_cluster_resource_clones: - resource_id: simple-clone - resource_id: clone-with-options promotable: yes id: custom-clone-id meta_attrs: - attrs: - name: clone-max value: '2' - name: clone-node-max value: '1' - resource_id: cloned-group promotable: yes The settings specified in the example playbook include the following: ha_cluster_cluster_name: <cluster_name> The name of the cluster you are creating. ha_cluster_hacluster_password: <password> The password of the hacluster user. The hacluster user has full access to a cluster. ha_cluster_manage_firewall: true A variable that determines whether the ha_cluster RHEL system role manages the firewall. ha_cluster_manage_selinux: true A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role. ha_cluster_resource_primitives: <cluster_resources> A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing ha_cluster_resource_groups: <resource_groups> A list of resource group definitions configured by the ha_cluster RHEL system role. ha_cluster_resource_clones: <resource_clones> A list of resource clone definitions configured by the ha_cluster RHEL system role. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file /usr/share/doc/rhel-system-roles/ha_cluster/ directory Configuring fencing in a Red Hat High Availability cluster 13.6. Configuring a high availability cluster with resource and resource operation defaults (RHEL 8.9 and later) In your cluster configuration, you can change the Pacemaker default values of a resource option for all resources. 
You can also change the default value for all resource operations in the cluster. For information about changing the default value of a resource option, see Changing the default value of a resource option . For information about global resource operation defaults, see Configuring global resource operation defaults . The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that defines resource and resource operation defaults. Warning The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On. The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role . For general information about creating an inventory file, see Preparing a control node on RHEL 8 . Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: cluster_password: <cluster_password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster with fencing and resource operation defaults ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: "{{ cluster_password }}" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true # Set a different resource-stickiness value during # and outside work hours. This allows resources to # automatically move back to their most # preferred hosts, but at a time that # does not interfere with business activities. ha_cluster_resource_defaults: meta_attrs: - id: core-hours rule: date-spec hours=9-16 weekdays=1-5 score: 2 attrs: - name: resource-stickiness value: INFINITY - id: after-hours score: 1 attrs: - name: resource-stickiness value: 0 # Default the timeout on all 10-second-interval # monitor actions on IPaddr2 resources to 8 seconds. ha_cluster_resource_operation_defaults: meta_attrs: - rule: resource ::IPaddr2 and op monitor interval=10s score: INFINITY attrs: - name: timeout value: 8s The settings specified in the example playbook include the following: ha_cluster_cluster_name: <cluster_name> The name of the cluster you are creating. ha_cluster_hacluster_password: <password> The password of the hacluster user. The hacluster user has full access to a cluster. ha_cluster_manage_firewall: true A variable that determines whether the ha_cluster RHEL system role manages the firewall. ha_cluster_manage_selinux: true A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role. ha_cluster_resource_defaults: <resource_defaults> A variable that defines sets of resource defaults. 
ha_cluster_resource_operation_defaults: <resource_operation_defaults> A variable that defines sets of resource operation defaults. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file /usr/share/doc/rhel-system-roles/ha_cluster/ directory 13.7. Configuring a high availability cluster with fencing levels (RHEL 8.10 and later) When you configure multiple fencing devices for a node, you need to define fencing levels for those devices to determine the order that Pacemaker will use the devices to attempt to fence a node. For information about fencing levels, see Configuring fencing levels . The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that defines fencing levels. Warning The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On. The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role . For general information about creating an inventory file, see Preparing a control node on RHEL 8 . Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: cluster_password: <cluster_password> fence1_password: <fence1_password> fence2_password: <fence2_password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml . This example playbook file configures a cluster running the firewalld and selinux services. --- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Configure a cluster that defines fencing levels ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: "{{ cluster_password }}" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_resource_primitives: - id: apc1 agent: 'stonith:fence_apc_snmp' instance_attrs: - attrs: - name: ip value: apc1.example.com - name: username value: user - name: password value: "{{ fence1_password }}" - name: pcmk_host_map value: node1:1;node2:2 - id: apc2 agent: 'stonith:fence_apc_snmp' instance_attrs: - attrs: - name: ip value: apc2.example.com - name: username value: user - name: password value: "{{ fence2_password }}" - name: pcmk_host_map value: node1:1;node2:2 # Nodes have redundant power supplies, apc1 and apc2. Cluster must # ensure that when attempting to reboot a node, both power # supplies # are turned off before either power supply is turned # back on. 
ha_cluster_stonith_levels: - level: 1 target: node1 resource_ids: - apc1 - apc2 - level: 1 target: node2 resource_ids: - apc1 - apc2 The settings specified in the example playbook include the following: ha_cluster_cluster_name: <cluster_name> The name of the cluster you are creating. ha_cluster_hacluster_password: <password> The password of the hacluster user. The hacluster user has full access to a cluster. ha_cluster_manage_firewall: true A variable that determines whether the ha_cluster RHEL system role manages the firewall. ha_cluster_manage_selinux: true A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role. ha_cluster_resource_primitives: <cluster_resources> A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing ha_cluster_stonith_levels: <stonith_levels> A variable that defines STONITH levels, also known as fencing topology, which configure a cluster to use multiple devices to fence nodes. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file /usr/share/doc/rhel-system-roles/ha_cluster/ directory 13.8. Configuring a high availability cluster with resource constraints When configuring a cluster, you can specify the behavior of the cluster resources to be in line with your application requirements. You can control the behavior of cluster resources by configuring resource constraints. You can define the following categories of resource constraints: Location constraints, which determine which nodes a resource can run on. For information about location constraints, see Determining which nodes a resource can run on . Ordering constraints, which determine the order in which the resources are run. For information about ordering constraints, see Determining the order in which cluster resources are run . Colocation constraints, which specify that the location of one resource depends on the location of another resource. For information about colocation constraints, see Colocating cluster resources . Ticket constraints, which indicate the resources that depend on a particular Booth ticket. For information about Booth ticket constraints, see Multi-site Pacemaker clusters . The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that includes resource location constraints, resource colocation constraints, resource order constraints, and resource ticket constraints. Warning The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On. The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role .
For general information about creating an inventory file, see Preparing a control node on RHEL 8 . Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: cluster_password: <cluster_password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster with resource constraints ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: "{{ cluster_password }}" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true # In order to use constraints, we need resources # the constraints will apply to. ha_cluster_resource_primitives: - id: xvm-fencing agent: 'stonith:fence_xvm' instance_attrs: - attrs: - name: pcmk_host_list value: node1 node2 - id: dummy-1 agent: 'ocf:pacemaker:Dummy' - id: dummy-2 agent: 'ocf:pacemaker:Dummy' - id: dummy-3 agent: 'ocf:pacemaker:Dummy' - id: dummy-4 agent: 'ocf:pacemaker:Dummy' - id: dummy-5 agent: 'ocf:pacemaker:Dummy' - id: dummy-6 agent: 'ocf:pacemaker:Dummy' # location constraints ha_cluster_constraints_location: # resource ID and node name - resource: id: dummy-1 node: node1 options: - name: score value: 20 # resource pattern and node name - resource: pattern: dummy-\d+ node: node1 options: - name: score value: 10 # resource ID and rule - resource: id: dummy-2 rule: '#uname eq node2 and date in_range 2022-01-01 to 2022-02-28' # resource pattern and rule - resource: pattern: dummy-\d+ rule: node-type eq weekend and date-spec weekdays=6-7 # colocation constraints ha_cluster_constraints_colocation: # simple constraint - resource_leader: id: dummy-3 resource_follower: id: dummy-4 options: - name: score value: -5 # set constraint - resource_sets: - resource_ids: - dummy-1 - dummy-2 - resource_ids: - dummy-5 - dummy-6 options: - name: sequential value: "false" options: - name: score value: 20 # order constraints ha_cluster_constraints_order: # simple constraint - resource_first: id: dummy-1 resource_then: id: dummy-6 options: - name: symmetrical value: "false" # set constraint - resource_sets: - resource_ids: - dummy-1 - dummy-2 options: - name: require-all value: "false" - name: sequential value: "false" - resource_ids: - dummy-3 - resource_ids: - dummy-4 - dummy-5 options: - name: sequential value: "false" # ticket constraints ha_cluster_constraints_ticket: # simple constraint - resource: id: dummy-1 ticket: ticket1 options: - name: loss-policy value: stop # set constraint - resource_sets: - resource_ids: - dummy-3 - dummy-4 - dummy-5 ticket: ticket2 options: - name: loss-policy value: fence The settings specified in the example playbook include the following: ha_cluster_cluster_name: <cluster_name> The name of the cluster you are creating. ha_cluster_hacluster_password: <password> The password of the hacluster user. The hacluster user has full access to a cluster. ha_cluster_manage_firewall: true A variable that determines whether the ha_cluster RHEL system role manages the firewall. ha_cluster_manage_selinux: true A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role. 
ha_cluster_resource_primitives: <cluster_resources> A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing ha_cluster_constraints_location: <location_constraints> A variable that defines resource location constraints. ha_cluster_constraints_colocation: <colocation_constraints> A variable that defines resource colocation constraints. ha_cluster_constraints_order: <order_constraints> A variable that defines resource order constraints. ha_cluster_constraints_ticket: <ticket_constraints> A variable that defines Booth ticket constraints. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file /usr/share/doc/rhel-system-roles/ha_cluster/ directory 13.9. Configuring Corosync values in a high availability cluster (RHEL 8.7 and later) The corosync.conf file provides the cluster parameters used by Corosync, the cluster membership and messaging layer that Pacemaker is built on. For your system configuration, you can change some of the default parameters in the corosync.conf file. In general, you should not edit the corosync.conf file directly. You can, however, configure Corosync values by using the ha_cluster RHEL system role. The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that configures Corosync values. Warning The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On. The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role . For general information about creating an inventory file, see Preparing a control node on RHEL 8 . Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: cluster_password: <cluster_password> Save the changes, and close the editor. Ansible encrypts the data in the vault. 
Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster that configures Corosync values ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: "{{ cluster_password }}" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_transport: type: knet options: - name: ip_version value: ipv4-6 - name: link_mode value: active links: - - name: linknumber value: 1 - name: link_priority value: 5 - - name: linknumber value: 0 - name: link_priority value: 10 compression: - name: level value: 5 - name: model value: zlib crypto: - name: cipher value: none - name: hash value: none ha_cluster_totem: options: - name: block_unlisted_ips value: 'yes' - name: send_join value: 0 ha_cluster_quorum: options: - name: auto_tie_breaker value: 1 - name: wait_for_all value: 1 The settings specified in the example playbook include the following: ha_cluster_cluster_name: <cluster_name> The name of the cluster you are creating. ha_cluster_hacluster_password: <password> The password of the hacluster user. The hacluster user has full access to a cluster. ha_cluster_manage_firewall: true A variable that determines whether the ha_cluster RHEL system role manages the firewall. ha_cluster_manage_selinux: true A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role. ha_cluster_transport: <transport_method> A variable that sets the cluster transport method. ha_cluster_totem: <totem_options> A variable that configures Corosync totem options. ha_cluster_quorum: <quorum_options> A variable that configures cluster quorum options. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file /usr/share/doc/rhel-system-roles/ha_cluster/ directory 13.10. Configuring a high availability cluster with SBD node fencing (RHEL 8.7 and later) The following procedure uses the ha_cluster RHEL system role to create a high availability cluster that uses SBD node fencing. Warning The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost. This playbook uses an inventory file that loads a watchdog module (supported in RHEL 8.9 and later) as described in Configuring watchdog and SBD devices in an inventory . Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On. The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role . 
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Create a high availability cluster that uses SBD node fencing hosts: node1 node2 roles: - rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: <password> ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_sbd_enabled: yes ha_cluster_sbd_options: - name: delay-start value: 'no' - name: startmode value: always - name: timeout-action value: 'flush,reboot' - name: watchdog-timeout value: 30 # Suggested optimal values for SBD timeouts: # watchdog-timeout * 2 = msgwait-timeout (set automatically) # msgwait-timeout * 1.2 = stonith-timeout ha_cluster_cluster_properties: - attrs: - name: stonith-timeout value: 72 ha_cluster_resource_primitives: - id: fence_sbd agent: 'stonith:fence_sbd' instance_attrs: - attrs: # taken from host_vars - name: devices value: "{{ ha_cluster.sbd_devices | join(',') }}" - name: pcmk_delay_base value: 30 This example playbook file configures a cluster running the firewalld and selinux services that uses SBD fencing and creates the SBD Stonith resource. When creating your playbook file for production, vault encrypt the password, as described in Encrypting content with Ansible Vault . Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file /usr/share/doc/rhel-system-roles/ha_cluster/ directory 13.11. Configuring a high availability cluster using a quorum device (RHEL 8.8 and later) Your cluster can sustain more node failures than standard quorum rules permit when you configure a separate quorum device. The quorum device acts as a lightweight arbitration device for the cluster. A quorum device is recommended for clusters with an even number of nodes. With two-node clusters, the use of a quorum device can better determine which node survives in a split-brain situation. For information about quorum devices, see Configuring quorum devices . To configure a high availability cluster with a separate quorum device by using the ha_cluster RHEL system role, first set up the quorum device. After setting up the quorum device, you can use the device in any number of clusters. 13.11.1. Configuring a quorum device To configure a quorum device using the ha_cluster RHEL system role, follow the steps in this example procedure. Note that you cannot run a quorum device on a cluster node. Warning The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The system that you will use to run the quorum device has active subscription coverage for RHEL and the RHEL High Availability Add-On. The inventory file specifies the quorum devices as described in Specifying an inventory for the ha_cluster RHEL system role . For general information about creating an inventory file, see Preparing a control node on RHEL 8 . 
Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: cluster_password: <cluster_password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook-qdevice.yml , with the following content: --- - name: Configure a host with a quorum device hosts: nodeQ vars_files: - vault.yml tasks: - name: Create a quorum device for the cluster ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_present: false ha_cluster_hacluster_password: "{{ cluster_password }}" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_qnetd: present: true The settings specified in the example playbook include the following: ha_cluster_cluster_present: false A variable that, if set to false , determines that all cluster configuration will be removed from the target host. ha_cluster_hacluster_password: <password> The password of the hacluster user. The hacluster user has full access to a cluster. ha_cluster_manage_firewall: true A variable that determines whether the ha_cluster RHEL system role manages the firewall. ha_cluster_manage_selinux: true A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role. ha_cluster_qnetd: <quorum_device_options> A variable that configures a qnetd host. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file /usr/share/doc/rhel-system-roles/ha_cluster/ directory 13.11.2. Configuring a cluster to use a quorum device To configure a cluster to use a quorum device, follow the steps in this example procedure. Warning The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On. The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role . For general information about creating an inventory file, see Preparing a control node on RHEL 8 . You have configured a quorum device. Procedure Create a playbook file, for example ~/playbook-cluster-qdevice.yml , with the following content: --- - name: Configure a cluster to use a quorum device hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster that uses a quorum device ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: "{{ cluster_password }}" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_quorum: device: model: net model_options: - name: host value: nodeQ - name: algorithm value: lms The settings specified in the example playbook include the following: ha_cluster_cluster_name: <cluster_name> The name of the cluster you are creating. 
ha_cluster_hacluster_password: <password> The password of the hacluster user. The hacluster user has full access to a cluster. ha_cluster_manage_firewall: true A variable that determines whether the ha_cluster RHEL system role manages the firewall. ha_cluster_manage_selinux: true A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role. ha_cluster_quorum: <quorum_parameters> A variable that configures cluster quorum which you can use to specify that the cluster uses a quorum device. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file /usr/share/doc/rhel-system-roles/ha_cluster/ directory 13.12. Configuring a high availability cluster with node attributes (RHEL 8.10 and later) You can use Pacemaker rules to make your configuration more dynamic. For example, you can use a node attribute to assign machines to different processing groups based on time and then use that attribute when creating location constraints. Node attribute expressions are used to control a resource based on the attributes defined by a node or nodes. For information on node attributes, see Determining resource location with rules . The following example procedure uses the ha_cluster RHEL system role to create a high availability cluster that configures node attributes. Warning The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On. The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role . For general information about creating an inventory file, see Preparing a control node on RHEL 8 . Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: cluster_password: <cluster_password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create a cluster that defines node attributes ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: "{{ cluster_password }}" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_node_options: - node_name: node1 attributes: - attrs: - name: attribute1 value: value1A - name: attribute2 value: value2A - node_name: node2 attributes: - attrs: - name: attribute1 value: value1B - name: attribute2 value: value2B ha_cluster_cluster_name: <cluster_name> The name of the cluster you are creating. ha_cluster_hacluster_password: <password> The password of the hacluster user. The hacluster user has full access to a cluster. 
ha_cluster_manage_firewall: true A variable that determines whether the ha_cluster RHEL system role manages the firewall. ha_cluster_manage_selinux: true A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role. ha_cluster_node_options: <node_settings> A variable that defines various settings that vary from one cluster node to another. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources Pacemaker rules 13.13. Configuring an Apache HTTP server in a high availability cluster with the ha_cluster RHEL system role High availability clusters provide highly available services by eliminating single points of failure and by failing over services from one cluster node to another in case a node becomes inoperative. Red Hat provides a variety of documentation for planning, configuring, and maintaining a Red Hat high availability cluster. For a listing of articles that provide indexes to the various areas of Red Hat cluster documentation, see the Red Hat High Availability Add-On Documentation Guide . The following example use case configures an active/passive Apache HTTP server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster by using the ha_cluster RHEL system role. In this use case, clients access the Apache HTTP server through a floating IP address. The web server runs on one of two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption. This example uses an APC power switch with a host name of zapc.example.com . If the cluster does not use any other fence agents, you can optionally list only the fence agents your cluster requires when defining the ha_cluster_fence_agent_packages variable, as in this example. Warning The ha_cluster RHEL system role replaces any existing cluster configuration on the specified nodes. Any settings not specified in the playbook will be lost. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On. The inventory file specifies the cluster nodes as described in Specifying an inventory for the ha_cluster RHEL system role . For general information about creating an inventory file, see Preparing a control node on RHEL 8 . You have configured an LVM logical volume with an XFS file system, as described in Configuring an LVM volume with an XFS file system in a Pacemaker cluster . You have configured an Apache HTTP server, as described in Configuring an Apache HTTP Server . Your system includes an APC power switch that will be used to fence the cluster nodes. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: cluster_password: <cluster_password> Save the changes, and close the editor. 
Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Create a high availability cluster hosts: z1.example.com z2.example.com vars_files: - vault.yml tasks: - name: Configure active/passive Apache server in a high availability cluster ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_hacluster_password: "{{ cluster_password }}" ha_cluster_cluster_name: my_cluster ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_fence_agent_packages: - fence-agents-apc-snmp ha_cluster_resource_primitives: - id: myapc agent: stonith:fence_apc_snmp instance_attrs: - attrs: - name: ipaddr value: zapc.example.com - name: pcmk_host_map value: z1.example.com:1;z2.example.com:2 - name: login value: apc - name: passwd value: apc - id: my_lvm agent: ocf:heartbeat:LVM-activate instance_attrs: - attrs: - name: vgname value: my_vg - name: vg_access_mode value: system_id - id: my_fs agent: Filesystem instance_attrs: - attrs: - name: device value: /dev/my_vg/my_lv - name: directory value: /var/www - name: fstype value: xfs - id: VirtualIP agent: IPaddr2 instance_attrs: - attrs: - name: ip value: 198.51.100.3 - name: cidr_netmask value: 24 - id: Website agent: apache instance_attrs: - attrs: - name: configfile value: /etc/httpd/conf/httpd.conf - name: statusurl value: http://127.0.0.1/server-status ha_cluster_resource_groups: - id: apachegroup resource_ids: - my_lvm - my_fs - VirtualIP - Website The settings specified in the example playbook include the following: ha_cluster_cluster_name: <cluster_name> The name of the cluster you are creating. ha_cluster_hacluster_password: <password> The password of the hacluster user. The hacluster user has full access to a cluster. ha_cluster_manage_firewall: true A variable that determines whether the ha_cluster RHEL system role manages the firewall. ha_cluster_manage_selinux: true A variable that determines whether the ha_cluster RHEL system role manages the ports of the firewall high availability service using the selinux RHEL system role. ha_cluster_fence_agent_packages: <fence_agent_packages> A list of fence agent packages to install. ha_cluster_resource_primitives: <cluster_resources> A list of resource definitions for the Pacemaker resources configured by the ha_cluster RHEL system role, including fencing ha_cluster_resource_groups: <resource_groups> A list of resource group definitions configured by the ha_cluster RHEL system role. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: When you use the apache resource agent to manage Apache, it does not use systemd . Because of this, you must edit the logrotate script supplied with Apache so that it does not use systemctl to reload Apache. Remove the following line in the /etc/logrotate.d/httpd file on each node in the cluster. # /bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true For RHEL 8.6 and later, replace the line you removed with the following three lines, specifying /var/run/httpd- website .pid as the PID file path where website is the name of the Apache resource. In this example, the Apache resource name is Website . 
/usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null && /usr/bin/ps -q $(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /var/run/httpd-Website.pid" -k graceful > /dev/null 2>/dev/null || true For RHEL 8.5 and earlier, replace the line you removed with the following three lines. /usr/bin/test -f /run/httpd.pid >/dev/null 2>/dev/null && /usr/bin/ps -q $(/usr/bin/cat /run/httpd.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c "PidFile /run/httpd.pid" -k graceful > /dev/null 2>/dev/null || true Verification From one of the nodes in the cluster, check the status of the cluster. Note that all four resources are running on the same node, z1.example.com . If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration. Once the cluster is up and running, you can point a browser to the IP address you defined as the IPaddr2 resource to view the sample display, consisting of the simple word "Hello". To test whether the resource group running on z1.example.com fails over to node z2.example.com , put node z1.example.com in standby mode, after which the node will no longer be able to host resources. After putting node z1 in standby mode, check the cluster status from one of the nodes in the cluster. Note that the resources should now all be running on z2 . The web site at the defined IP address should still display, without interruption. To remove z1 from standby mode, enter the following command. Note Removing a node from standby mode does not in itself cause the resources to fail back over to that node. This will depend on the resource-stickiness value for the resources. For information about the resource-stickiness meta attribute, see Configuring a resource to prefer its current node . Additional resources /usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file /usr/share/doc/rhel-system-roles/ha_cluster/ directory | [
"- hosts: node1 node2 vars: ha_cluster_cluster_present: false roles: - rhel-system-roles.ha_cluster",
"ha_cluster_pcs_permission_list: - type: group name: hacluster allow_list: - grant - read - write",
"ha_cluster_transport: type: knet options: - name: option1_name value: option1_value - name: option2_name value: option2_value links: - - name: option1_name value: option1_value - name: option2_name value: option2_value - - name: option1_name value: option1_value - name: option2_name value: option2_value compression: - name: option1_name value: option1_value - name: option2_name value: option2_value crypto: - name: option1_name value: option1_value - name: option2_name value: option2_value",
"ha_cluster_totem: options: - name: option1_name value: option1_value - name: option2_name value: option2_value",
"ha_cluster_quorum: options: - name: option1_name value: option1_value - name: option2_name value: option2_value device: model: string model_options: - name: option1_name value: option1_value - name: option2_name value: option2_value generic_options: - name: option1_name value: option1_value - name: option2_name value: option2_value heuristics_options: - name: option1_name value: option1_value - name: option2_name value: option2_value",
"ha_cluster_cluster_properties: - attrs: - name: property1_name value: property1_value - name: property2_name value: property2_value",
"- hosts: node1 node2 vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: password ha_cluster_cluster_properties: - attrs: - name: stonith-enabled value: 'true' - name: no-quorum-policy value: stop roles: - rhel-system-roles.ha_cluster",
"ha_cluster_node_options: - node_name: node1 attributes: - attrs: - name: attribute1 value: value1_node1 - name: attribute2 value: value2_node1 - node_name: node2 attributes: - attrs: - name: attribute1 value: value1_node2 - name: attribute2 value: value2_node2",
"- id: resource-id agent: resource-agent instance_attrs: - attrs: - name: attribute1_name value: attribute1_value - name: attribute2_name value: attribute2_value meta_attrs: - attrs: - name: meta_attribute1_name value: meta_attribute1_value - name: meta_attribute2_name value: meta_attribute2_value copy_operations_from_agent: bool operations: - action: operation1-action attrs: - name: operation1_attribute1_name value: operation1_attribute1_value - name: operation1_attribute2_name value: operation1_attribute2_value - action: operation2-action attrs: - name: operation2_attribute1_name value: operation2_attribute1_value - name: operation2_attribute2_name value: operation2_attribute2_value",
"ha_cluster_resource_groups: - id: group-id resource_ids: - resource1-id - resource2-id meta_attrs: - attrs: - name: group_meta_attribute1_name value: group_meta_attribute1_value - name: group_meta_attribute2_name value: group_meta_attribute2_value",
"ha_cluster_resource_clones: - resource_id: resource-to-be-cloned promotable: true id: custom-clone-id meta_attrs: - attrs: - name: clone_meta_attribute1_name value: clone_meta_attribute1_value - name: clone_meta_attribute2_name value: clone_meta_attribute2_value",
"ha_cluster_resource_defaults: meta_attrs: - id: defaults-set-1-id rule: rule-string score: score-value attrs: - name: meta_attribute1_name value: meta_attribute1_value - name: meta_attribute2_name value: meta_attribute2_value - id: defaults-set-2-id rule: rule-string score: score-value attrs: - name: meta_attribute3_name value: meta_attribute3_value - name: meta_attribute4_name value: meta_attribute4_value",
"ha_cluster_stonith_levels: - level: 1..9 target: node_name target_pattern: node_name_regular_expression target_attribute: node_attribute_name target_value: node_attribute_value resource_ids: - fence_device_1 - fence_device_2 - level: 1..9 target: node_name target_pattern: node_name_regular_expression target_attribute: node_attribute_name target_value: node_attribute_value resource_ids: - fence_device_1 - fence_device_2",
"ha_cluster_constraints_location: - resource: id: resource-id node: node-name id: constraint-id options: - name: score value: score-value - name: option-name value: option-value",
"ha_cluster_constraints_location: - resource: pattern: resource-pattern node: node-name id: constraint-id options: - name: score value: score-value - name: resource-discovery value: resource-discovery-value",
"ha_cluster_constraints_location: - resource: id: resource-id role: resource-role rule: rule-string id: constraint-id options: - name: score value: score-value - name: resource-discovery value: resource-discovery-value",
"ha_cluster_constraints_location: - resource: pattern: resource-pattern role: resource-role rule: rule-string id: constraint-id options: - name: score value: score-value - name: resource-discovery value: resource-discovery-value",
"ha_cluster_constraints_colocation: - resource_follower: id: resource-id1 role: resource-role1 resource_leader: id: resource-id2 role: resource-role2 id: constraint-id options: - name: score value: score-value - name: option-name value: option-value",
"ha_cluster_constraints_colocation: - resource_sets: - resource_ids: - resource-id1 - resource-id2 options: - name: option-name value: option-value id: constraint-id options: - name: score value: score-value - name: option-name value: option-value",
"ha_cluster_constraints_order: - resource_first: id: resource-id1 action: resource-action1 resource_then: id: resource-id2 action: resource-action2 id: constraint-id options: - name: score value: score-value - name: option-name value: option-value",
"ha_cluster_constraints_order: - resource_sets: - resource_ids: - resource-id1 - resource-id2 options: - name: option-name value: option-value id: constraint-id options: - name: score value: score-value - name: option-name value: option-value",
"ha_cluster_constraints_ticket: - resource: id: resource-id role: resource-role ticket: ticket-name id: constraint-id options: - name: loss-policy value: loss-policy-value - name: option-name value: option-value",
"ha_cluster_constraints_ticket: - resource_sets: - resource_ids: - resource-id1 - resource-id2 options: - name: option-name value: option-value ticket: ticket-name id: constraint-id options: - name: option-name value: option-value",
"all: hosts: node1: ha_cluster: node_name: node-A pcs_address: node1-address corosync_addresses: - 192.168.1.11 - 192.168.2.11 node2: ha_cluster: node_name: node-B pcs_address: node2-address:2224 corosync_addresses: - 192.168.1.12 - 192.168.2.12",
"all: hosts: node1: ha_cluster: sbd_watchdog_modules: - module1 - module2 sbd_watchdog: /dev/watchdog2 sbd_devices: - /dev/disk/by-id/000001 - /dev/disk/by-id/000001 - /dev/disk/by-id/000003 node2: ha_cluster: sbd_watchdog_modules: - module1 sbd_watchdog_modules_blocklist: - module2 sbd_watchdog: /dev/watchdog1 sbd_devices: - /dev/disk/by-id/000001 - /dev/disk/by-id/000002 - /dev/disk/by-id/000003",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create TLS certificates and key files in a high availability cluster ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_pcsd_certificates: - name: FILENAME common_name: \"{{ ansible_hostname }}\" ca: self-sign",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster with minimum required parameters and no fencing ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster with fencing and resources ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_resource_primitives: - id: xvm-fencing agent: 'stonith:fence_xvm' instance_attrs: - attrs: - name: pcmk_host_list value: node1 node2 - id: simple-resource agent: 'ocf:pacemaker:Dummy' - id: resource-with-options agent: 'ocf:pacemaker:Dummy' instance_attrs: - attrs: - name: fake value: fake-value - name: passwd value: passwd-value meta_attrs: - attrs: - name: target-role value: Started - name: is-managed value: 'true' operations: - action: start attrs: - name: timeout value: '30s' - action: monitor attrs: - name: timeout value: '5' - name: interval value: '1min' - id: dummy-1 agent: 'ocf:pacemaker:Dummy' - id: dummy-2 agent: 'ocf:pacemaker:Dummy' - id: dummy-3 agent: 'ocf:pacemaker:Dummy' - id: simple-clone agent: 'ocf:pacemaker:Dummy' - id: clone-with-options agent: 'ocf:pacemaker:Dummy' ha_cluster_resource_groups: - id: simple-group resource_ids: - dummy-1 - dummy-2 meta_attrs: - attrs: - name: target-role value: Started - name: is-managed value: 'true' - id: cloned-group resource_ids: - dummy-3 ha_cluster_resource_clones: - resource_id: simple-clone - resource_id: clone-with-options promotable: yes id: custom-clone-id meta_attrs: - attrs: - name: clone-max value: '2' - name: clone-node-max value: '1' - resource_id: cloned-group promotable: yes",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster with fencing and resource operation defaults ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true # Set a different resource-stickiness value during # and outside work hours. This allows resources to # automatically move back to their most # preferred hosts, but at a time that # does not interfere with business activities. ha_cluster_resource_defaults: meta_attrs: - id: core-hours rule: date-spec hours=9-16 weekdays=1-5 score: 2 attrs: - name: resource-stickiness value: INFINITY - id: after-hours score: 1 attrs: - name: resource-stickiness value: 0 # Default the timeout on all 10-second-interval # monitor actions on IPaddr2 resources to 8 seconds. ha_cluster_resource_operation_defaults: meta_attrs: - rule: resource ::IPaddr2 and op monitor interval=10s score: INFINITY attrs: - name: timeout value: 8s",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password> fence1_password: <fence1_password> fence2_password: <fence2_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Configure a cluster that defines fencing levels ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_resource_primitives: - id: apc1 agent: 'stonith:fence_apc_snmp' instance_attrs: - attrs: - name: ip value: apc1.example.com - name: username value: user - name: password value: \"{{ fence1_password }}\" - name: pcmk_host_map value: node1:1;node2:2 - id: apc2 agent: 'stonith:fence_apc_snmp' instance_attrs: - attrs: - name: ip value: apc2.example.com - name: username value: user - name: password value: \"{{ fence2_password }}\" - name: pcmk_host_map value: node1:1;node2:2 # Nodes have redundant power supplies, apc1 and apc2. Cluster must # ensure that when attempting to reboot a node, both power # supplies # are turned off before either power supply is turned # back on. ha_cluster_stonith_levels: - level: 1 target: node1 resource_ids: - apc1 - apc2 - level: 1 target: node2 resource_ids: - apc1 - apc2",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster with resource constraints ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true # In order to use constraints, we need resources # the constraints will apply to. ha_cluster_resource_primitives: - id: xvm-fencing agent: 'stonith:fence_xvm' instance_attrs: - attrs: - name: pcmk_host_list value: node1 node2 - id: dummy-1 agent: 'ocf:pacemaker:Dummy' - id: dummy-2 agent: 'ocf:pacemaker:Dummy' - id: dummy-3 agent: 'ocf:pacemaker:Dummy' - id: dummy-4 agent: 'ocf:pacemaker:Dummy' - id: dummy-5 agent: 'ocf:pacemaker:Dummy' - id: dummy-6 agent: 'ocf:pacemaker:Dummy' # location constraints ha_cluster_constraints_location: # resource ID and node name - resource: id: dummy-1 node: node1 options: - name: score value: 20 # resource pattern and node name - resource: pattern: dummy-\\d+ node: node1 options: - name: score value: 10 # resource ID and rule - resource: id: dummy-2 rule: '#uname eq node2 and date in_range 2022-01-01 to 2022-02-28' # resource pattern and rule - resource: pattern: dummy-\\d+ rule: node-type eq weekend and date-spec weekdays=6-7 # colocation constraints ha_cluster_constraints_colocation: # simple constraint - resource_leader: id: dummy-3 resource_follower: id: dummy-4 options: - name: score value: -5 # set constraint - resource_sets: - resource_ids: - dummy-1 - dummy-2 - resource_ids: - dummy-5 - dummy-6 options: - name: sequential value: \"false\" options: - name: score value: 20 # order constraints ha_cluster_constraints_order: # simple constraint - resource_first: id: dummy-1 resource_then: id: dummy-6 options: - name: symmetrical value: \"false\" # set constraint - resource_sets: - resource_ids: - dummy-1 - dummy-2 options: - name: require-all value: \"false\" - name: sequential value: \"false\" - resource_ids: - dummy-3 - resource_ids: - dummy-4 - dummy-5 options: - name: sequential value: \"false\" # ticket constraints ha_cluster_constraints_ticket: # simple constraint - resource: id: dummy-1 ticket: ticket1 options: - name: loss-policy value: stop # set constraint - resource_sets: - resource_ids: - dummy-3 - dummy-4 - dummy-5 ticket: ticket2 options: - name: loss-policy value: fence",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster that configures Corosync values ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_transport: type: knet options: - name: ip_version value: ipv4-6 - name: link_mode value: active links: - - name: linknumber value: 1 - name: link_priority value: 5 - - name: linknumber value: 0 - name: link_priority value: 10 compression: - name: level value: 5 - name: model value: zlib crypto: - name: cipher value: none - name: hash value: none ha_cluster_totem: options: - name: block_unlisted_ips value: 'yes' - name: send_join value: 0 ha_cluster_quorum: options: - name: auto_tie_breaker value: 1 - name: wait_for_all value: 1",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"--- - name: Create a high availability cluster that uses SBD node fencing hosts: node1 node2 roles: - rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: <password> ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_sbd_enabled: yes ha_cluster_sbd_options: - name: delay-start value: 'no' - name: startmode value: always - name: timeout-action value: 'flush,reboot' - name: watchdog-timeout value: 30 # Suggested optimal values for SBD timeouts: # watchdog-timeout * 2 = msgwait-timeout (set automatically) # msgwait-timeout * 1.2 = stonith-timeout ha_cluster_cluster_properties: - attrs: - name: stonith-timeout value: 72 ha_cluster_resource_primitives: - id: fence_sbd agent: 'stonith:fence_sbd' instance_attrs: - attrs: # taken from host_vars - name: devices value: \"{{ ha_cluster.sbd_devices | join(',') }}\" - name: pcmk_delay_base value: 30",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Configure a host with a quorum device hosts: nodeQ vars_files: - vault.yml tasks: - name: Create a quorum device for the cluster ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_present: false ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_qnetd: present: true",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook-qdevice.yml",
"ansible-playbook --ask-vault-pass ~/playbook-qdevice.yml",
"--- - name: Configure a cluster to use a quorum device hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create cluster that uses a quorum device ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_quorum: device: model: net model_options: - name: host value: nodeQ - name: algorithm value: lms",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook-cluster-qdevice.yml",
"ansible-playbook --ask-vault-pass ~/playbook-cluster-qdevice.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: node1 node2 vars_files: - vault.yml tasks: - name: Create a cluster that defines node attributes ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_cluster_name: my-new-cluster ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_node_options: - node_name: node1 attributes: - attrs: - name: attribute1 value: value1A - name: attribute2 value: value2A - node_name: node2 attributes: - attrs: - name: attribute1 value: value1B - name: attribute2 value: value2B",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"cluster_password: <cluster_password>",
"--- - name: Create a high availability cluster hosts: z1.example.com z2.example.com vars_files: - vault.yml tasks: - name: Configure active/passive Apache server in a high availability cluster ansible.builtin.include_role: name: rhel-system-roles.ha_cluster vars: ha_cluster_hacluster_password: \"{{ cluster_password }}\" ha_cluster_cluster_name: my_cluster ha_cluster_manage_firewall: true ha_cluster_manage_selinux: true ha_cluster_fence_agent_packages: - fence-agents-apc-snmp ha_cluster_resource_primitives: - id: myapc agent: stonith:fence_apc_snmp instance_attrs: - attrs: - name: ipaddr value: zapc.example.com - name: pcmk_host_map value: z1.example.com:1;z2.example.com:2 - name: login value: apc - name: passwd value: apc - id: my_lvm agent: ocf:heartbeat:LVM-activate instance_attrs: - attrs: - name: vgname value: my_vg - name: vg_access_mode value: system_id - id: my_fs agent: Filesystem instance_attrs: - attrs: - name: device value: /dev/my_vg/my_lv - name: directory value: /var/www - name: fstype value: xfs - id: VirtualIP agent: IPaddr2 instance_attrs: - attrs: - name: ip value: 198.51.100.3 - name: cidr_netmask value: 24 - id: Website agent: apache instance_attrs: - attrs: - name: configfile value: /etc/httpd/conf/httpd.conf - name: statusurl value: http://127.0.0.1/server-status ha_cluster_resource_groups: - id: apachegroup resource_ids: - my_lvm - my_fs - VirtualIP - Website",
"ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true",
"/usr/bin/test -f /var/run/httpd-Website.pid >/dev/null 2>/dev/null && /usr/bin/ps -q USD(/usr/bin/cat /var/run/httpd-Website.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c \"PidFile /var/run/httpd-Website.pid\" -k graceful > /dev/null 2>/dev/null || true",
"/usr/bin/test -f /run/httpd.pid >/dev/null 2>/dev/null && /usr/bin/ps -q USD(/usr/bin/cat /run/httpd.pid) >/dev/null 2>/dev/null && /usr/sbin/httpd -f /etc/httpd/conf/httpd.conf -c \"PidFile /run/httpd.pid\" -k graceful > /dev/null 2>/dev/null || true",
"pcs status Cluster name: my_cluster Last updated: Wed Jul 31 16:38:51 2013 Last change: Wed Jul 31 16:42:14 2013 via crm_attribute on z1.example.com Stack: corosync Current DC: z2.example.com (2) - partition with quorum Version: 1.1.10-5.el7-9abe687 2 Nodes configured 6 Resources configured Online: [ z1.example.com z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM-activate): Started z1.example.com my_fs (ocf::heartbeat:Filesystem): Started z1.example.com VirtualIP (ocf::heartbeat:IPaddr2): Started z1.example.com Website (ocf::heartbeat:apache): Started z1.example.com",
"Hello",
"pcs node standby z1.example.com",
"pcs status Cluster name: my_cluster Last updated: Wed Jul 31 17:16:17 2013 Last change: Wed Jul 31 17:18:34 2013 via crm_attribute on z1.example.com Stack: corosync Current DC: z2.example.com (2) - partition with quorum Version: 1.1.10-5.el7-9abe687 2 Nodes configured 6 Resources configured Node z1.example.com (1): standby Online: [ z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM-activate): Started z2.example.com my_fs (ocf::heartbeat:Filesystem): Started z2.example.com VirtualIP (ocf::heartbeat:IPaddr2): Started z2.example.com Website (ocf::heartbeat:apache): Started z2.example.com",
"pcs node unstandby z1.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automating_system_administration_by_using_rhel_system_roles/configuring-a-high-availability-cluster-by-using-the-ha-cluster-rhel-system-role_automating-system-administration-by-using-rhel-system-roles |
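A hedged verification sketch, not part of the role's documented steps: after the two playbooks above have run, the quorum device configuration can usually be inspected on one of the cluster nodes with the pcs commands below (the pcs packages are installed by the ha_cluster role).

```
# Run on node1 or node2.
# Show the configured quorum device (model net, host nodeQ, algorithm lms).
pcs quorum config

# Show runtime quorum status, including the quorum device vote.
pcs quorum status
```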
8.5. Viewing Tokens | 8.5. Viewing Tokens To view a list of the tokens currently installed for a Certificate System instance, use the modutil utility. Change to the instance alias directory. For example: Show the information about the installed PKCS #11 modules as well as information on the corresponding tokens using the modutil tool. | [
"cd /var/lib/pki/pki-tomcat/alias",
"modutil -dbdir . -nocertdb -list"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/Viewing_Tokens |
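As an illustrative follow-up, and assuming the NSS certutil tool is available alongside modutil, the certificates held on a particular token can typically be listed from the same alias directory. The token name below is a placeholder; use a name reported by the modutil listing above.

```
cd /var/lib/pki/pki-tomcat/alias
# -h selects the token whose certificates should be listed
certutil -d . -L -h "NSS Certificate DB"
```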
14.6. Samba Network Browsing | 14.6. Samba Network Browsing Network browsing is a concept that enables Windows and Samba servers to appear in the Windows Network Neighborhood . Inside the Network Neighborhood , servers are represented as icons and, if opened, the server's available shares and printers are displayed. Network browsing capabilities require NetBIOS over TCP/IP. NetBIOS-based networking uses broadcast (UDP) messaging to accomplish browse list management. Without NetBIOS and WINS as the primary method for TCP/IP hostname resolution, other methods such as static files ( /etc/hosts ) or DNS must be used. A domain master browser collates the browse lists from local master browsers on all subnets so that browsing can occur between workgroups and subnets. Also, the domain master browser should preferably be the local master browser for its own subnet. 14.6.1. Workgroup Browsing For each workgroup, there must be one and only one domain master browser. You can have one local master browser per subnet without a domain master browser, but this results in isolated workgroups unable to see each other. To resolve NetBIOS names in cross-subnet workgroups, WINS is required. Note The Domain Master Browser can be the same machine as the WINS server. There can only be one domain master browser per workgroup name. Here is an example of the smb.conf file in which the Samba server is a domain master browser: Next is an example of the smb.conf file in which the Samba server is a local master browser: The os level directive operates as a priority system for master browsers in a subnet. Setting different values ensures master browsers do not conflict with each other for authority. Note Lowering the os level directive results in Samba conflicting with other master browsers on the same subnet. The higher the value, the higher the priority. The highest a Windows server can operate at is 32. This is a good way of tuning multiple local master browsers. There are instances when a Windows NT machine on the subnet could be the local master browser. The following is an example smb.conf configuration in which the Samba server is not serving in any browsing capacity: Warning Having multiple local master browsers results in each server competing for browsing election requests. Make sure there is only one local master browser per subnet. | [
"[global] domain master = Yes local master = Yes preferred master = Yes os level = 35",
"[global] domain master = no local master = Yes preferred master = Yes os level = 35",
"[global] domain master = no local master = no preferred master = no os level = 0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-samba-network-browsing |
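As a hedged verification sketch (not part of the original section), browse behavior can be checked from any Samba client host; the workgroup and server names below are placeholders.

```
# Ask which host currently holds the master browser role for the workgroup
nmblookup -M MYGROUP

# List the shares the Samba server advertises, without authenticating
smbclient -L sambaserver -N
```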
17.9. Managing a Virtual Network | 17.9. Managing a Virtual Network To configure a virtual network on your system: From the Edit menu, select Connection Details . This will open the Connection Details menu. Click the Virtual Networks tab. Figure 17.10. Virtual network configuration All available virtual networks are listed on the left of the menu. You can edit the configuration of a virtual network by selecting it from this box and editing as you see fit. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Virtual_Networking-Managing_a_virtual_network |
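The same virtual networks can also be inspected and edited from the command line with virsh, which is useful on headless hosts. This is an illustrative sketch rather than part of the graphical procedure above; the network name default is an assumption.

```
# List all defined virtual networks, active or inactive
virsh net-list --all

# Show details for one network, then open its XML definition for editing
virsh net-info default
virsh net-edit default
```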
Chapter 4. Security and Authentication of HawtIO | Chapter 4. Security and Authentication of HawtIO HawtIO enables authentication out of the box depending on the runtimes/containers it runs with. To use HawtIO with your application, either setting up authentication for the runtime or disabling HawtIO authentication is necessary. 4.1. Configuration properties The following table lists the Security-related configuration properties for the HawtIO core system. Name Default Description hawtio.authenticationContainerDiscoveryClasses io.hawt.web.tomcat.TomcatAuthenticationContainerDiscovery List of used AuthenticationContainerDiscovery implementations separated by a comma. By default, there is just TomcatAuthenticationContainerDiscovery, which is used to authenticate users on Tomcat from tomcat-users.xml file. Feel free to remove it if you want to authenticate users on Tomcat from the configured JAAS login module or feel free to add more classes of your own. hawtio.authenticationContainerTomcatDigestAlgorithm NONE When using the Tomcat tomcat-users.xml file, passwords can be hashed instead of plain text. Use this to specify the digest algorithm; valid values are NONE, MD5, SHA, SHA-256, SHA-384, and SHA-512. hawtio.authenticationEnabled true Whether or not security is enabled. hawtio.keycloakClientConfig classpath:keycloak.json Keycloak configuration file used for the front end. It is mandatory if Keycloak integration is enabled. hawtio.keycloakEnabled false Whether to enable or disable Keycloak integration. hawtio.noCredentials401 false Whether to return HTTP status 401 when authentication is enabled, but no credentials have been provided. Returning 401 will cause the browser popup window to prompt for credentials. By default this option is false, returning HTTP status 403 instead. hawtio.realm hawtio The security realm used to log in. hawtio.rolePrincipalClasses Fully qualified principal class name(s). A comma can separate multiple classes. hawtio.roles Admin, manager, viewer The user roles are required to log in to the console. A comma can separate multiple roles to allow. Set to * or an empty value to disable role checking when HawtIO authenticates a user. hawtio.tomcatUserFileLocation conf/tomcat-users.xml Specify an alternative location for the tomcat-users.xml file, e.g. /production/userlocation/. 4.2. Quarkus HawtIO is secured with the authentication mechanisms that Quarkus and also Keycloak provide. If you want to disable HawtIO authentication for Quarkus, add the following configuration to application.properties : quarkus.hawtio.authenticationEnabled = false 4.2.1. Quarkus authentication mechanisms HawtIO is just a web application in terms of Quarkus, so the various mechanisms Quarkus provides are used to authenticate HawtIO in the same way it authenticates a Web application. Here we show how you can use the properties-based authentication with HawtIO for demonstrating purposes. Important The properties-based authentication is not recommended for use in production. This mechanism is for development and testing purposes only. To use the properties-based authentication with HawtIO, add the following dependency to pom.xml : <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-elytron-security-properties-file</artifactId> </dependency> You can then define users in application.properties to enable the authentication. For example, defining a user hawtio with password s3cr3t! 
and role admin would look like the following: quarkus.security.users.embedded.enabled = true quarkus.security.users.embedded.plain-text = true quarkus.security.users.embedded.users.hawtio = s3cr3t! quarkus.security.users.embedded.roles.hawtio = admin Example: See Quarkus example for a working example of the properties-based authentication. 4.2.2. Quarkus with Keycloak See Keycloak Integration - Quarkus . 4.3. Spring Boot In addition to the standard JAAS authentication, HawtIO on Spring Boot can be secured through Spring Security or Keycloak . If you want to disable HawtIO authentication for Spring Boot, add the following configuration to application.properties : hawtio.authenticationEnabled = false 4.3.1. Spring Security To use Spring Security with HawtIO: Add org.springframework.boot:spring-boot-starter-security to the dependencies in pom.xml : <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-security</artifactId> </dependency> Spring Security configuration in src/main/resources/application.properties should look like the following: spring.security.user.name = hawtio spring.security.user.password = s3cr3t! spring.security.user.roles = admin,viewer A security config class has to be defined to set up how to secure the application with Spring Security: @EnableWebSecurity public class SecurityConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http.authorizeRequests().anyRequest().authenticated() .and() .formLogin() .and() .httpBasic() .and() .csrf().csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse()); return http.build(); } } Example: See springboot-security example for a working example. 4.3.1.1. Connecting to a remote application with Spring Security If you try to connect to a remote Spring Boot application with Spring Security enabled, make sure the Spring Security configuration allows access from the HawtIO console. Most likely, the default CSRF protection prohibits remote access to the Jolokia endpoint and thus causes authentication failures at the HawtIO console. Warning Be aware that it will expose your application to the risk of CSRF attacks. The easiest solution is to disable CSRF protection for the Jolokia endpoint at the remote application as follows. import org.springframework.boot.actuate.autoconfigure.jolokia.JolokiaEndpoint; import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest; @EnableWebSecurity public class SecurityConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { ... // Disable CSRF protection for the Jolokia endpoint http.csrf().ignoringRequestMatchers(EndpointRequest.to(JolokiaEndpoint.class)); return http.build(); } } To secure the Jolokia endpoint even without Spring Security's CSRF protection, you need to provide a jolokia-access.xml file under src/main/resources/ like the following (snippet) so that only trusted nodes can access it: <restrict> ... <cors> <allow-origin>http*://localhost:*</allow-origin> <allow-origin>http*://127.0.0.1:*</allow-origin> <allow-origin>http*://*.example.com</allow-origin> <allow-origin>http*://*.example.com:*</allow-origin> <strict-checking /> </cors> </restrict> 4.3.2. Spring Boot with Keycloak See Keycloak Integration - Spring Boot . | [
"quarkus.hawtio.authenticationEnabled = false",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-elytron-security-properties-file</artifactId> </dependency>",
"quarkus.security.users.embedded.enabled = true quarkus.security.users.embedded.plain-text = true quarkus.security.users.embedded.users.hawtio = s3cr3t! quarkus.security.users.embedded.roles.hawtio = admin",
"hawtio.authenticationEnabled = false",
"<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-security</artifactId> </dependency>",
"spring.security.user.name = hawtio spring.security.user.password = s3cr3t! spring.security.user.roles = admin,viewer",
"@EnableWebSecurity public class SecurityConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http.authorizeRequests().anyRequest().authenticated() .and() .formLogin() .and() .httpBasic() .and() .csrf().csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse()); return http.build(); } }",
"import org.springframework.boot.actuate.autoconfigure.jolokia.JolokiaEndpoint; import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest; @EnableWebSecurity public class SecurityConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { // Disable CSRF protection for the Jolokia endpoint http.csrf().ignoringRequestMatchers(EndpointRequest.to(JolokiaEndpoint.class)); return http.build(); } }",
"<restrict> <cors> <allow-origin>http*://localhost:*</allow-origin> <allow-origin>http*://127.0.0.1:*</allow-origin> <allow-origin>http*://*.example.com</allow-origin> <allow-origin>http*://*.example.com:*</allow-origin> <strict-checking /> </cors> </restrict>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/hawtio_diagnostic_console_guide/security-and-authentication-of-hawtio |
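For Spring Boot, the hawtio.roles property from the configuration table above can be combined with the Spring Security user definition so that only users holding an allowed role can log in. This is a hedged sketch assembled from the properties shown earlier; treat the exact values as assumptions for your own setup.

```
# src/main/resources/application.properties
hawtio.authenticationEnabled = true
# restrict console login to the admin role (see the hawtio.roles property above)
hawtio.roles = admin
spring.security.user.name = hawtio
spring.security.user.password = s3cr3t!
spring.security.user.roles = admin
```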
13.9. Single-application Mode | 13.9. Single-application Mode Single-application mode is a modified shell which reconfigures the shell into an interactive kiosk. The administrator locks down some behavior to make the standard desktop more restrictive for the user, letting them focus on selected features. Set up single-application mode for a wide range of functions in a number of fields (from communication to entertainment or education) and use it as a self-serve machine, event manager, registration point, etc. Procedure 13.9. Set Up Single-application Mode Create the following files with the following content: /usr/bin/redhat-kiosk Important The /usr/bin/redhat-kiosk file must be executable. Replace the gedit ~/.local/bin/redhat-kiosk code with the commands that you want to execute in the kiosk session. This example launches a full-screen application designed for the kiosk deployment, available at http://mine-kios-web-app : /usr/share/applications/com.redhat.Kiosk.Script.desktop /usr/share/applications/com.redhat.Kiosk.WindowManager.desktop /usr/share/gnome-session/sessions/redhat-kiosk.session /usr/share/xsessions/com.redhat.Kiosk.desktop Restart the GDM service: Create a separate user for the kiosk session and select Kiosk as the session type for that user. Figure 13.1. Selecting the kiosk session By starting the Kiosk session, the user launches a full-screen application designed for the kiosk deployment. | [
"#!/bin/sh if [ ! -e ~/.local/bin/redhat-kiosk ]; then mkdir -p ~/.local/bin ~/.config cat > ~/.local/bin/redhat-kiosk << EOF #!/bin/sh This script is located in ~/.local/bin. It's provided as an example script to show how the kiosk session works. At the moment, the script just starts a text editor open to itself, but it should get customized to instead start a full screen application designed for the kiosk deployment. The \"while true\" bit just makes sure the application gets restarted if it dies for whatever reason. while true; do gedit ~/.local/bin/redhat-kiosk done EOF chmod +x ~/.local/bin/redhat-kiosk touch ~/.config/gnome-initial-setup-done fi exec ~/.local/bin/redhat-kiosk \"USD@\"",
"[...] while true; do firefox --kiosk http://mine-kios-web-app done [...]",
"[Desktop Entry] Name=Kiosk Type=Application Exec=redhat-kiosk",
"[Desktop Entry] Type=Application Name=Mutter Comment=Window manager Exec=/usr/bin/mutter Categories=GNOME;GTK;Core; OnlyShowIn=GNOME; NoDisplay=true X-GNOME-Autostart-Phase=DisplayServer X-GNOME-Provides=windowmanager; X-GNOME-Autostart-Notify=true X-GNOME-AutoRestart=false X-GNOME-HiddenUnderSystemd=true",
"[GNOME Session] Name=Kiosk RequiredComponents=com.redhat.Kiosk.WindowManager;com.redhat.Kiosk.Script;",
"[Desktop Entry] Name=Kiosk Comment=Kiosk mode Exec=/usr/bin/gnome-session --session=redhat-kiosk DesktopNames=Red-Hat-Kiosk;GNOME;",
"systemctl restart gdm.service"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/single-application-ode |
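A minimal sketch of creating the dedicated account mentioned in the last step; the user name kiosk-user is an arbitrary choice.

```
# Create a local user that will only ever run the kiosk session
useradd -m kiosk-user
passwd kiosk-user
```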
6.2. IPsec | 6.2. IPsec Red Hat Enterprise Linux supports IPsec for connecting remote hosts and networks to each other using a secure tunnel on a common carrier network such as the Internet. IPsec can be implemented using a host-to-host (one computer workstation to another) or network-to-network (one LAN/WAN to another) configuration. The IPsec implementation in Red Hat Enterprise Linux uses Internet Key Exchange ( IKE ), a protocol defined by the Internet Engineering Task Force ( IETF ) and used for mutual authentication and secure associations between connecting systems. An IPsec connection is split into two logical phases. In phase 1, an IPsec node initializes the connection with the remote node or network. The remote node/network checks the requesting node's credentials and both parties negotiate the authentication method for the connection. On Red Hat Enterprise Linux systems, an IPsec connection uses the pre-shared key method of IPsec node authentication. In a pre-shared key IPsec connection, both hosts must use the same key in order to move to the second phase of the IPsec connection. Phase 2 of the IPsec connection is where the security association ( SA ) is created between IPsec nodes. This phase establishes an SA database with configuration information, such as the encryption method, secret session key exchange parameters, and more. This phase manages the actual IPsec connection between remote nodes and networks. The Red Hat Enterprise Linux implementation of IPsec uses IKE for sharing keys between hosts across the Internet. The racoon keying daemon handles the IKE key distribution and exchange. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s1-vpn-ipsec
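As an illustrative sketch only (the section above is conceptual and prescribes no configuration), a pre-shared key setup for the racoon daemon conventionally looks roughly like the following; the algorithms and the anonymous peer match are assumptions, not values taken from this guide.

```
# /etc/racoon/racoon.conf (sketch)
path pre_shared_key "/etc/racoon/psk.txt";

remote anonymous
{
        exchange_mode main;                 # phase 1 (IKE) negotiation
        proposal {
                encryption_algorithm 3des;
                hash_algorithm sha1;
                authentication_method pre_shared_key;
                dh_group 2;
        }
}

sainfo anonymous
{
        # phase 2: parameters for the security association (SA)
        encryption_algorithm 3des;
        authentication_algorithm hmac_sha1;
        compression_algorithm deflate;
}
```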
Chapter 2. Cluster Observability Operator overview | Chapter 2. Cluster Observability Operator overview The Cluster Observability Operator (COO) is an optional component of the OpenShift Container Platform designed for creating and managing highly customizable monitoring stacks. It enables cluster administrators to automate configuration and management of monitoring needs extensively, offering a more tailored and detailed view of each namespace compared to the default OpenShift Container Platform monitoring system. The COO deploys the following monitoring components: Prometheus - A highly available Prometheus instance capable of sending metrics to an external endpoint by using remote write. Thanos Querier (optional) - Enables querying of Prometheus instances from a central location. Alertmanager (optional) - Provides alert configuration capabilities for different services. UI plugins (optional) - Enhances the observability capabilities with plugins for monitoring, logging, distributed tracing and troubleshooting. Korrel8r (optional) - Provides observability signal correlation, powered by the open source Korrel8r project. 2.1. COO compared to default monitoring stack The COO components function independently of the default in-cluster monitoring stack, which is deployed and managed by the Cluster Monitoring Operator (CMO). Monitoring stacks deployed by the two Operators do not conflict. You can use a COO monitoring stack in addition to the default platform monitoring components deployed by the CMO. The key differences between COO and the default in-cluster monitoring stack are shown in the following table: Feature COO Default monitoring stack Scope and integration Offers comprehensive monitoring and analytics for enterprise-level needs, covering cluster and workload performance. However, it lacks direct integration with OpenShift Container Platform and typically requires an external Grafana instance for dashboards. Limited to core components within the cluster, for example, API server and etcd, and to OpenShift-specific namespaces. There is deep integration into OpenShift Container Platform including console dashboards and alert management in the console. Configuration and customization Broader configuration options including data retention periods, storage methods, and collected data types. The COO can delegate ownership of single configurable fields in custom resources to users by using Server-Side Apply (SSA), which enhances customization. Built-in configurations with limited customization options. Data retention and storage Long-term data retention, supporting historical analysis and capacity planning Shorter data retention times, focusing on short-term monitoring and real-time detection. 2.2. Key advantages of using COO Deploying COO helps you address monitoring requirements that are hard to achieve using the default monitoring stack. 2.2.1. Extensibility You can add more metrics to a COO-deployed monitoring stack, which is not possible with core platform monitoring without losing support. You can receive cluster-specific metrics from core platform monitoring through federation. COO supports advanced monitoring scenarios like trend forecasting and anomaly detection. 2.2.2. Multi-tenancy support You can create monitoring stacks per user namespace. You can deploy multiple stacks per namespace or a single stack for multiple namespaces. COO enables independent configuration of alerts and receivers for different teams. 2.2.3. 
Scalability Supports multiple monitoring stacks on a single cluster. Enables monitoring of large clusters through manual sharding. Addresses cases where metrics exceed the capabilities of a single Prometheus instance. 2.2.4. Flexibility Decoupled from OpenShift Container Platform release cycles. Faster release iterations and rapid response to changing requirements. Independent management of alerting rules. 2.3. Target users for COO COO is ideal for users who need high customizability, scalability, and long-term data retention, especially in complex, multi-tenant enterprise environments. 2.3.1. Enterprise-level users and administrators Enterprise users require in-depth monitoring capabilities for OpenShift Container Platform clusters, including advanced performance analysis, long-term data retention, trend forecasting, and historical analysis. These features help enterprises better understand resource usage, prevent performance issues, and optimize resource allocation. 2.3.2. Operations teams in multi-tenant environments With multi-tenancy support, COO allows different teams to configure monitoring views for their projects and applications, making it suitable for teams with flexible monitoring needs. 2.3.3. Development and operations teams COO provides fine-grained monitoring and customizable observability views for in-depth troubleshooting, anomaly detection, and performance tuning during development and operations. 2.4. Using Server-Side Apply to customize Prometheus resources Server-Side Apply is a feature that enables collaborative management of Kubernetes resources. The control plane tracks how different users and controllers manage fields within a Kubernetes object. It introduces the concept of field managers and tracks ownership of fields. This centralized control provides conflict detection and resolution, and reduces the risk of unintended overwrites. Compared to Client-Side Apply, it is more declarative, and tracks field management instead of last applied state. Server-Side Apply Declarative configuration management by updating a resource's state without needing to delete and recreate it. Field management Users can specify which fields of a resource they want to update, without affecting the other fields. Managed fields Kubernetes stores metadata about who manages each field of an object in the managedFields field within metadata. Conflicts If multiple managers try to modify the same field, a conflict occurs. The applier can choose to overwrite, relinquish control, or share management. Merge strategy Server-Side Apply merges fields based on the actor who manages them. Procedure Add a MonitoringStack resource using the following configuration: Example MonitoringStack object apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: labels: coo: example name: sample-monitoring-stack namespace: coo-demo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: app: demo A Prometheus resource named sample-monitoring-stack is generated in the coo-demo namespace. 
Retrieve the managed fields of the generated Prometheus resource by running the following command: USD oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields Example output managedFields: - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:ownerReferences: k:{"uid":"81da0d9a-61aa-4df3-affc-71015bcbde5a"}: {} f:spec: f:additionalScrapeConfigs: {} f:affinity: f:podAntiAffinity: f:requiredDuringSchedulingIgnoredDuringExecution: {} f:alerting: f:alertmanagers: {} f:arbitraryFSAccessThroughSMs: {} f:logLevel: {} f:podMetadata: f:labels: f:app.kubernetes.io/component: {} f:app.kubernetes.io/part-of: {} f:podMonitorSelector: {} f:replicas: {} f:resources: f:limits: f:cpu: {} f:memory: {} f:requests: f:cpu: {} f:memory: {} f:retention: {} f:ruleSelector: {} f:rules: f:alert: {} f:securityContext: f:fsGroup: {} f:runAsNonRoot: {} f:runAsUser: {} f:serviceAccountName: {} f:serviceMonitorSelector: {} f:thanos: f:baseImage: {} f:resources: {} f:version: {} f:tsdb: {} manager: observability-operator operation: Apply - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:status: .: {} f:availableReplicas: {} f:conditions: .: {} k:{"type":"Available"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} k:{"type":"Reconciled"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} f:paused: {} f:replicas: {} f:shardStatuses: .: {} k:{"shardID":"0"}: .: {} f:availableReplicas: {} f:replicas: {} f:shardID: {} f:unavailableReplicas: {} f:updatedReplicas: {} f:unavailableReplicas: {} f:updatedReplicas: {} manager: PrometheusOperator operation: Update subresource: status Check the metadata.managedFields values, and observe that some fields in metadata and spec are managed by the MonitoringStack resource. Modify a field that is not controlled by the MonitoringStack resource: Change spec.enforcedSampleLimit , which is a field not set by the MonitoringStack resource. Create the file prom-spec-edited.yaml : prom-spec-edited.yaml apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: enforcedSampleLimit: 1000 Apply the YAML by running the following command: USD oc apply -f ./prom-spec-edited.yaml --server-side Note You must use the --server-side flag. 
Get the changed Prometheus object and note that there is one more section in managedFields which has spec.enforcedSampleLimit : USD oc get prometheus -n coo-demo Example output managedFields: 1 - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:spec: f:enforcedSampleLimit: {} 2 manager: kubectl operation: Apply 1 managedFields 2 spec.enforcedSampleLimit Modify a field that is managed by the MonitoringStack resource: Change spec.logLevel , which is a field managed by the MonitoringStack resource, using the following YAML configuration: # changing the logLevel from debug to info apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: logLevel: info 1 1 spec.logLevel has been added Apply the YAML by running the following command: USD oc apply -f ./prom-spec-edited.yaml --server-side Example output error: Apply failed with 1 conflict: conflict with "observability-operator": .spec.logLevel Please review the fields above--they currently have other managers. Here are the ways you can resolve this warning: * If you intend to manage all of these fields, please re-run the apply command with the `--force-conflicts` flag. * If you do not intend to manage all of the fields, please edit your manifest to remove references to the fields that should keep their current managers. * You may co-own fields by updating your manifest to match the existing value; in this case, you'll become the manager if the other manager(s) stop managing the field (remove it from their configuration). See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts Notice that the field spec.logLevel cannot be changed using Server-Side Apply, because it is already managed by observability-operator . Use the --force-conflicts flag to force the change. USD oc apply -f ./prom-spec-edited.yaml --server-side --force-conflicts Example output prometheus.monitoring.rhobs/sample-monitoring-stack serverside-applied With the --force-conflicts flag, the field can be forced to change, but since the same field is also managed by the MonitoringStack resource, the Observability Operator detects the change, and reverts it back to the value set by the MonitoringStack resource. Note Some Prometheus fields generated by the MonitoringStack resource are influenced by the fields in the MonitoringStack spec stanza, for example, logLevel . These can be changed by changing the MonitoringStack spec . To change the logLevel in the Prometheus object, apply the following YAML to change the MonitoringStack resource: apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: sample-monitoring-stack labels: coo: example spec: logLevel: info To confirm that the change has taken place, query for the log level by running the following command: USD oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}' Example output info Note If a new version of an Operator generates a field that was previously generated and controlled by an actor, the value set by the actor will be overridden. For example, you are managing a field enforcedSampleLimit which is not generated by the MonitoringStack resource. If the Observability Operator is upgraded, and the new version of the Operator generates a value for enforcedSampleLimit , this will override the value you have previously set.
The Prometheus object generated by the MonitoringStack resource may contain some fields which are not explicitly set by the monitoring stack. These fields appear because they have default values. Additional resources Kubernetes documentation for Server-Side Apply (SSA) | [
"apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: labels: coo: example name: sample-monitoring-stack namespace: coo-demo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: app: demo",
"oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields",
"managedFields: - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:ownerReferences: k:{\"uid\":\"81da0d9a-61aa-4df3-affc-71015bcbde5a\"}: {} f:spec: f:additionalScrapeConfigs: {} f:affinity: f:podAntiAffinity: f:requiredDuringSchedulingIgnoredDuringExecution: {} f:alerting: f:alertmanagers: {} f:arbitraryFSAccessThroughSMs: {} f:logLevel: {} f:podMetadata: f:labels: f:app.kubernetes.io/component: {} f:app.kubernetes.io/part-of: {} f:podMonitorSelector: {} f:replicas: {} f:resources: f:limits: f:cpu: {} f:memory: {} f:requests: f:cpu: {} f:memory: {} f:retention: {} f:ruleSelector: {} f:rules: f:alert: {} f:securityContext: f:fsGroup: {} f:runAsNonRoot: {} f:runAsUser: {} f:serviceAccountName: {} f:serviceMonitorSelector: {} f:thanos: f:baseImage: {} f:resources: {} f:version: {} f:tsdb: {} manager: observability-operator operation: Apply - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:status: .: {} f:availableReplicas: {} f:conditions: .: {} k:{\"type\":\"Available\"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} k:{\"type\":\"Reconciled\"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} f:paused: {} f:replicas: {} f:shardStatuses: .: {} k:{\"shardID\":\"0\"}: .: {} f:availableReplicas: {} f:replicas: {} f:shardID: {} f:unavailableReplicas: {} f:updatedReplicas: {} f:unavailableReplicas: {} f:updatedReplicas: {} manager: PrometheusOperator operation: Update subresource: status",
"apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: enforcedSampleLimit: 1000",
"oc apply -f ./prom-spec-edited.yaml --server-side",
"oc get prometheus -n coo-demo",
"managedFields: 1 - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:spec: f:enforcedSampleLimit: {} 2 manager: kubectl operation: Apply",
"changing the logLevel from debug to info apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: logLevel: info 1",
"oc apply -f ./prom-spec-edited.yaml --server-side",
"error: Apply failed with 1 conflict: conflict with \"observability-operator\": .spec.logLevel Please review the fields above--they currently have other managers. Here are the ways you can resolve this warning: * If you intend to manage all of these fields, please re-run the apply command with the `--force-conflicts` flag. * If you do not intend to manage all of the fields, please edit your manifest to remove references to the fields that should keep their current managers. * You may co-own fields by updating your manifest to match the existing value; in this case, you'll become the manager if the other manager(s) stop managing the field (remove it from their configuration). See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts",
"oc apply -f ./prom-spec-edited.yaml --server-side --force-conflicts",
"prometheus.monitoring.rhobs/sample-monitoring-stack serverside-applied",
"apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: sample-monitoring-stack labels: coo: example spec: logLevel: info",
"oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}'",
"info"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/cluster_observability_operator/cluster-observability-operator-overview |
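The resourceSelector in the sample-monitoring-stack example above selects monitoring resources labelled app: demo. A hedged sketch of a matching ServiceMonitor follows; the Service label selector, port name, and scrape interval are assumptions for illustration.

```
apiVersion: monitoring.rhobs/v1
kind: ServiceMonitor
metadata:
  name: demo-service-monitor
  namespace: coo-demo
  labels:
    app: demo            # matched by the MonitoringStack resourceSelector
spec:
  selector:
    matchLabels:
      app: demo          # labels on the Service that exposes the metrics
  endpoints:
    - port: metrics      # assumed name of the Service port serving /metrics
      interval: 30s
```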
Chapter 7. Working with containers | Chapter 7. Working with containers 7.1. Understanding Containers The basic units of OpenShift Dedicated applications are called containers . Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. Many application instances can be running in containers on a single host without visibility into each others' processes, files, network, and so on. Typically, each container provides a single service (often called a "micro-service"), such as a web server or a database, though containers can be used for arbitrary workloads. The Linux kernel has been incorporating capabilities for container technologies for years. OpenShift Dedicated and Kubernetes add the ability to orchestrate containers across multi-host installations. 7.1.1. About containers and RHEL kernel memory Due to Red Hat Enterprise Linux (RHEL) behavior, a container on a node with high CPU usage might seem to consume more memory than expected. The higher memory consumption could be caused by the kmem_cache in the RHEL kernel. The RHEL kernel creates a kmem_cache for each cgroup. For added performance, the kmem_cache contains a cpu_cache , and a node cache for any NUMA nodes. These caches all consume kernel memory. The amount of memory stored in those caches is proportional to the number of CPUs that the system uses. As a result, a higher number of CPUs results in a greater amount of kernel memory being held in these caches. Higher amounts of kernel memory in these caches can cause OpenShift Dedicated containers to exceed the configured memory limits, resulting in the container being killed. To avoid losing containers due to kernel memory issues, ensure that the containers request sufficient memory. You can use the following formula to estimate the amount of memory consumed by the kmem_cache , where nproc is the number of processing units available that are reported by the nproc command. The lower limit of container requests should be this value plus the container memory requirements: USD(nproc) X 1/2 MiB 7.1.2. About the container engine and container runtime A container engine is a piece of software that processes user requests, including command line options and image pulls. The container engine uses a container runtime , also called a lower-level container runtime , to run and manage the components required to deploy and operate containers. You likely will not need to interact with the container engine or container runtime. Note The OpenShift Dedicated documentation uses the term container runtime to refer to the lower-level container runtime. Other documentation can refer to the container engine as the container runtime. OpenShift Dedicated uses CRI-O as the container engine and crun or runC as the container runtime. The default container runtime is crun. 7.2. Using Init Containers to perform tasks before a pod is deployed OpenShift Dedicated provides init containers , which are specialized containers that run before application containers and can contain utilities or setup scripts not present in an app image. 7.2.1. Understanding Init Containers You can use an Init Container resource to perform tasks before the rest of a pod is deployed. A pod can have Init Containers in addition to application containers. Init containers allow you to reorganize setup scripts and binding code. 
An Init Container can: Contain and run utilities that are not desirable to include in the app Container image for security reasons. Contain utilities or custom code for setup that is not present in an app image. For example, there is no requirement to make an image FROM another image just to use a tool like sed, awk, python, or dig during setup. Use Linux namespaces so that they have different filesystem views from app containers, such as access to secrets that application containers are not able to access. Each Init Container must complete successfully before the next one is started. So, Init Containers provide an easy way to block or delay the startup of app containers until some set of preconditions are met. For example, the following are some ways you can use Init Containers: Wait for a service to be created with a shell command like: for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1 Register this pod with a remote server from the downward API with a command like: USD curl -X POST http://USDMANAGEMENT_SERVICE_HOST:USDMANAGEMENT_SERVICE_PORT/register -d 'instance=USD()&ip=USD()' Wait for some time before starting the app Container with a command like sleep 60 . Clone a git repository into a volume. Place values into a configuration file and run a template tool to dynamically generate a configuration file for the main app Container. For example, place the POD_IP value in a configuration and generate the main app configuration file using Jinja. See the Kubernetes documentation for more information. 7.2.2. Creating Init Containers The following example outlines a simple pod which has two Init Containers. The first waits for myservice and the second waits for mydb . After both containers complete, the pod begins. Procedure Create the pod for the Init Container: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: myapp-container image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'echo The app is running! && sleep 3600'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: init-mydb image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] Create the pod: USD oc create -f myapp.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s The pod status, Init:0/2 , indicates it is waiting for the two services. Create the myservice service. Create a YAML file similar to the following: kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376 Create the service: USD oc create -f myservice.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s The pod status, Init:1/2 , indicates it is waiting for one service, in this case the mydb service.
Create the mydb service: Create a YAML file similar to the following: kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377 Create the service: USD oc create -f mydb.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m The pod status indicates that it is no longer waiting for the services and is running. 7.3. Using volumes to persist container data Files in a container are ephemeral. As such, when a container crashes or stops, the data is lost. You can use volumes to persist the data used by the containers in a pod. A volume is a directory, accessible to the Containers in a pod, where data is stored for the life of the pod. 7.3.1. Understanding volumes Volumes are mounted file systems available to pods and their containers which may be backed by a number of host-local or network attached storage endpoints. Containers are not persistent by default; on restart, their contents are cleared. To ensure that the file system on the volume contains no errors and, if errors are present, to repair them when possible, OpenShift Dedicated invokes the fsck utility prior to the mount utility. This occurs when either adding a volume or updating an existing volume. The simplest volume type is emptyDir , which is a temporary directory on a single machine. Administrators may also allow you to request a persistent volume that is automatically attached to your pods. Note emptyDir volume storage may be restricted by a quota based on the pod's FSGroup, if the FSGroup parameter is enabled by your cluster administrator. 7.3.2. Working with volumes using the OpenShift Dedicated CLI You can use the CLI command oc set volume to add and remove volumes and volume mounts for any object that has a pod template like replication controllers or deployment configs. You can also list volumes in pods or any object that has a pod template. The oc set volume command uses the following general syntax: USD oc set volume <object_selection> <operation> <mandatory_parameters> <options> Object selection Specify one of the following for the object_selection parameter in the oc set volume command: Table 7.1. Object Selection Syntax Description Example <object_type> <name> Selects <name> of type <object_type> . deploymentConfig registry <object_type> / <name> Selects <name> of type <object_type> . deploymentConfig/registry <object_type> --selector= <object_label_selector> Selects resources of type <object_type> that match the given label selector. deploymentConfig --selector="name=registry" <object_type> --all Selects all resources of type <object_type> . deploymentConfig --all -f or --filename= <file_name> File name, directory, or URL to file to use to edit the resource. -f registry-deployment-config.json Operation Specify --add or --remove for the operation parameter in the oc set volume command. Mandatory parameters Any mandatory parameters are specific to the selected operation and are discussed in later sections. Options Any options are specific to the selected operation and are discussed in later sections. 7.3.3. Listing volumes and volume mounts in a pod You can list volumes and volume mounts in pods or pod templates: Procedure To list volumes: USD oc set volume <object_type>/<name> [options] List volume supported options: Option Description Default --name Name of the volume. -c, --containers Select containers by name. It can also take wildcard '*' that matches any character.
'*' For example: To list all volumes for pod p1 : USD oc set volume pod/p1 To list volume v1 defined on all deployment configs: USD oc set volume dc --all --name=v1 7.3.4. Adding volumes to a pod You can add volumes and volume mounts to a pod. Procedure To add a volume, a volume mount, or both to pod templates: USD oc set volume <object_type>/<name> --add [options] Table 7.2. Supported Options for Adding Volumes Option Description Default --name Name of the volume. Automatically generated, if not specified. -t, --type Name of the volume source. Supported values: emptyDir , hostPath , secret , configmap , persistentVolumeClaim or projected . emptyDir -c, --containers Select containers by name. It can also take wildcard '*' that matches any character. '*' -m, --mount-path Mount path inside the selected containers. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . --path Host path. Mandatory parameter for --type=hostPath . Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . --secret-name Name of the secret. Mandatory parameter for --type=secret . --configmap-name Name of the configmap. Mandatory parameter for --type=configmap . --claim-name Name of the persistent volume claim. Mandatory parameter for --type=persistentVolumeClaim . --source Details of volume source as a JSON string. Recommended if the desired volume source is not supported by --type . -o, --output Display the modified objects instead of updating them on the server. Supported values: json , yaml . --output-version Output the modified objects with the given version. api-version For example: To add a new volume source emptyDir to the registry DeploymentConfig object: USD oc set volume dc/registry --add Tip You can alternatively apply the following YAML to add the volume: Example 7.1. Sample deployment config with an added volume kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP 1 Add the volume source emptyDir . To add volume v1 with secret secret1 for replication controller r1 and mount inside the containers at /data : USD oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data Tip You can alternatively apply the following YAML to add the volume: Example 7.2. Sample replication controller with added volume and secret kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data 1 Add the volume and secret. 
2 Add the container mount path. To add existing persistent volume v1 with claim name pvc1 to deployment configuration dc.json on disk, mount the volume on container c1 at /data , and update the DeploymentConfig object on the server: USD oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim \ --claim-name=pvc1 --mount-path=/data --containers=c1 Tip You can alternatively apply the following YAML to add the volume: Example 7.3. Sample deployment config with persistent volume added kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data 1 Add the persistent volume claim named `pvc1. 2 Add the container mount path. To add a volume v1 based on Git repository https://github.com/namespace1/project1 with revision 5125c45f9f563 for all replication controllers: USD oc set volume rc --all --add --name=v1 \ --source='{"gitRepo": { "repository": "https://github.com/namespace1/project1", "revision": "5125c45f9f563" }}' 7.3.5. Updating volumes and volume mounts in a pod You can modify the volumes and volume mounts in a pod. Procedure Updating existing volumes using the --overwrite option: USD oc set volume <object_type>/<name> --add --overwrite [options] For example: To replace existing volume v1 for replication controller r1 with existing persistent volume claim pvc1 : USD oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 Tip You can alternatively apply the following YAML to replace the volume: Example 7.4. Sample replication controller with persistent volume claim named pvc1 kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data 1 Set persistent volume claim to pvc1 . To change the DeploymentConfig object d1 mount point to /opt for volume v1 : USD oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt Tip You can alternatively apply the following YAML to change the mount point: Example 7.5. Sample deployment config with mount point set to opt . kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt 1 Set the mount point to /opt . 7.3.6. Removing volumes and volume mounts from a pod You can remove a volume or volume mount from a pod. 
Procedure To remove a volume from pod templates: USD oc set volume <object_type>/<name> --remove [options] Table 7.3. Supported options for removing volumes Option Description Default --name Name of the volume. -c, --containers Select containers by name. It can also take wildcard '*' that matches any character. '*' --confirm Indicate that you want to remove multiple volumes at once. -o, --output Display the modified objects instead of updating them on the server. Supported values: json , yaml . --output-version Output the modified objects with the given version. api-version For example: To remove a volume v1 from the DeploymentConfig object d1 : USD oc set volume dc/d1 --remove --name=v1 To unmount volume v1 from container c1 for the DeploymentConfig object d1 and remove the volume v1 if it is not referenced by any containers on d1 : USD oc set volume dc/d1 --remove --name=v1 --containers=c1 To remove all volumes for replication controller r1 : USD oc set volume rc/r1 --remove --confirm 7.3.7. Configuring volumes for multiple uses in a pod You can configure a volume to share one volume for multiple uses in a single pod using the volumeMounts.subPath property to specify a subPath value inside a volume instead of the volume's root. Note You cannot add a subPath parameter to an existing scheduled pod. Procedure To view the list of files in the volume, run the oc rsh command: USD oc rsh <pod> Example output sh-4.2USD ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3 Specify the subPath : Example Pod spec with subPath parameter apiVersion: v1 kind: Pod metadata: name: my-site spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data 1 Databases are stored in the mysql folder. 2 HTML content is stored in the html folder. 7.4. Mapping volumes using projected volumes A projected volume maps several existing volume sources into the same directory. The following types of volume sources can be projected: Secrets Config Maps Downward API Note All sources are required to be in the same namespace as the pod. 7.4.1. Understanding projected volumes Projected volumes can map any combination of these volume sources into a single directory, allowing the user to: automatically populate a single volume with the keys from multiple secrets, config maps, and with downward API information, so that I can synthesize a single directory with various sources of information; populate a single volume with the keys from multiple secrets, config maps, and with downward API information, explicitly specifying paths for each item, so that I can have full control over the contents of that volume. Important When the RunAsUser permission is set in the security context of a Linux-based pod, the projected files have the correct permissions set, including container user ownership. However, when the Windows equivalent RunAsUsername permission is set in a Windows pod, the kubelet is unable to correctly set ownership on the files in the projected volume. 
Therefore, the RunAsUsername permission set in the security context of a Windows pod is not honored for Windows projected volumes running in OpenShift Dedicated. The following general scenarios show how you can use projected volumes. Config map, secrets, Downward API. Projected volumes allow you to deploy containers with configuration data that includes passwords. An application using these resources could be deploying Red Hat OpenStack Platform (RHOSP) on Kubernetes. The configuration data might have to be assembled differently depending on if the services are going to be used for production or for testing. If a pod is labeled with production or testing, the downward API selector metadata.labels can be used to produce the correct RHOSP configs. Config map + secrets. Projected volumes allow you to deploy containers involving configuration data and passwords. For example, you might execute a config map with some sensitive encrypted tasks that are decrypted using a vault password file. ConfigMap + Downward API. Projected volumes allow you to generate a config including the pod name (available via the metadata.name selector). This application can then pass the pod name along with requests to easily determine the source without using IP tracking. Secrets + Downward API. Projected volumes allow you to use a secret as a public key to encrypt the namespace of the pod (available via the metadata.namespace selector). This example allows the Operator to use the application to deliver the namespace information securely without using an encrypted transport. 7.4.1.1. Example Pod specs The following are examples of Pod specs for creating projected volumes. Pod with a secret, a Downward API, and a config map apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: "/projected-volume" 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: "labels" fieldRef: fieldPath: metadata.labels - path: "cpu_limit" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11 1 Add a volumeMounts section for each container that needs the secret. 2 Specify a path to an unused directory where the secret will appear. 3 Set readOnly to true . 4 Add a volumes block to list each projected volume source. 5 Specify any name for the volume. 6 Set the execute permission on the files. 7 Add a secret. Enter the name of the secret object. Each secret you want to use must be listed. 8 Specify the path to the secrets file under the mountPath . Here, the secrets file is in /projected-volume/my-group/my-username . 9 Add a Downward API source. 10 Add a ConfigMap source. 11 Set the mode for the specific projection Note If there are multiple containers in the pod, each container needs a volumeMounts section, but only one volumes section is needed. 
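To make that note concrete, the following minimal sketch (not taken from this document; the pod name, container names, and images are placeholders, and the security context fields shown in the surrounding examples are omitted for brevity) defines one projected volume and mounts it from two containers:
apiVersion: v1
kind: Pod
metadata:
  name: shared-projected-volume
spec:
  containers:
  # first container mounts the shared projected volume
  - name: app
    image: busybox
    volumeMounts:
    - name: all-in-one
      mountPath: "/projected-volume"
      readOnly: true
  # second container repeats only the volumeMounts section
  - name: sidecar
    image: busybox
    volumeMounts:
    - name: all-in-one
      mountPath: "/projected-volume"
      readOnly: true
  volumes:
  # the projected volume itself is declared once for the whole pod
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: mysecret
Both containers see the same projected files under /projected-volume , while the volumes section appears only once in the pod spec.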
Pod with multiple secrets with a non-default permission mode set apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - key: password path: my-group/my-password mode: 511 Note The defaultMode can only be specified at the projected level and not for each volume source. However, as illustrated above, you can explicitly set the mode for each individual projection. 7.4.1.2. Pathing Considerations Collisions Between Keys when Configured Paths are Identical If you configure any keys with the same path, the pod spec will not be accepted as valid. In the following example, the specified path for mysecret and myconfigmap are the same: apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data Consider the following situations related to the volume file paths. Collisions Between Keys without Configured Paths The only run-time validation that can occur is when all the paths are known at pod creation, similar to the above scenario. Otherwise, when a conflict occurs the most recent specified resource will overwrite anything preceding it (this is true for resources that are updated after pod creation as well). Collisions when One Path is Explicit and the Other is Automatically Projected In the event that there is a collision due to a user specified path matching data that is automatically projected, the latter resource will overwrite anything preceding it as before 7.4.2. Configuring a Projected Volume for a Pod When creating projected volumes, consider the volume file path situations described in Understanding projected volumes . The following example shows how to use a projected volume to mount an existing secret volume source. The steps can be used to create a user name and password secrets from local files. You then create a pod that runs one container, using a projected volume to mount the secrets into the same shared directory. The user name and password values can be any valid string that is base64 encoded. The following example shows admin in base64: USD echo -n "admin" | base64 Example output YWRtaW4= The following example shows the password 1f2d1e2e67df in base64: USD echo -n "1f2d1e2e67df" | base64 Example output MWYyZDFlMmU2N2Rm Procedure To use a projected volume to mount an existing secret volume source. 
Create the secret: Create a YAML file similar to the following, replacing the password and user information as appropriate: apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= Use the following command to create the secret: USD oc create -f <secrets-filename> For example: USD oc create -f secret.yaml Example output secret "mysecret" created You can check that the secret was created using the following commands: USD oc get secret <secret-name> For example: USD oc get secret mysecret Example output NAME TYPE DATA AGE mysecret Opaque 2 17h USD oc get secret <secret-name> -o yaml For example: USD oc get secret mysecret -o yaml apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: "2107" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque Create a pod with a projected volume. Create a YAML file similar to the following, including a volumes section: kind: Pod metadata: name: test-projected-volume spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-projected-volume image: busybox args: - sleep - "86400" volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1 1 The name of the secret you created. Create the pod from the configuration file: USD oc create -f <your_yaml_file>.yaml For example: USD oc create -f secret-pod.yaml Example output pod "test-projected-volume" created Verify that the pod container is running, and then watch for changes to the pod: USD oc get pod <name> For example: USD oc get pod test-projected-volume The output should appear similar to the following: Example output NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s In another terminal, use the oc exec command to open a shell to the running container: USD oc exec -it <pod> <command> For example: USD oc exec -it test-projected-volume -- /bin/sh In your shell, verify that the projected-volumes directory contains your projected sources: / # ls Example output bin home root tmp dev proc run usr etc projected-volume sys var 7.5. Allowing containers to consume API objects The Downward API is a mechanism that allows containers to consume information about API objects without coupling to OpenShift Dedicated. Such information includes the pod's name, namespace, and resource values. Containers can consume information from the downward API using environment variables or a volume plugin. 7.5.1. Expose pod information to Containers using the Downward API The Downward API contains such information as the pod's name, project, and resource values. Containers can consume information from the downward API using environment variables or a volume plugin. Fields within the pod are selected using the FieldRef API type. FieldRef has two fields: Field Description fieldPath The path of the field to select, relative to the pod. apiVersion The API version to interpret the fieldPath selector within. Currently, the valid selectors in the v1 API include: Selector Description metadata.name The pod's name. This is supported in both environment variables and volumes. metadata.namespace The pod's namespace.This is supported in both environment variables and volumes. 
metadata.labels The pod's labels. This is only supported in volumes and not in environment variables. metadata.annotations The pod's annotations. This is only supported in volumes and not in environment variables. status.podIP The pod's IP. This is only supported in environment variables and not volumes. The apiVersion field, if not specified, defaults to the API version of the enclosing pod template. 7.5.2. Understanding how to consume container values using the downward API Your containers can consume API values using environment variables or a volume plugin. Depending on the method you choose, containers can consume: Pod name Pod project/namespace Pod annotations Pod labels Annotations and labels are available using only a volume plugin. 7.5.2.1. Consuming container values using environment variables When using a container's environment variables, use the EnvVar type's valueFrom field (of type EnvVarSource ) to specify that the variable's value should come from a FieldRef source instead of the literal value specified by the value field. Only constant attributes of the pod can be consumed this way, as environment variables cannot be updated once a process is started in a way that allows the process to be notified that the value of a variable has changed. The fields supported using environment variables are: Pod name Pod project/namespace Procedure Create a new pod spec that contains the environment variables you want the container to consume: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_POD_NAME and MY_POD_NAMESPACE values: USD oc logs -p dapi-env-test-pod 7.5.2.2. Consuming container values using a volume plugin Your containers can consume API values using a volume plugin. Containers can consume: Pod name Pod project/namespace Pod annotations Pod labels Procedure To use the volume plugin: Create a new pod spec that contains the environment variables you want the container to consume: Create a volume-pod.yaml file similar to the following: kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: "345" annotation2: "456" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: ["sh", "-c", "cat /tmp/etc/pod_labels /tmp/etc/pod_annotations"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations restartPolicy: Never # ...
Create the pod from the volume-pod.yaml file: USD oc create -f volume-pod.yaml Verification Check the container's logs and verify the presence of the configured fields: USD oc logs -p dapi-volume-test-pod Example output cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api 7.5.3. Understanding how to consume container resources using the Downward API When creating pods, you can use the Downward API to inject information about computing resource requests and limits so that image and application authors can correctly create an image for specific environments. You can do this using environment variable or a volume plugin. 7.5.3.1. Consuming container resources using environment variables When creating pods, you can use the Downward API to inject information about computing resource requests and limits using environment variables. When creating the pod configuration, specify environment variables that correspond to the contents of the resources field in the spec.container field. Note If the resource limits are not included in the container configuration, the downward API defaults to the node's CPU and memory allocatable values. Procedure Create a new pod spec that contains the resources you want to inject: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ "/bin/sh", "-c", "env" ] resources: requests: memory: "32Mi" cpu: "125m" limits: memory: "64Mi" cpu: "250m" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml 7.5.3.2. Consuming container resources using a volume plugin When creating pods, you can use the Downward API to inject information about computing resource requests and limits using a volume plugin. When creating the pod configuration, use the spec.volumes.downwardAPI.items field to describe the desired resources that correspond to the spec.resources field. Note If the resource limits are not included in the container configuration, the Downward API defaults to the node's CPU and memory allocatable values. 
Procedure Create a new pod spec that contains the resources you want to inject: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: ["sh", "-c", "while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done"] resources: requests: memory: "32Mi" cpu: "125m" limits: memory: "64Mi" cpu: "250m" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: "cpu_limit" resourceFieldRef: containerName: client-container resource: limits.cpu - path: "cpu_request" resourceFieldRef: containerName: client-container resource: requests.cpu - path: "mem_limit" resourceFieldRef: containerName: client-container resource: limits.memory - path: "mem_request" resourceFieldRef: containerName: client-container resource: requests.memory # ... Create the pod from the volume-pod.yaml file: USD oc create -f volume-pod.yaml 7.5.4. Consuming secrets using the Downward API When creating pods, you can use the downward API to inject secrets so image and application authors can create an image for specific environments. Procedure Create a secret to inject: Create a secret.yaml file similar to the following: apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth Create the secret object from the secret.yaml file: USD oc create -f secret.yaml Create a pod that references the username field from the above Secret object: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_SECRET_USERNAME value: USD oc logs -p dapi-env-test-pod 7.5.5. Consuming configuration maps using the Downward API When creating pods, you can use the Downward API to inject configuration map values so image and application authors can create an image for specific environments. Procedure Create a config map with the values to inject: Create a configmap.yaml file similar to the following: apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue Create the config map from the configmap.yaml file: USD oc create -f configmap.yaml Create a pod that references the above config map: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Always # ... 
Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_CONFIGMAP_VALUE value: USD oc logs -p dapi-env-test-pod 7.5.6. Referencing environment variables When creating pods, you can reference the value of a previously defined environment variable by using the USD() syntax. If the environment variable reference can not be resolved, the value will be left as the provided string. Procedure Create a pod that references an existing environment variable: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: USD(MY_EXISTING_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_ENV_VAR_REF_ENV value: USD oc logs -p dapi-env-test-pod 7.5.7. Escaping environment variable references When creating a pod, you can escape an environment variable reference by using a double dollar sign. The value will then be set to a single dollar sign version of the provided value. Procedure Create a pod that references an existing environment variable: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_NEW_ENV value: USDUSD(SOME_OTHER_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_NEW_ENV value: USD oc logs -p dapi-env-test-pod 7.6. Copying files to or from an OpenShift Dedicated container You can use the CLI to copy local files to or from a remote directory in a container using the rsync command. 7.6.1. Understanding how to copy files The oc rsync command, or remote sync, is a useful tool for copying database archives to and from your pods for backup and restore purposes. You can also use oc rsync to copy source code changes into a running pod for development debugging, when the running pod supports hot reload of source files. USD oc rsync <source> <destination> [-c <container>] 7.6.1.1. Requirements Specifying the Copy Source The source argument of the oc rsync command must point to either a local directory or a pod directory. Individual files are not supported. When specifying a pod directory the directory name must be prefixed with the pod name: <pod name>:<dir> If the directory name ends in a path separator ( / ), only the contents of the directory are copied to the destination. Otherwise, the directory and its contents are copied to the destination. Specifying the Copy Destination The destination argument of the oc rsync command must point to a directory. If the directory does not exist, but rsync is used for copy, the directory is created for you. Deleting Files at the Destination The --delete flag may be used to delete any files in the remote directory that are not in the local directory. 
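For example, a minimal sketch of such a call (reusing the placeholder pod and directory names from the examples that follow; verify the destination path first, because remote-only files are removed):
USD oc rsync --delete /home/user/source/ devpod1234:/src
Because the source ends in a path separator, only the contents of /home/user/source are copied, and any file already in /src that has no local counterpart is deleted, leaving the pod directory as an exact mirror of the local one.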
Continuous Syncing on File Change Using the --watch option causes the command to monitor the source path for any file system changes, and synchronizes changes when they occur. With this argument, the command runs forever. Synchronization occurs after short quiet periods to ensure a rapidly changing file system does not result in continuous synchronization calls. When using the --watch option, the behavior is effectively the same as manually invoking oc rsync repeatedly, including any arguments normally passed to oc rsync . Therefore, you can control the behavior via the same flags used with manual invocations of oc rsync , such as --delete . 7.6.2. Copying files to and from containers Support for copying local files to or from a container is built into the CLI. Prerequisites When working with oc rsync , note the following: rsync must be installed. The oc rsync command uses the local rsync tool, if present on the client machine and the remote container. If rsync is not found locally or in the remote container, a tar archive is created locally and sent to the container where the tar utility is used to extract the files. If tar is not available in the remote container, the copy will fail. The tar copy method does not provide the same functionality as oc rsync . For example, oc rsync creates the destination directory if it does not exist and only sends files that are different between the source and the destination. Note In Windows, the cwRsync client should be installed and added to the PATH for use with the oc rsync command. Procedure To copy a local directory to a pod directory: USD oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name> For example: USD oc rsync /home/user/source devpod1234:/src -c user-container To copy a pod directory to a local directory: USD oc rsync devpod1234:/src /home/user/source Example output USD oc rsync devpod1234:/src/status.txt /home/user/ 7.6.3. Using advanced Rsync features The oc rsync command exposes fewer command line options than standard rsync . In the case that you want to use a standard rsync command line option that is not available in oc rsync , for example the --exclude-from=FILE option, it might be possible to use standard rsync 's --rsh ( -e ) option or RSYNC_RSH environment variable as a workaround, as follows: USD rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir> or: Export the RSYNC_RSH variable: USD export RSYNC_RSH='oc rsh' Then, run the rsync command: USD rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir> Both of the above examples configure standard rsync to use oc rsh as its remote shell program to enable it to connect to the remote pod, and are an alternative to running oc rsync . 7.7. Executing remote commands in an OpenShift Dedicated container You can use the CLI to execute remote commands in an OpenShift Dedicated container. 7.7.1. Executing remote commands in containers Support for remote container command execution is built into the CLI. Procedure To run a command in a container: USD oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>] For example: USD oc exec mypod date Example output Thu Apr 9 02:21:53 UTC 2015 Important For security purposes , the oc exec command does not work when accessing privileged containers except when the command is executed by a cluster-admin user. 7.7.2. 
Protocol for initiating a remote command from a client Clients initiate the execution of a remote command in a container by issuing a request to the Kubernetes API server: /proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command> In the above URL: <node_name> is the FQDN of the node. <namespace> is the project of the target pod. <pod> is the name of the target pod. <container> is the name of the target container. <command> is the desired command to be executed. For example: /proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date Additionally, the client can add parameters to the request to indicate if: the client should send input to the remote container's command (stdin). the client's terminal is a TTY. the remote container's command should send output from stdout to the client. the remote container's command should send output from stderr to the client. After sending an exec request to the API server, the client upgrades the connection to one that supports multiplexed streams; the current implementation uses HTTP/2 . The client creates one stream each for stdin, stdout, and stderr. To distinguish among the streams, the client sets the streamType header on the stream to one of stdin , stdout , or stderr . The client closes all streams, the upgraded connection, and the underlying connection when it is finished with the remote command execution request. 7.8. Using port forwarding to access applications in a container OpenShift Dedicated supports port forwarding to pods. 7.8.1. Understanding port forwarding You can use the CLI to forward one or more local ports to a pod. This allows you to listen on a given or random port locally, and have data forwarded to and from given ports in the pod. Support for port forwarding is built into the CLI: USD oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>] The CLI listens on each local port specified by the user, forwarding using the protocol described below. Ports may be specified using the following formats: 5000 The client listens on port 5000 locally and forwards to 5000 in the pod. 6000:5000 The client listens on port 6000 locally and forwards to 5000 in the pod. :5000 or 0:5000 The client selects a free local port and forwards to 5000 in the pod. OpenShift Dedicated handles port-forward requests from clients. Upon receiving a request, OpenShift Dedicated upgrades the response and waits for the client to create port-forwarding streams. When OpenShift Dedicated receives a new stream, it copies data between the stream and the pod's port. Architecturally, there are options for forwarding to a pod's port. The supported OpenShift Dedicated implementation invokes nsenter directly on the node host to enter the pod's network namespace, then invokes socat to copy data between the stream and the pod's port. However, a custom implementation could include running a helper pod that then runs nsenter and socat , so that those binaries are not required to be installed on the host. 7.8.2. Using port forwarding You can use the CLI to port-forward one or more local ports to a pod. 
Procedure Use the following command to listen on the specified port in a pod: USD oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>] For example: Use the following command to listen on ports 5000 and 6000 locally and forward data to and from ports 5000 and 6000 in the pod: USD oc port-forward <pod> 5000 6000 Example output Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000 Use the following command to listen on port 8888 locally and forward to 5000 in the pod: USD oc port-forward <pod> 8888:5000 Example output Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000 Use the following command to listen on a free port locally and forward to 5000 in the pod: USD oc port-forward <pod> :5000 Example output Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000 Or: USD oc port-forward <pod> 0:5000 7.8.3. Protocol for initiating port forwarding from a client Clients initiate port forwarding to a pod by issuing a request to the Kubernetes API server: In the above URL: <node_name> is the FQDN of the node. <namespace> is the namespace of the target pod. <pod> is the name of the target pod. For example: After sending a port forward request to the API server, the client upgrades the connection to one that supports multiplexed streams; the current implementation uses Hypertext Transfer Protocol Version 2 (HTTP/2) . The client creates a stream with the port header containing the target port in the pod. All data written to the stream is delivered via the kubelet to the target pod and port. Similarly, all data sent from the pod for that forwarded connection is delivered back to the same stream in the client. The client closes all streams, the upgraded connection, and the underlying connection when it is finished with the port forwarding request. | [
"USD(nproc) X 1/2 MiB",
"for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1",
"curl -X POST http://USDMANAGEMENT_SERVICE_HOST:USDMANAGEMENT_SERVICE_PORT/register -d 'instance=USD()&ip=USD()'",
"apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: myapp-container image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'echo The app is running! && sleep 3600'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: init-mydb image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc create -f myapp.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s",
"kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376",
"oc create -f myservice.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s",
"kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377",
"oc create -f mydb.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m",
"oc set volume <object_selection> <operation> <mandatory_parameters> <options>",
"oc set volume <object_type>/<name> [options]",
"oc set volume pod/p1",
"oc set volume dc --all --name=v1",
"oc set volume <object_type>/<name> --add [options]",
"oc set volume dc/registry --add",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP",
"oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data",
"kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data",
"oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 --mount-path=/data --containers=c1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data",
"oc set volume rc --all --add --name=v1 --source='{\"gitRepo\": { \"repository\": \"https://github.com/namespace1/project1\", \"revision\": \"5125c45f9f563\" }}'",
"oc set volume <object_type>/<name> --add --overwrite [options]",
"oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1",
"kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data",
"oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt",
"oc set volume <object_type>/<name> --remove [options]",
"oc set volume dc/d1 --remove --name=v1",
"oc set volume dc/d1 --remove --name=v1 --containers=c1",
"oc set volume rc/r1 --remove --confirm",
"oc rsh <pod>",
"sh-4.2USD ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3",
"apiVersion: v1 kind: Pod metadata: name: my-site spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: \"/projected-volume\" 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: \"labels\" fieldRef: fieldPath: metadata.labels - path: \"cpu_limit\" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - key: password path: my-group/my-password mode: 511",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data",
"echo -n \"admin\" | base64",
"YWRtaW4=",
"echo -n \"1f2d1e2e67df\" | base64",
"MWYyZDFlMmU2N2Rm",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4=",
"oc create -f <secrets-filename>",
"oc create -f secret.yaml",
"secret \"mysecret\" created",
"oc get secret <secret-name>",
"oc get secret mysecret",
"NAME TYPE DATA AGE mysecret Opaque 2 17h",
"oc get secret <secret-name> -o yaml",
"oc get secret mysecret -o yaml",
"apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: \"2107\" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque",
"kind: Pod metadata: name: test-projected-volume spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-projected-volume image: busybox args: - sleep - \"86400\" volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1",
"oc create -f <your_yaml_file>.yaml",
"oc create -f secret-pod.yaml",
"pod \"test-projected-volume\" created",
"oc get pod <name>",
"oc get pod test-projected-volume",
"NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s",
"oc exec -it <pod> <command>",
"oc exec -it test-projected-volume -- /bin/sh",
"/ # ls",
"bin home root tmp dev proc run usr etc projected-volume sys var",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: \"345\" annotation2: \"456\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: [\"sh\", \"-c\", \"cat /tmp/etc/pod_labels /tmp/etc/pod_annotations\"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations restartPolicy: Never",
"oc create -f volume-pod.yaml",
"oc logs -p dapi-volume-test-pod",
"cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ \"/bin/sh\", \"-c\", \"env\" ] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory",
"oc create -f pod.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: [\"sh\", \"-c\", \"while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done\"] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: \"cpu_limit\" resourceFieldRef: containerName: client-container resource: limits.cpu - path: \"cpu_request\" resourceFieldRef: containerName: client-container resource: requests.cpu - path: \"mem_limit\" resourceFieldRef: containerName: client-container resource: limits.memory - path: \"mem_request\" resourceFieldRef: containerName: client-container resource: requests.memory",
"oc create -f volume-pod.yaml",
"apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth",
"oc create -f secret.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue",
"oc create -f configmap.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Always",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: USD(MY_EXISTING_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_NEW_ENV value: USDUSD(SOME_OTHER_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"oc rsync <source> <destination> [-c <container>]",
"<pod name>:<dir>",
"oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name>",
"oc rsync /home/user/source devpod1234:/src -c user-container",
"oc rsync devpod1234:/src /home/user/source",
"oc rsync devpod1234:/src/status.txt /home/user/",
"rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>",
"export RSYNC_RSH='oc rsh'",
"rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>",
"oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>]",
"oc exec mypod date",
"Thu Apr 9 02:21:53 UTC 2015",
"/proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command>",
"/proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date",
"oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]",
"oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]",
"oc port-forward <pod> 5000 6000",
"Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000",
"oc port-forward <pod> 8888:5000",
"Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000",
"oc port-forward <pod> :5000",
"Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000",
"oc port-forward <pod> 0:5000",
"/proxy/nodes/<node_name>/portForward/<namespace>/<pod>",
"/proxy/nodes/node123.openshift.com/portForward/myns/mypod"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/nodes/working-with-containers |
14.12. Supported qemu-img Formats | 14.12. Supported qemu-img Formats When a format is specified in any of the qemu-img commands, the following format types may be used: raw - Raw disk image format (default). This can be the fastest file-based format. If your file system supports holes (for example in ext2 or ext3 ), then only the written sectors will reserve space. Use qemu-img info to obtain the real size used by the image or ls -ls on Unix/Linux. Although Raw images give optimal performance, only very basic features are available with a Raw image. For example, no snapshots are available. qcow2 - QEMU image format, the most versatile format with the best feature set. Use it to have optional AES encryption, zlib-based compression, support of multiple VM snapshots, and smaller images, which are useful on file systems that do not support holes . Note that this expansive feature set comes at the cost of performance. Although only the formats above can be used to run on a guest virtual machine or host physical machine, qemu-img also recognizes and supports the following formats in order to convert from them into either raw or qcow2 format. The format of an image is usually detected automatically. In addition to converting these formats into raw or qcow2 , they can be converted back from raw or qcow2 to the original format. Note that the qcow2 version supplied with Red Hat Enterprise Linux 7 is 1.1. The format that is supplied with previous versions of Red Hat Enterprise Linux is 0.10. You can revert image files to previous versions of qcow2. To know which version you are using, run the qemu-img info [imagefilename.img] command. To change the qcow2 version, see Section 23.19.2, "Setting Target Elements" . bochs - Bochs disk image format. cloop - Linux Compressed Loop image, useful only to reuse directly compressed CD-ROM images present for example in the Knoppix CD-ROMs. cow - User Mode Linux Copy On Write image format. The cow format is included only for compatibility with previous versions. dmg - Mac disk image format. nbd - Network block device. parallels - Parallels virtualization disk image format. qcow - Old QEMU image format. Only included for compatibility with older versions. qed - Old QEMU image format. Only included for compatibility with older versions. vdi - Oracle VM VirtualBox hard disk image format. vhdx - Microsoft Hyper-V virtual hard disk-X disk image format. vmdk - VMware 3 and 4 compatible image format. vvfat - Virtual VFAT disk image format. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-using_qemu_img-supported_qemu_img_formats
2.4. Battery Life Tool Kit | 2.4. Battery Life Tool Kit Red Hat Enterprise Linux 7 introduces the Battery Life Tool Kit ( BLTK ), a test suite that simulates and analyzes battery life and performance. BLTK achieves this by performing sets of tasks that simulate specific user groups and reporting on the results. Although developed specifically to test notebook performance, BLTK can also report on the performance of desktop computers when started with the -a option. BLTK allows you to generate very reproducible workloads that are comparable to real use of a machine. For example, the office workload writes a text, corrects things in it, and does the same for a spreadsheet. Running BLTK combined with PowerTOP or any of the other auditing or analysis tools allows you to test if the optimizations you performed have an effect when the machine is actively in use instead of only idling. Because you can run the exact same workload multiple times for different settings, you can compare results for different settings. Install BLTK with the command: Run BLTK with the command: For example, to run the idle workload for 120 seconds: The workloads available by default are: -I , --idle system is idle, to use as a baseline for comparison with other workloads -R , --reader simulates reading documents (by default, with Firefox ) -P , --player simulates watching multimedia files from a CD or DVD drive (by default, with mplayer ) -O , --office simulates editing documents with the OpenOffice.org suite Other options allow you to specify: -a , --ac-ignore ignore whether AC power is available (necessary for desktop use) -T number_of_seconds , --time number_of_seconds the time (in seconds) over which to run the test; use this option with the idle workload -F filename , --file filename specifies a file to be used by a particular workload, for example, a file for the player workload to play instead of accessing the CD or DVD drive -W application , --prog application specifies an application to be used by a particular workload, for example, a browser other than Firefox for the reader workload BLTK supports a large number of more specialized options. For details, see the bltk man page. BLTK saves the results that it generates in a directory specified in the /etc/bltk.conf configuration file - by default, ~/.bltk/workload.results.number/ . For example, the ~/.bltk/reader.results.002/ directory holds the results of the third test with the reader workload (the first test is not numbered). The results are spread across several text files. To condense these results into a format that is easy to read, run: The results now appear in a text file named Report in the results directory. To view the results in a terminal emulator instead, use the -o option: | [
"~]# yum install bltk",
"~]USD bltk workload options",
"~]USD bltk -I -T 120",
"~]USD bltk_report path_to_results_directory",
"~]USD bltk_report -o path_to_results_directory"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/bltk |
Chapter 2. Using roles in Directory Server | Chapter 2. Using roles in Directory Server You can group Directory Server entries by using roles. Roles behave as both static and dynamic groups. Roles are easier to use than groups because they are more flexible in their implementation. For example, an application can get the list of roles to which an entry belongs by querying the entry itself rather than selecting a group and browsing the members list of several groups. You can manage roles by using the command line or the web console . 2.1. Roles in Directory Server A role behaves as both a static and a dynamic group, similarly to a hybrid group: With a group, Directory Server adds entries to the group entry as members. With a role, Directory Server adds the role attribute to the entry and then uses this attribute to automatically identify members in the role entry. Role members are entries that possess the role. You can specify members of the role explicitly or dynamically depending on the role type. Directory Server supports the following types of roles: Managed roles Managed roles have an explicit list of members. You can use managed roles to perform the same tasks that you perform with static groups. Filtered roles You can filter the role members by using filtered roles, similarly to filtering with dynamic groups. Directory Server assigns entries to a filtered role depending on whether the entry possesses a specific attribute defined in the role. Nested roles Nested roles can contain managed and filtered roles. When you create a role, determine if users can add or remove themselves from the role. For more details, see Section 2.2, "Using roles securely in Directory Server" . Note Evaluating roles is more resource-intensive for the Directory Server than evaluating groups because the server does the work for the client application. With roles, the client application can check role membership by searching for the nsRole attribute. The nsRole attribute is a computed attribute that identifies which roles an entry belongs to. Directory Server does not store the nsRole attribute. From the client application point of view, the method for checking membership is uniform and is performed on the server side. Find considerations for using roles in Deciding between groups and roles in the Planning and designing a directory service documentation. Additional resources Managing groups in Directory Server 2.2. Using roles securely in Directory Server When creating a new role, consider if users can easily add or remove themselves from a role. For example, you can allow users of the Mountain Biking interest group role to add or remove themselves easily. However, you must not allow users who are assigned the Marketing role to add or remove themselves from the role. One potential security risk is inactivating user accounts by inactivating roles. Inactive roles have special access control instructions (ACIs) defined for their suffix. If an administrator allows users to add and remove themselves from roles freely, these users can remove themselves from an inactive role to unlock their accounts. For example, a user is assigned a managed role. When Directory Server locks this managed role by using account inactivation, the user cannot bind to the server because Directory Server computes the nsAccountLock attribute as true for that user.
However, if the user was already bound to Directory Server and now is locked through the managed role, the user can remove the nsRoleDN attribute from his entry and unlock himself if no restricting ACIs are specified. To prevent users from removing the nsRoleDN attribute, use the following ACIs depending on the type of role: Managed roles. For entries that are members of a managed role, use the following ACI: Filtered roles. Protect attributes that are part of the filter ( nsRoleFilter ). Do not allow a user to add, delete, or modify the attribute that the filtered role uses. If Directory Server computes the value of the filter attribute, then you must protect all attributes that can modify this filter attribute value. Nested roles. A nested role can contain filtered and managed roles. Thus, you must restrict modify operations in ACIs for each attribute of the roles that the nested role contains. Additional resources Manually inactivating users and roles 2.3. Managing roles in Directory Server by using the command line You can view, create, and delete roles by using the command line. 2.3.1. Creating a managed role in Directory Server Managed roles are roles that have an explicit enumerated list of members. You can use the ldapmodify utility to create a managed role. The following example creates a managed role for a marketing team. Prerequisites The ou=people,dc=example,dc=com parent entry exists in Directory Server. The cn=Bob Jones,ou=people,dc=example,dc=com user entry exists in Directory Server. Procedure Create the cn=Marketing managed role entry by using the ldapmodify command with the -a option: The managed role entry must contain the following object classes: LDAPsubentry nsRoleDefinition nsSimpleRoleDefinition nsManagedRoleDefinition Assign the cn=Marketing,ou=people,dc=example,dc=com managed role to the cn=Bob Jones,ou=people,dc=example,dc=com user entry by adding the nsRoleDN attribute to this user entry: Optional: Configure the equality index for the nsRoleDN attribute in the userRoot database to avoid unindexed searches: Verification List user entries that now belong to the cn=Marketing,ou=people,dc=example,dc=com managed role: Additional resources Creating a role in the LDAP Browser Providing input to the ldapadd, ldapmodify, and ldapdelete utilities Maintaining the indexes of a specific database using the command line ldapmodify(1) man page 2.3.2. Creating a filtered role in Directory Server Directory Server assigns entries to a filtered role if the entries have a specific attribute defined in the role. The role definition specifies the nsRoleFilter LDAP filter. Entries that match the filter are members of the role. You can use ldapmodify utility to create a filtered role. The following example creates a filtered role for sales department managers. Prerequisites The ou=people,dc=example,dc=com parent entry exists in Directory Server. Procedure Create the cn=SalesManagerFilter filtered role entry by using the ldapmodify command with the -a option: The cn=SalesManagerFilter filtered role entry has the o=sales managers filter for the role. All user entries that have the o attribute with the value of sales managers are members of the filtered role. 
Example of the user entry that is now a member of the filtered role: The filtered role entry must have the following object classes: LDAPsubentry nsRoleDefinition nsComplexRoleDefinition nsFilteredRoleDefinition Optional: Configure the equality index for the attribute that you use in the nsRoleFilter role filter to avoid unindexed searches. In the given example, the role uses o=sales managers as the filter. Therefore, index the o attribute to improve the search performance: Verification List user entries that now belong to the cn=SalesManagerFilter,ou=people,dc=example,dc=com filtered role: Additional resources Creating a role in the LDAP Browser Providing input to the ldapadd, ldapmodify, and ldapdelete utilities ldapmodify(1) man page 2.3.3. Creating a nested role in Directory Server Nested roles can contain managed and filtered roles. A nested role entry requires the nsRoleDN attribute to identify the roles to nest. You can use ldapmodify utility to create a nested role. The following example creates a nested role that contains the managed and the filtered roles you created in Creating a managed role in Directory Server and Creating a filtered role in Directory Server . Prerequisites The ou=people,dc=example,dc=com parent entry exists in Directory Server. Procedure Create the cn=MarketingSales nested role entry that contains the cn=SalesManagerFilter filtered role and the cn=Marketing managed role by using the ldapmodify command with the -a option: Optionally, the role can have the description attribute. The nested role entry must have the following object classes: LDAPsubentry nsRoleDefinition nsComplexRoleDefinition nsNestedRoleDefinition Verification List user entries that now belong to the cn=MarketingSales nested role: Additional resources Creating a role in the LDAP Browser Providing input to the ldapadd, ldapmodify, and ldapdelete utilities ldapmodify(1) man page 2.3.4. Viewing roles for an entry To view roles for an entry, use the ldapsearch command with explicitly specified nsRole virtual attribute. Prerequisites Roles entry exists. You assigned roles to the uid=user_name user entry. Procedure Search for the uid= user_name entry with specified nsRole virtual attribute: The command retrieves all roles which the uid= user_name user is a member of. 2.3.5. Deleting roles in Directory Server To delete a role in Directory Server, you can use ldapmodify command. The following is an example of deleting the cn=Marketing managed role from Directory Server. Procedure To delete the cn=Marketing managed role entry, enter: Note When you delete a role, Directory Server deletes only the role entry and does not delete the nsRoleDN attribute for each role member. To delete the nsRoleDN attribute for each role member, enable the Referential Integrity plug-in and configure this plug-in to manage the nsRoleDN attribute. For more information about the Referential Integrity plug-in, see Using Referential Integrity to maintain relationships between entries . Additional resources Deleting a role in the LDAP browser ldapmodify(1) man page 2.4. Managing roles in Directory Server by using the web console You can view, create, and delete roles by using LDAP browser in the web console. 2.4.1. Creating a role in the LDAP Browser You can create a role for a Red Hat Directory Server entry by using the LDAP Browser wizard in the web console. Prerequisites Access to the web console. A parent entry exists in Directory Server. Procedure Log in to the web console and click Red Hat Directory Server . 
After the web console loads the Red Hat Directory Server interface, open the LDAP Browser . Select an LDAP entry and open the Options menu. From the drop-down menu select New and click Create a new role . Follow the steps in the wizard and click the button after you complete each step. To create the role, review the role settings in the Create Role step and click the Create button. You can click the Back button to modify the role settings or click the Cancel button to cancel the role creation. To close the wizard window, click the Finish button. Verification Expand the LDAP entry and verify the new role appears among the entry parameters. 2.4.2. Deleting a role in the LDAP browser You can delete the role from the Red Hat Directory Server entry by using the LDAP Browser in the web console. Prerequisites Access to the web console. A parent entry exists in Directory Server. Procedure Log in to the web console and click Red Hat Directory Server . After the web console loads the Red Hat Directory Server interface, click LDAP browser . Expand the LDAP entry select the role which you want to delete. Open the Options menu and select Delete . Verify the data about the role you want to delete and click the button until you reach the Deletion step. Toggle the switch to the Yes, I'm sure position and click the Delete button. To close the wizard window, click the Finish button. Verification Expand the LDAP entry and verify the role is no longer a part of the entry parameters. 2.4.3. Modifying a role in the LDAP browser You can modify the role parameters for a Red Hat Directory Server entry by using the LDAP Browser in the web console. Prerequisites Access to the web console. A parent entry exists in the Red Hat Directory Server. Procedure Log in to the web console and click Red Hat Directory Server . After the web console loads the Red Hat Directory Server interface, click LDAP Browser . Expand the LDAP entry and select the role you are modifying. Click the Options menu and select Edit to modify the parameters of the role or Rename to rename the role. In the wizard window modify the necessary parameters and click after each step until you observe the LDIF Statements step. Check the updated parameters and click Modify Entry or Change Entry Name . To close the wizard window, click the Finish button. Verification Expand the LDAP entry and verify the updated parameters are listed for the role. | [
"aci: (targetattr=\"nsRoleDN\") (targattrfilters= add=nsRoleDN:(!(nsRoleDN=cn=AdministratorRole,dc=example,dc=com)), del=nsRoleDN:(!(nsRoleDN=cn=nsManagedDisabledRole,dc=example,dc=com))) (version3.0;acl \"allow mod of nsRoleDN by self but not to critical values\"; allow(write) userdn=ldap:///self;)",
"ldapmodify -a -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x << EOF dn: cn=Marketing,ou=people,dc=example,dc=com objectclass: top objectclass: LDAPsubentry objectclass: nsRoleDefinition objectclass: nsSimpleRoleDefinition objectclass: nsManagedRoleDefinition cn: Marketing description: managed role for the marketing team EOF",
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x << EOF dn: cn=Bob Jones,ou=people,dc=example,dc=com changetype: modify add: nsRoleDN nsRoleDN: cn=Marketing,ou=people,dc=example,dc=com EOF modifying entry \"cn=Bob Jones,ou=people,dc=example,dc=com\"",
"dsconf -D \" cn=Directory Manager \" ldap:// server.example.com backend index add --index-type eq --attr nsroleDN --reindex userRoot",
"ldapsearch -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x -b \"dc=example,dc=com\" \"(nsRole=cn=Marketing,ou=people,dc=example,dc=com)\" dn dn: cn=Bob Jones,ou=people,dc=example,dc=com dn: cn=Tom Devis,ou=people,dc=example,dc=com",
"ldapmodify -a -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x << EOF dn: cn=SalesManagerFilter,ou=people,dc=example,dc=com changetype: add objectclass: top objectclass: LDAPsubentry objectclass: nsRoleDefinition objectclass: nsComplexRoleDefinition objectclass: nsFilteredRoleDefinition cn: SalesManagerFilter nsRoleFilter: o=sales managers Description: filtered role for sales managers EOF",
"dn: cn=Pat Smith,ou=people,dc=example,dc=com objectclass: person cn: Pat sn: Smith userPassword: password o: sales managers",
"dsconf -D \" cn=Directory Manager \" ldap:// server.example.com backend index add --index-type eq --attr o --reindex userRoot",
"ldapsearch -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x -b \"dc=example,dc=com\" \"(nsRole=cn=SalesManagerFilter,ou=people,dc=example,dc=com)\" dn dn: cn=Jess Mor,ou=people,dc=example,dc=com dn: cn=Pat Smith,ou=people,dc=example,dc=com",
"ldapmodify -a -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x << EOF dn: cn=MarketingSales,ou=people,dc=example,dc=com objectclass: top objectclass: LDAPsubentry objectclass: nsRoleDefinition objectclass: nsComplexRoleDefinition objectclass: nsNestedRoleDefinition cn: MarketingSales nsRoleDN: cn=SalesManagerFilter,ou=people,dc=example,dc=com nsRoleDN: cn=Marketing,ou=people,dc=example,dc=com EOF",
"ldapsearch -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x -b \"dc=example,dc=com\" \"(nsRole=cn=MarketingSales,ou=people,dc=example,dc=com)\" dn dn: cn=Bob Jones,ou=people,dc=example,dc=com dn: cn=Pat Smith,ou=people,dc=example,dc=com dn: cn=Jess Mor,ou=people,dc=example,dc=com dn: cn=Tom Devis,ou=people,dc=example,dc=com",
"ldapsearch -D \"cn=Directory Manager\" -W -H ldap://server.example.com -b \"dc=example,dc=com\" -s sub -x \"(uid= user_name )\" nsRole dn: uid=user_name,ou=people,dc=example,dc=com nsRole: cn=Role for Managers,dc=example,dc=com nsRole: cn=Role for Accounting,dc=example,dc=com",
"ldapmodify -D \"cn=Directory Manager\" -W -H ldap://server.example.com -x << EOF dn: cn=Marketing,ou=People,dc=example,dc=com changetype: delete EOF deleting entry \"cn=Marketing,ou=People,dc=example,dc=com\""
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/user_management_and_authentication/assembly_using-roles-in-directory-server_user-management-and-authentication |
probe::tty.poll | probe::tty.poll Name probe::tty.poll - Called when a tty device is being polled Synopsis tty.poll Values file_name the tty file name wait_key the wait queue key | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tty-poll |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_quick_start_guide/con-conscious-language-message |
Chapter 19. Monitoring and Logging | Chapter 19. Monitoring and Logging Log management is an important component of monitoring the security status of your OpenStack deployment. Logs provide insight into the BAU actions of administrators, projects, and instances, in addition to the component activities that comprise your OpenStack deployment. Logs are not only valuable for proactive security and continuous compliance activities, but they are also a valuable information source for investigation and incident response. For example, analyzing the keystone access logs could alert you to failed logins, their frequency, origin IP, and whether the events are restricted to select accounts, among other pertinent information. The director includes intrusion detection capabilities using AIDE, and CADF auditing for keystone. For more information, see Hardening infrastructure and virtualization . 19.1. Harden the monitoring infrastructure Centralized logging systems are a high value target for intruders, as a successful breach could allow them to erase or tamper with the record of events. It is recommended you harden the monitoring platform with this in mind. In addition, consider making regular backups of these systems, with failover planning in the event of an outage or DoS. 19.2. Example events to monitor Event monitoring is a more proactive approach to securing an environment, providing real-time detection and response. Multiple tools exist which can aid in monitoring. For an OpenStack deployment, you will need to monitor the hardware, the OpenStack services, and the cloud resource usage. This section describes some example events you might need to be aware of. Important This list is not exhaustive. You will need to consider additional use cases that might apply to your specific network, and that you might consider anomalous behavior. Detecting the absence of log generation is an event of high value. Such a gap might indicate a service failure, or even an intruder who has temporarily switched off logging or modified the log level to hide their tracks. Application events, such as start or stop events, that were unscheduled might have possible security implications. Operating system events on the OpenStack nodes, such as user logins or restarts. These can provide valuable insight into distinguishing between proper and improper usage of systems. Networking bridges going down. This would be an actionable event due to the risk of service outage. IPtables flushing events on Compute nodes, and the resulting loss of access to instances. To reduce security risks from orphaned instances on a user, project, or domain deletion in the Identity service there is discussion to generate notifications in the system and have OpenStack components respond to these events as appropriate such as terminating instances, disconnecting attached volumes, reclaiming CPU and storage resources and so on. Security monitoring controls such as intrusion detection software, antivirus software, and spyware detection and removal utilities can generate logs that show when and how an attack or intrusion took place. These tools can provide a layer of protection when deployed on the OpenStack nodes. Project users might also want to run such tools on their instances. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/security_and_hardening_guide/assembly_monitoring-and-logging_security_and_hardening |
Chapter 13. Accessing the RADOS Object Gateway S3 endpoint | Chapter 13. Accessing the RADOS Object Gateway S3 endpoint Users can access the RADOS Object Gateway (RGW) endpoint directly. In previous versions of Red Hat OpenShift Data Foundation, the RGW service needed to be manually exposed to create the RGW public route. As of OpenShift Data Foundation version 4.7, the RGW route is created by default and is named rook-ceph-rgw-ocs-storagecluster-cephobjectstore. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_hybrid_and_multicloud_resources/accessing-the-rados-object-gateway-s3-endpoint_rhodf
13.2. Transferring Data Using RoCE | 13.2. Transferring Data Using RoCE RDMA over Converged Ethernet (RoCE) is a network protocol that enables remote direct memory access (RDMA) over an Ethernet network. There are two RoCE versions, RoCE v1 and RoCE v2, depending on the network adapter used. RoCE v1 The RoCE v1 protocol is an Ethernet link layer protocol with ethertype 0x8915 that enables communication between any two hosts in the same Ethernet broadcast domain. RoCE v1 is the default version for RDMA Connection Manager (RDMA_CM) when using the ConnectX-3 network adapter. RoCE v2 The RoCE v2 protocol exists on top of either the UDP over IPv4 or the UDP over IPv6 protocol. The UDP destination port number 4791 has been reserved for RoCE v2. Since Red Hat Enterprise Linux 7.5, RoCE v2 is the default version for RDMA_CM when using the ConnectX-3 Pro, ConnectX-4, ConnectX-4 Lx, and ConnectX-5 network adapters. This hardware supports both RoCE v1 and RoCE v2. RDMA Connection Manager (RDMA_CM) is used to set up a reliable connection between a client and a server for transferring data. RDMA_CM provides an RDMA transport-neutral interface for establishing connections. The communication is over a specific RDMA device, and data transfers are message-based. Prerequisites An RDMA_CM session requires one of the following: Both the client and the server support the same RoCE mode. The client supports RoCE v1 and the server supports RoCE v2. Because the client determines the mode of the connection, the following cases are possible: A successful connection: If the client runs in RoCE v1 or RoCE v2 mode, depending on the network card and driver used, the corresponding server must use the same version to create a connection. The connection is also successful if the client runs in RoCE v1 mode and the server in RoCE v2 mode. A failed connection: If the client runs in RoCE v2 mode and the corresponding server runs in RoCE v1 mode, no connection can be established. In this case, update the driver or the network adapter of the corresponding server. For an overview, see Table 13.1, "RoCE Version Defaults Using RDMA_CM". Table 13.1. RoCE Version Defaults Using RDMA_CM
Client RoCE v1, Server RoCE v1 - Connection
Client RoCE v1, Server RoCE v2 - Connection
Client RoCE v2, Server RoCE v2 - Connection
Client RoCE v2, Server RoCE v1 - No connection
Note that RoCE v2 on the client and RoCE v1 on the server are not compatible. To resolve this issue, force both the server-side and the client-side environment to communicate over RoCE v1, that is, force hardware that supports RoCE v2 to use RoCE v1: Procedure 13.1. Changing the Default RoCE Mode When the Hardware Is Already Running in RoCE v2 Change into the /sys/kernel/config/rdma_cm directory to set the RoCE mode: Enter the ibstat command with an Ethernet network device to display the status. For example, for mlx5_0 : Create a directory for the mlx5_0 device and change into it: Display the RoCE mode in the default_roce_mode file in the tree format: Change the default RoCE mode: View the changes: | [
"~]# cd /sys/kernel/config/rdma_cm",
"~]USD ibstat mlx5_0 CA 'mlx5_0' CA type: MT4115 Number of ports: 1 Firmware version: 12.17.1010 Hardware version: 0 Node GUID: 0x248a0703004bf0a4 System image GUID: 0x248a0703004bf0a4 Port 1: State: Active Physical state: LinkUp Rate: 40 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x04010000 Port GUID: 0x268a07fffe4bf0a4 Link layer: Ethernet",
"~]# mkdir mlx5_0",
"~]# cd mlx5_0",
"~]USD tree └── ports └── 1 ├── default_roce_mode └── default_roce_tos",
"~]USD cat /sys/kernel/config/rdma_cm/mlx5_0/ports/1/default_roce_mode RoCE v2",
"~]# echo \"RoCE v1\" > /sys/kernel/config/rdma_cm/mlx5_0/ports/1/default_roce_mode",
"~]USD cat /sys/kernel/config/rdma_cm/mlx5_0/ports/1/default_roce_mode RoCE v1"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Tranferring_Data_Using_RoCE |
4.7.2. GFS File Attribute | 4.7.2. GFS File Attribute The gfs_tool command can be used to assign (set) a direct I/O attribute flag, directio , to a GFS file. The directio flag can also be cleared. Usage Setting the directio Flag Clearing the directio Flag File Specifies the file where the directio flag is assigned. Example In this example, the command sets the directio flag on the file named datafile in directory /gfs1 . | [
"gfs_tool setflag directio File",
"gfs_tool clearflag directio File",
"gfs_tool setflag directio /gfs1/datafile"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s2-manage-fileattribute |
Chapter 9. cert-manager Operator for Red Hat OpenShift | Chapter 9. cert-manager Operator for Red Hat OpenShift 9.1. cert-manager Operator for Red Hat OpenShift overview The cert-manager Operator for Red Hat OpenShift is a cluster-wide service that provides application certificate lifecycle management. The cert-manager Operator for Red Hat OpenShift allows you to integrate with external certificate authorities and provides certificate provisioning, renewal, and retirement. 9.1.1. About the cert-manager Operator for Red Hat OpenShift The cert-manager project introduces certificate authorities and certificates as resource types in the Kubernetes API, which makes it possible to provide certificates on demand to developers working within your cluster. The cert-manager Operator for Red Hat OpenShift provides a supported way to integrate cert-manager into your OpenShift Container Platform cluster. The cert-manager Operator for Red Hat OpenShift provides the following features: Support for integrating with external certificate authorities Tools to manage certificates Ability for developers to self-serve certificates Automatic certificate renewal Important Do not attempt to use both cert-manager Operator for Red Hat OpenShift for OpenShift Container Platform and the community cert-manager Operator at the same time in your cluster. Also, you should not install cert-manager Operator for Red Hat OpenShift for OpenShift Container Platform in multiple namespaces within a single OpenShift cluster. 9.1.2. cert-manager Operator for Red Hat OpenShift issuer providers The cert-manager Operator for Red Hat OpenShift has been tested with the following issuer types: Automated Certificate Management Environment (ACME) Certificate Authority (CA) Self-signed Vault Venafi Nokia NetGuard Certificate Manager (NCM) Google cloud Certificate Authority Service (Google CAS) 9.1.2.1. Testing issuer types The following table outlines the test coverage for each tested issuer type: Issuer Type Test Status Notes ACME Fully Tested Verified with standard ACME implementations. CA Fully Tested Ensures basic CA functionality. Self-signed Fully Tested Ensures basic self-signed functionality. Vault Fully Tested Limited to standard Vault setups due to infrastructure access constraints. Venafi Partially tested Subject to provider-specific limitations. NCM Partially Tested Subject to provider-specific limitations. Google CAS Partially Tested Compatible with common CA configurations. Note OpenShift Container Platform does not test all factors associated with third-party cert-manager Operator for Red Hat OpenShift provider functionality. For more information about third-party support, see the OpenShift Container Platform third-party support policy . 9.1.3. Certificate request methods There are two ways to request a certificate using the cert-manager Operator for Red Hat OpenShift: Using the cert-manager.io/CertificateRequest object With this method a service developer creates a CertificateRequest object with a valid issuerRef pointing to a configured issuer (configured by a service infrastructure administrator). A service infrastructure administrator then accepts or denies the certificate request. Only accepted certificate requests create a corresponding certificate. Using the cert-manager.io/Certificate object With this method, a service developer creates a Certificate object with a valid issuerRef and obtains a certificate from a secret that they pointed to the Certificate object. 9.1.4. 
Supported cert-manager Operator for Red Hat OpenShift versions For the list of supported versions of the cert-manager Operator for Red Hat OpenShift across different OpenShift Container Platform releases, see the "Platform Agnostic Operators" section in the OpenShift Container Platform update and support policy . 9.1.5. About FIPS compliance for cert-manager Operator for Red Hat OpenShift Starting with version 1.14.0, cert-manager Operator for Red Hat OpenShift is designed for FIPS compliance. When running on OpenShift Container Platform in FIPS mode, it uses the RHEL cryptographic libraries submitted to NIST for FIPS validation on the x86_64, ppc64le, and s390X architectures. For more information about the NIST validation program, see Cryptographic module validation program . For the latest NIST status for the individual versions of the RHEL cryptographic libraries submitted for validation, see Compliance activities and government standards . To enable FIPS mode, you must install cert-manager Operator for Red Hat OpenShift on an OpenShift Container Platform cluster configured to operate in FIPS mode. For more information, see "Do you need extra security for your cluster?" 9.1.6. Additional resources cert-manager project documentation Understanding compliance Installing a cluster in FIPS mode Do you need extra security for your cluster? 9.2. cert-manager Operator for Red Hat OpenShift release notes The cert-manager Operator for Red Hat OpenShift is a cluster-wide service that provides application certificate lifecycle management. These release notes track the development of cert-manager Operator for Red Hat OpenShift. For more information, see About the cert-manager Operator for Red Hat OpenShift . 9.2.1. cert-manager Operator for Red Hat OpenShift 1.15.1 Issued: 2025-03-13 The following advisories are available for the cert-manager Operator for Red Hat OpenShift 1.15.1: RHEA-Advisory-2733 RHEA-Advisory-2780 RHEA-Advisory-2821 RHEA-Advisory-2828 Version 1.15.1 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.15.5 . For more information, see the cert-manager project release notes for v1.15.5 . 9.2.1.1. New features and enhancements Integrating the cert-manager Operator for Red Hat OpenShift with Istio-CSR (Technology Preview) The cert-manager Operator for Red Hat OpenShift now supports the Istio-CSR. With this integration, cert-manager Operator's issuers can issue, sign, and renew certificates for mutual TLS (mTLS) communication. Red Hat OpenShift Service Mesh and Istio can now request these certificates directly from the cert-manager Operator. For more information, see Integrating the cert-manager Operator with Istio-CSR . 9.2.1.2. CVEs CVE-2024-9287 CVE-2024-45336 CVE-2024-45341 9.2.2. cert-manager Operator for Red Hat OpenShift 1.15.0 Issued: 2025-01-22 The following advisories are available for the cert-manager Operator for Red Hat OpenShift 1.15.0: RHEA-2025:0487 RHSA-2025:0535 RHSA-2025:0536 Version 1.15.0 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.15.4 . For more information, see the cert-manager project release notes for v1.15.4 . 9.2.2.1. New features and enhancements Scheduling overrides for cert-manager Operator for Red Hat OpenShift With this release, you can configure scheduling overrides for cert-manager Operator for Red Hat OpenShift, including the cert-manager controller, webhook, and CA injector. 
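As a brief illustration of the new capability (a minimal sketch that simply mirrors the detailed procedure in "Configuring scheduling overrides for cert-manager components" later in this chapter), the override is expressed on the CertManager resource, for example:
spec:
  controllerConfig:
    overrideScheduling:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ''   # schedule the cert-manager controller on control-plane nodes
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
Equivalent webhookConfig and cainjectorConfig stanzas apply the same settings to the webhook and CA injector deployments.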
Google CAS issuer The cert-manager Operator for Red Hat OpenShift now supports the Google Certificate Authority Service (CAS) issuer. The google-cas-issuer is an external issuer for cert-manager that automates certificate lifecycle management, including issuance and renewal, with CAS-managed private certificate authorities. Note The Google CAS issuer is validated only with version 0.9.0 and cert-manager Operator for Red Hat OpenShift version 1.15.0. These versions support tasks such as issuing, renewing, and managing certificates for the API server and ingress controller in OpenShift Container Platform clusters. Default installMode updated to AllNamespaces Starting from version 1.15.0, the default and recommended Operator Lifecycle Manager (OLM) installMode is AllNamespaces . Previously, the default was SingleNamespace . This change aligns with best practices for multi-namespace Operator management. For more information, see OCPBUGS-23406 . Redundant kube-rbac-proxy sidecar removed The Operator no longer includes the redundant kube-rbac-proxy sidecar container, reducing resource usage and complexity. For more information, see CM-436 . 9.2.2.2. CVEs CVE-2024-35255 CVE-2024-28180 CVE-2024-24783 CVE-2024-6104 CVE-2023-45288 CVE-2024-45337 CVE-2024-45338 9.2.3. cert-manager Operator for Red Hat OpenShift 1.14.0 Issued: 2024-07-08 The following advisory is available for the cert-manager Operator for Red Hat OpenShift 1.14.0: RHEA-2024:4360 Version 1.14.0 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.14.5 . For more information, see the cert-manager project release notes for v1.14.5 . 9.2.3.1. New features and enhancements FIPS compliance support With this release, FIPS mode is now automatically enabled for cert-manager Operator for Red Hat OpenShift. When installed on an OpenShift Container Platform cluster in FIPS mode, cert-manager Operator for Red Hat OpenShift ensures compatibility without affecting the cluster's FIPS support status. Securing routes with cert-manager managed certificates (Technology Preview) With this release, you can manage certificates referenced in Route resources by using the cert-manager Operator for Red Hat OpenShift. For more information, see Securing routes with the cert-manager Operator for Red Hat OpenShift . NCM issuer The cert-manager Operator for Red Hat OpenShift now supports the Nokia NetGuard Certificate Manager (NCM) issuer. The ncm-issuer is a cert-manager external issuer that integrates with the NCM PKI system using a Kubernetes controller to sign certificate requests. This integration streamlines the process of obtaining non-self-signed certificates for applications, ensuring their validity and keeping them updated. Note The NCM issuer is validated only with version 1.1.1 and the cert-manager Operator for Red Hat OpenShift version 1.14.0. This version handles tasks such as issuance, renewal, and managing certificates for the API server and ingress controller of OpenShift Container Platform clusters. 9.2.3.2. CVEs CVE-2023-45288 CVE-2024-28180 CVE-2020-8559 CVE-2024-26147 CVE-2024-24783 9.2.4. cert-manager Operator for Red Hat OpenShift 1.13.1 Issued: 2024-05-15 The following advisory is available for the cert-manager Operator for Red Hat OpenShift 1.13.1: RHEA-2024:2849 Version 1.13.1 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.13.6 . For more information, see the cert-manager project release notes for v1.13.6 . 9.2.4.1. 
CVEs CVE-2023-45288 CVE-2023-48795 CVE-2024-24783 9.2.5. cert-manager Operator for Red Hat OpenShift 1.13.0 Issued: 2024-01-16 The following advisory is available for the cert-manager Operator for Red Hat OpenShift 1.13.0: RHEA-2024:0259 Version 1.13.0 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.13.3 . For more information, see the cert-manager project release notes for v1.13.0 . 9.2.5.1. New features and enhancements You can now manage certificates for API Server and Ingress Controller by using the cert-manager Operator for Red Hat OpenShift. For more information, see Configuring certificates with an issuer . With this release, the scope of the cert-manager Operator for Red Hat OpenShift, which was previously limited to the OpenShift Container Platform on AMD64 architecture, has now been expanded to include support for managing certificates on OpenShift Container Platform running on IBM Z(R) ( s390x ), IBM Power(R) ( ppc64le ) and ARM64 architectures. With this release, you can use DNS over HTTPS (DoH) for performing the self-checks during the ACME DNS-01 challenge verification. The DNS self-check method can be controlled by using the command line flags, --dns01-recursive-nameservers-only and --dns01-recursive-nameservers . For more information, see Customizing cert-manager by overriding arguments from the cert-manager Operator API . 9.2.5.2. CVEs CVE-2023-39615 CVE-2023-3978 CVE-2023-37788 CVE-2023-29406 9.3. Installing the cert-manager Operator for Red Hat OpenShift Important The cert-manager Operator for Red Hat OpenShift version 1.15 or later supports the AllNamespaces , SingleNamespace , and OwnNamespace installation modes. Earlier versions, such as 1.14, support only the SingleNamespace and OwnNamespace installation modes. The cert-manager Operator for Red Hat OpenShift is not installed in OpenShift Container Platform by default. You can install the cert-manager Operator for Red Hat OpenShift by using the web console. 9.3.1. Installing the cert-manager Operator for Red Hat OpenShift 9.3.1.1. Installing the cert-manager Operator for Red Hat OpenShift by using the web console You can use the web console to install the cert-manager Operator for Red Hat OpenShift. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Enter cert-manager Operator for Red Hat OpenShift into the filter box. Select the cert-manager Operator for Red Hat OpenShift Select the cert-manager Operator for Red Hat OpenShift version from Version drop-down list, and click Install . Note See supported cert-manager Operator for Red Hat OpenShift versions in the following "Additional resources" section. On the Install Operator page: Update the Update channel , if necessary. The channel defaults to stable-v1 , which installs the latest stable release of the cert-manager Operator for Red Hat OpenShift. Choose the Installed Namespace for the Operator. The default Operator namespace is cert-manager-operator . If the cert-manager-operator namespace does not exist, it is created for you. Note During the installation, the OpenShift Container Platform web console allows you to select between AllNamespaces and SingleNamespace installation modes. 
For installations with cert-manager Operator for Red Hat OpenShift version 1.15.0 or later, it is recommended to choose the AllNamespaces installation mode. SingleNamespace and OwnNamespace support will remain for earlier versions but will be deprecated in future versions. Select an Update approval strategy. The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verification Navigate to Operators Installed Operators . Verify that cert-manager Operator for Red Hat OpenShift is listed with a Status of Succeeded in the cert-manager-operator namespace. Verify that cert-manager pods are up and running by entering the following command: USD oc get pods -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 3m39s cert-manager-cainjector-56cc5f9868-7g9z7 1/1 Running 0 4m5s cert-manager-webhook-d4f79d7f7-9dg9w 1/1 Running 0 4m9s You can use the cert-manager Operator for Red Hat OpenShift only after cert-manager pods are up and running. 9.3.1.2. Installing the cert-manager Operator for Red Hat OpenShift by using the CLI Prerequisites You have access to the cluster with cluster-admin privileges. Procedure Create a new project named cert-manager-operator by running the following command: USD oc new-project cert-manager-operator Create an OperatorGroup object: Create a YAML file, for example, operatorGroup.yaml , with the following content: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: targetNamespaces: - "cert-manager-operator" For cert-manager Operator for Red Hat OpenShift v1.15.0 or later, create a YAML file with the following content: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: targetNamespaces: [] spec: {} Note Starting from cert-manager Operator for Red Hat OpenShift version 1.15.0, it is recommended to install the Operator using the AllNamespaces OLM installMode . Older versions can continue using the SingleNamespace or OwnNamespace OLM installMode . Support for SingleNamespace and OwnNamespace will be deprecated in future versions. 
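Optionally, before applying the manifest, you can run a client-side dry run to confirm that the file parses as intended. This check is an optional addition and not part of the original procedure:
$ oc create -f operatorGroup.yaml --dry-run=client -o yaml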
Create the OperatorGroup object by running the following command: USD oc create -f operatorGroup.yaml Create a Subscription object: Create a YAML file, for example, subscription.yaml , that defines the Subscription object: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: channel: stable-v1 name: openshift-cert-manager-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic Create the Subscription object by running the following command: USD oc create -f subscription.yaml Verification Verify that the OLM subscription is created by running the following command: USD oc get subscription -n cert-manager-operator Example output NAME PACKAGE SOURCE CHANNEL openshift-cert-manager-operator openshift-cert-manager-operator redhat-operators stable-v1 Verify whether the Operator is successfully installed by running the following command: USD oc get csv -n cert-manager-operator Example output NAME DISPLAY VERSION REPLACES PHASE cert-manager-operator.v1.13.0 cert-manager Operator for Red Hat OpenShift 1.13.0 cert-manager-operator.v1.12.1 Succeeded Verify that the status cert-manager Operator for Red Hat OpenShift is Running by running the following command: USD oc get pods -n cert-manager-operator Example output NAME READY STATUS RESTARTS AGE cert-manager-operator-controller-manager-695b4d46cb-r4hld 2/2 Running 0 7m4s Verify that the status of cert-manager pods is Running by running the following command: USD oc get pods -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-58b7f649c4-dp6l4 1/1 Running 0 7m1s cert-manager-cainjector-5565b8f897-gx25h 1/1 Running 0 7m37s cert-manager-webhook-9bc98cbdd-f972x 1/1 Running 0 7m40s Additional resources Supported cert-manager Operator for Red Hat OpenShift versions 9.3.2. Understanding update channels of the cert-manager Operator for Red Hat OpenShift Update channels are the mechanism by which you can declare the version of your cert-manager Operator for Red Hat OpenShift in your cluster. The cert-manager Operator for Red Hat OpenShift offers the following update channels: stable-v1 stable-v1.y 9.3.2.1. stable-v1 channel The stable-v1 channel is the default and suggested channel while installing the cert-manager Operator for Red Hat OpenShift. The stable-v1 channel installs and updates the latest release version of the cert-manager Operator for Red Hat OpenShift. Select the stable-v1 channel if you want to use the latest stable release of the cert-manager Operator for Red Hat OpenShift. The stable-v1 channel offers the following update approval strategies: Automatic If you choose automatic updates for an installed cert-manager Operator for Red Hat OpenShift, a new version of the cert-manager Operator for Red Hat OpenShift is available in the stable-v1 channel. The Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. Manual If you select manual updates, when a newer version of the cert-manager Operator for Red Hat OpenShift is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the cert-manager Operator for Red Hat OpenShift updated to the new version. 9.3.2.2. stable-v1.y channel The y-stream version of the cert-manager Operator for Red Hat OpenShift installs updates from the stable-v1.y channels such as stable-v1.10 , stable-v1.11 , and stable-v1.12 . 
Select the stable-v1.y channel if you want to use the y-stream version and stay updated to the z-stream version of the cert-manager Operator for Red Hat OpenShift. The stable-v1.y channel offers the following update approval strategies: Automatic If you choose automatic updates for an installed cert-manager Operator for Red Hat OpenShift, a new z-stream version of the cert-manager Operator for Red Hat OpenShift is available in the stable-v1.y channel. OLM automatically upgrades the running instance of your Operator without human intervention. Manual If you select manual updates, when a newer version of the cert-manager Operator for Red Hat OpenShift is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the cert-manager Operator for Red Hat OpenShift updated to the new version of the z-stream releases. 9.3.3. Additional resources Adding Operators to a cluster Updating installed Operators 9.4. Configuring the egress proxy for the cert-manager Operator for Red Hat OpenShift If a cluster-wide egress proxy is configured in OpenShift Container Platform, Operator Lifecycle Manager (OLM) automatically configures Operators that it manages with the cluster-wide proxy. OLM automatically updates all of the Operator's deployments with the HTTP_PROXY , HTTPS_PROXY , NO_PROXY environment variables. You can inject any CA certificates that are required for proxying HTTPS connections into the cert-manager Operator for Red Hat OpenShift. 9.4.1. Injecting a custom CA certificate for the cert-manager Operator for Red Hat OpenShift If your OpenShift Container Platform cluster has the cluster-wide proxy enabled, you can inject any CA certificates that are required for proxying HTTPS connections into the cert-manager Operator for Red Hat OpenShift. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have enabled the cluster-wide proxy for OpenShift Container Platform. 
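For example, you can confirm that a cluster-wide proxy is configured before you begin. This is an optional check and not part of the original procedure:
$ oc get proxy/cluster -o jsonpath='{.spec.httpProxy}{"\n"}{.spec.httpsProxy}{"\n"}'
If both fields are empty, no cluster-wide proxy is configured and the CA injection described below is not required.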
Procedure Create a config map in the cert-manager namespace by running the following command: USD oc create configmap trusted-ca -n cert-manager Inject the CA bundle that is trusted by OpenShift Container Platform into the config map by running the following command: USD oc label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true -n cert-manager Update the deployment for the cert-manager Operator for Red Hat OpenShift to use the config map by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{"spec":{"config":{"env":[{"name":"TRUSTED_CA_CONFIGMAP_NAME","value":"trusted-ca"}]}}}' Verification Verify that the deployments have finished rolling out by running the following command: USD oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator && \ oc rollout status deployment/cert-manager -n cert-manager && \ oc rollout status deployment/cert-manager-webhook -n cert-manager && \ oc rollout status deployment/cert-manager-cainjector -n cert-manager Example output deployment "cert-manager-operator-controller-manager" successfully rolled out deployment "cert-manager" successfully rolled out deployment "cert-manager-webhook" successfully rolled out deployment "cert-manager-cainjector" successfully rolled out Verify that the CA bundle was mounted as a volume by running the following command: USD oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.'containers[0].volumeMounts'} Example output [{"mountPath":"/etc/pki/tls/certs/cert-manager-tls-ca-bundle.crt","name":"trusted-ca","subPath":"ca-bundle.crt"}] Verify that the source of the CA bundle is the trusted-ca config map by running the following command: USD oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.volumes} Example output [{"configMap":{"defaultMode":420,"name":"trusted-ca"},"name":"trusted-ca"}] 9.4.2. Additional resources Configuring proxy support in Operator Lifecycle Manager 9.5. Customizing cert-manager Operator API fields You can customize the cert-manager Operator for Red Hat OpenShift API fields by overriding environment variables and arguments. Warning To override unsupported arguments, you can add spec.unsupportedConfigOverrides section in the CertManager resource, but using spec.unsupportedConfigOverrides is unsupported. 9.5.1. Customizing cert-manager by overriding environment variables from the cert-manager Operator API You can override the supported environment variables for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. Procedure Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: overrideEnv: - name: HTTP_PROXY value: http://<proxy_url> 1 - name: HTTPS_PROXY value: https://<proxy_url> 2 - name: NO_PROXY value: <ignore_proxy_domains> 3 1 2 Replace <proxy_url> with the proxy server URL. 3 Replace <ignore_proxy_domains> with a comma separated list of domains. These domains are ignored by the proxy server. Save your changes and quit the text editor to apply your changes. 
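Alternatively, if you prefer a non-interactive workflow, you can apply the same override with a single patch. The following is a sketch that reuses the placeholder values from the example above and follows the oc patch pattern used elsewhere in this chapter:
$ oc patch certmanager.operator cluster --type=merge -p="
spec:
  controllerConfig:
    overrideEnv:
    - name: HTTP_PROXY
      value: http://<proxy_url>
    - name: HTTPS_PROXY
      value: https://<proxy_url>
    - name: NO_PROXY
      value: <ignore_proxy_domains>
"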
Verification Verify that the cert-manager controller pod is redeployed by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s Verify that environment variables are updated for the cert-manager pod by running the following command: USD oc get pod <redeployed_cert-manager_controller_pod> -n cert-manager -o yaml Example output env: ... - name: HTTP_PROXY value: http://<PROXY_URL> - name: HTTPS_PROXY value: https://<PROXY_URL> - name: NO_PROXY value: <IGNORE_PROXY_DOMAINS> 9.5.2. Customizing cert-manager by overriding arguments from the cert-manager Operator API You can override the supported arguments for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. Procedure Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: overrideArgs: - '--dns01-recursive-nameservers=<server_address>' 1 - '--dns01-recursive-nameservers-only' 2 - '--acme-http01-solver-nameservers=<host>:<port>' 3 - '--v=<verbosity_level>' 4 - '--metrics-listen-address=<host>:<port>' 5 - '--issuer-ambient-credentials' 6 webhookConfig: overrideArgs: - '--v=4' 7 cainjectorConfig: overrideArgs: - '--v=2' 8 1 Provide a comma-separated list of nameservers to query for the DNS-01 self check. The nameservers can be specified either as <host>:<port> , for example, 1.1.1.1:53 , or use DNS over HTTPS (DoH), for example, https://1.1.1.1/dns-query . 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the Automated Certificate Management Environment (ACME) HTTP01 self check. For example, --acme-http01-solver-nameservers=1.1.1.1:53 . 4 7 8 Specify to set the log level verbosity to determine the verbosity of log messages. 5 Specify the host and port for the metrics endpoint. The default value is --metrics-listen-address=0.0.0.0:9402 . 6 You must use the --issuer-ambient-credentials argument when configuring an ACME Issuer to solve DNS-01 challenges by using ambient credentials. Note DNS over HTTPS (DoH) is supported starting only from cert-manager Operator for Red Hat OpenShift version 1.13.0 and later. Save your changes and quit the text editor to apply your changes. Verification Verify that arguments are updated for cert-manager pods by running the following command: USD oc get pods -n cert-manager -o yaml Example output ... metadata: name: cert-manager-6d4b5d4c97-kldwl namespace: cert-manager ... spec: containers: - args: - --acme-http01-solver-nameservers=1.1.1.1:53 - --cluster-resource-namespace=USD(POD_NAMESPACE) - --dns01-recursive-nameservers=1.1.1.1:53 - --dns01-recursive-nameservers-only - --leader-election-namespace=kube-system - --max-concurrent-challenges=60 - --metrics-listen-address=0.0.0.0:9042 - --v=6 ... metadata: name: cert-manager-cainjector-866c4fd758-ltxxj namespace: cert-manager ... spec: containers: - args: - --leader-election-namespace=kube-system - --v=2 ... 
metadata: name: cert-manager-webhook-6d48f88495-c88gd namespace: cert-manager ... spec: containers: - args: ... - --v=4 9.5.3. Deleting a TLS secret automatically upon Certificate removal You can enable the --enable-certificate-owner-ref flag for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. The --enable-certificate-owner-ref flag sets the certificate resource as an owner of the secret where the TLS certificate is stored. Warning If you uninstall the cert-manager Operator for Red Hat OpenShift or delete certificate resources from the cluster, the secret is deleted automatically. This might cause network connectivity issues depending upon where the certificate TLS secret is being used. Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. You have installed version 1.12.0 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Check that the Certificate object and its secret are available by running the following command: USD oc get certificate Example output NAME READY SECRET AGE certificate-from-clusterissuer-route53-ambient True certificate-from-clusterissuer-route53-ambient 8h Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster # ... spec: # ... controllerConfig: overrideArgs: - '--enable-certificate-owner-ref' Save your changes and quit the text editor to apply your changes. Verification Verify that the --enable-certificate-owner-ref flag is updated for cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager -o yaml Example output # ... metadata: name: cert-manager-6e4b4d7d97-zmdnb namespace: cert-manager # ... spec: containers: - args: - --enable-certificate-owner-ref 9.5.4. Overriding CPU and memory limits for the cert-manager components After installing the cert-manager Operator for Red Hat OpenShift, you can configure the CPU and memory limits from the cert-manager Operator for Red Hat OpenShift API for the cert-manager components such as cert-manager controller, CA injector, and Webhook. Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. You have installed version 1.12.0 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Check that the deployments of the cert-manager controller, CA injector, and Webhook are available by entering the following command: USD oc get deployment -n cert-manager Example output NAME READY UP-TO-DATE AVAILABLE AGE cert-manager 1/1 1 1 53m cert-manager-cainjector 1/1 1 1 53m cert-manager-webhook 1/1 1 1 53m Before setting the CPU and memory limit, check the existing configuration for the cert-manager controller, CA injector, and Webhook by entering the following command: USD oc get deployment -n cert-manager -o yaml Example output # ... metadata: name: cert-manager namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-controller resources: {} 1 # ... metadata: name: cert-manager-cainjector namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-cainjector resources: {} 2 # ... metadata: name: cert-manager-webhook namespace: cert-manager # ... 
spec: template: spec: containers: - name: cert-manager-webhook resources: {} 3 # ... 1 2 3 The spec.resources field is empty by default. The cert-manager components do not have CPU and memory limits. To configure the CPU and memory limits for the cert-manager controller, CA injector, and Webhook, enter the following command: USD oc patch certmanager.operator cluster --type=merge -p=" spec: controllerConfig: overrideResources: limits: 1 cpu: 200m 2 memory: 64Mi 3 requests: 4 cpu: 10m 5 memory: 16Mi 6 webhookConfig: overrideResources: limits: 7 cpu: 200m 8 memory: 64Mi 9 requests: 10 cpu: 10m 11 memory: 16Mi 12 cainjectorConfig: overrideResources: limits: 13 cpu: 200m 14 memory: 64Mi 15 requests: 16 cpu: 10m 17 memory: 16Mi 18 " 1 Defines the maximum amount of CPU and memory that a single container in a cert-manager controller pod can request. 2 5 You can specify the CPU limit that a cert-manager controller pod can request. The default value is 10m . 3 6 You can specify the memory limit that a cert-manager controller pod can request. The default value is 32Mi . 4 Defines the amount of CPU and memory set by scheduler for the cert-manager controller pod. 7 Defines the maximum amount of CPU and memory that a single container in a CA injector pod can request. 8 11 You can specify the CPU limit that a CA injector pod can request. The default value is 10m . 9 12 You can specify the memory limit that a CA injector pod can request. The default value is 32Mi . 10 Defines the amount of CPU and memory set by scheduler for the CA injector pod. 13 Defines the maximum amount of CPU and memory Defines the maximum amount of CPU and memory that a single container in a Webhook pod can request. 14 17 You can specify the CPU limit that a Webhook pod can request. The default value is 10m . 15 18 You can specify the memory limit that a Webhook pod can request. The default value is 32Mi . 16 Defines the amount of CPU and memory set by scheduler for the Webhook pod. Example output certmanager.operator.openshift.io/cluster patched Verification Verify that the CPU and memory limits are updated for the cert-manager components: USD oc get deployment -n cert-manager -o yaml Example output # ... metadata: name: cert-manager namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-controller resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi # ... metadata: name: cert-manager-cainjector namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-cainjector resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi # ... metadata: name: cert-manager-webhook namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-webhook resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi # ... 9.5.5. Configuring scheduling overrides for cert-manager components You can configure the pod scheduling from the cert-manager Operator for Red Hat OpenShift API for the cert-manager Operator for Red Hat OpenShift components such as cert-manager controller, CA injector, and Webhook. Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. You have installed version 1.15.0 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Update the certmanager.operator custom resource to configure pod scheduling overrides for the desired components by running the following command. 
Use the overrideScheduling field under the controllerConfig , webhookConfig , or cainjectorConfig sections to define nodeSelector and tolerations settings. USD oc patch certmanager.operator cluster --type=merge -p=" spec: controllerConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 1 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule 2 webhookConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 3 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule 4 cainjectorConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 5 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule" 6 1 Defines the nodeSelector for the cert-manager controller deployment. 2 Defines the tolerations for the cert-manager controller deployment. 3 Defines the nodeSelector for the cert-manager webhook deployment. 4 Defines the tolerations for the cert-manager webhook deployment. 5 Defines the nodeSelector for the cert-manager cainjector deployment. 6 Defines the tolerations for the cert-manager cainjector deployment. Verification Verify pod scheduling settings for cert-manager pods: Check the deployments in the cert-manager namespace to confirm they have the correct nodeSelector and tolerations by running the following command: USD oc get pods -n cert-manager -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cert-manager-58d9c69db4-78mzp 1/1 Running 0 10m 10.129.0.36 ip-10-0-1-106.ec2.internal <none> <none> cert-manager-cainjector-85b6987c66-rhzf7 1/1 Running 0 11m 10.128.0.39 ip-10-0-1-136.ec2.internal <none> <none> cert-manager-webhook-7f54b4b858-29bsp 1/1 Running 0 11m 10.129.0.35 ip-10-0-1-106.ec2.internal <none> <none> Check the nodeSelector and tolerations settings applied to deployments by running the following command: USD oc get deployments -n cert-manager -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{.spec.template.spec.nodeSelector}{"\n"}{.spec.template.spec.tolerations}{"\n\n"}{end}' Example output cert-manager {"kubernetes.io/os":"linux","node-role.kubernetes.io/control-plane":""} [{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","operator":"Exists"}] cert-manager-cainjector {"kubernetes.io/os":"linux","node-role.kubernetes.io/control-plane":""} [{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","operator":"Exists"}] cert-manager-webhook {"kubernetes.io/os":"linux","node-role.kubernetes.io/control-plane":""} [{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","operator":"Exists"}] Verify pod scheduling events in the cert-manager namespace by running the following command: USD oc get events -n cert-manager --field-selector reason=Scheduled 9.6. Authenticating the cert-manager Operator for Red Hat OpenShift You can authenticate the cert-manager Operator for Red Hat OpenShift on the cluster by configuring the cloud credentials. 9.6.1. Authenticating on AWS Prerequisites You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. You have configured the Cloud Credential Operator to operate in mint or passthrough mode. 
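If you are not sure whether the Cloud Credential Operator is running in mint or passthrough mode, you can check the credentialsMode field on the CloudCredential resource before you continue. The following check is a minimal sketch; an empty value indicates that the default mode is in use:
USD oc get cloudcredential cluster -o jsonpath='{.spec.credentialsMode}{"\n"}'
If the output is Manual, your cluster most likely uses manual credentials management, and you should follow the AWS Security Token Service instructions in the next section instead.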
Procedure Create a CredentialsRequest resource YAML file, for example, sample-credential-request.yaml , as follows: apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "route53:GetChange" effect: Allow resource: "arn:aws:route53:::change/*" - action: - "route53:ChangeResourceRecordSets" - "route53:ListResourceRecordSets" effect: Allow resource: "arn:aws:route53:::hostedzone/*" - action: - "route53:ListHostedZonesByName" effect: Allow resource: "*" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager Create a CredentialsRequest resource by running the following command: USD oc create -f sample-credential-request.yaml Update the subscription object for cert-manager Operator for Red Hat OpenShift by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{"spec":{"config":{"env":[{"name":"CLOUD_CREDENTIALS_SECRET_NAME","value":"aws-creds"}]}}}' Verification Get the name of the redeployed cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s Verify that the cert-manager controller pod is updated with AWS credential volumes that are mounted under the path specified in mountPath by running the following command: USD oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml Example output ... spec: containers: - args: ... - mountPath: /.aws name: cloud-credentials ... volumes: ... - name: cloud-credentials secret: ... secretName: aws-creds 9.6.2. Authenticating with AWS Security Token Service Prerequisites You have extracted and prepared the ccoctl binary. You have configured an OpenShift Container Platform cluster with AWS STS by using the Cloud Credential Operator in manual mode. 
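Before you continue, you can confirm that the cluster is configured for short-term credentials. The following commands are a sketch: on an AWS STS cluster you would typically expect a non-empty service account issuer URL and a Manual credentials mode:
USD oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}{"\n"}'
USD oc get cloudcredential cluster -o jsonpath='{.spec.credentialsMode}{"\n"}'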
Procedure Create a directory to store a CredentialsRequest resource YAML file by running the following command: USD mkdir credentials-request Create a CredentialsRequest resource YAML file under the credentials-request directory, such as, sample-credential-request.yaml , by applying the following yaml: apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "route53:GetChange" effect: Allow resource: "arn:aws:route53:::change/*" - action: - "route53:ChangeResourceRecordSets" - "route53:ListResourceRecordSets" effect: Allow resource: "arn:aws:route53:::hostedzone/*" - action: - "route53:ListHostedZonesByName" effect: Allow resource: "*" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager Use the ccoctl tool to process CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name <user_defined_name> --region=<aws_region> \ --credentials-requests-dir=<path_to_credrequests_dir> \ --identity-provider-arn <oidc_provider_arn> --output-dir=<path_to_output_dir> Example output 2023/05/15 18:10:34 Role arn:aws:iam::XXXXXXXXXXXX:role/<user_defined_name>-cert-manager-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: <path_to_output_dir>/manifests/cert-manager-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role <user_defined_name>-cert-manager-aws-creds Copy the <aws_role_arn> from the output to use in the step. For example, "arn:aws:iam::XXXXXXXXXXXX:role/<user_defined_name>-cert-manager-aws-creds" Add the eks.amazonaws.com/role-arn="<aws_role_arn>" annotation to the service account by running the following command: USD oc -n cert-manager annotate serviceaccount cert-manager eks.amazonaws.com/role-arn="<aws_role_arn>" To create a new pod, delete the existing cert-manager controller pod by running the following command: USD oc delete pods -l app.kubernetes.io/name=cert-manager -n cert-manager The AWS credentials are applied to a new cert-manager controller pod within a minute. Verification Get the name of the updated cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s Verify that AWS credentials are updated by running the following command: USD oc set env -n cert-manager po/<cert_manager_controller_pod_name> --list Example output # pods/cert-manager-57f9555c54-vbcpg, container cert-manager-controller # POD_NAMESPACE from field path metadata.namespace AWS_ROLE_ARN=XXXXXXXXXXXX AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token Additional resources Configuring the Cloud Credential Operator utility 9.6.3. Authenticating on GCP Prerequisites You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. You have configured the Cloud Credential Operator to operate in mint or passthrough mode. 
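To confirm that the installed Operator meets the minimum version requirement, you can list the ClusterServiceVersion objects in the Operator namespace. This is a quick sketch; the exact CSV name depends on the installed version:
USD oc get csv -n cert-manager-operator
Check that the VERSION column reports 1.11.1 or later.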
Procedure Create a CredentialsRequest resource YAML file, such as, sample-credential-request.yaml by applying the following yaml: apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager Note The dns.admin role provides admin privileges to the service account for managing Google Cloud DNS resources. To ensure that the cert-manager runs with the service account that has the least privilege, you can create a custom role with the following permissions: dns.resourceRecordSets.* dns.changes.* dns.managedZones.list Create a CredentialsRequest resource by running the following command: USD oc create -f sample-credential-request.yaml Update the subscription object for cert-manager Operator for Red Hat OpenShift by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{"spec":{"config":{"env":[{"name":"CLOUD_CREDENTIALS_SECRET_NAME","value":"gcp-credentials"}]}}}' Verification Get the name of the redeployed cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s Verify that the cert-manager controller pod is updated with GCP credential volumes that are mounted under the path specified in mountPath by running the following command: USD oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml Example output spec: containers: - args: ... volumeMounts: ... - mountPath: /.config/gcloud name: cloud-credentials .... volumes: ... - name: cloud-credentials secret: ... items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials 9.6.4. Authenticating with GCP Workload Identity Prerequisites You extracted and prepared the ccoctl binary. You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. You have configured an OpenShift Container Platform cluster with GCP Workload Identity by using the Cloud Credential Operator in a manual mode. Procedure Create a directory to store a CredentialsRequest resource YAML file by running the following command: USD mkdir credentials-request In the credentials-request directory, create a YAML file that contains the following CredentialsRequest manifest: apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager Note The dns.admin role provides admin privileges to the service account for managing Google Cloud DNS resources. 
To ensure that the cert-manager runs with the service account that has the least privilege, you can create a custom role with the following permissions: dns.resourceRecordSets.* dns.changes.* dns.managedZones.list Use the ccoctl tool to process CredentialsRequest objects by running the following command: USD ccoctl gcp create-service-accounts \ --name <user_defined_name> --output-dir=<path_to_output_dir> \ --credentials-requests-dir=<path_to_credrequests_dir> \ --workload-identity-pool <workload_identity_pool> \ --workload-identity-provider <workload_identity_provider> \ --project <gcp_project_id> Example command USD ccoctl gcp create-service-accounts \ --name abcde-20230525-4bac2781 --output-dir=/home/outputdir \ --credentials-requests-dir=/home/credentials-requests \ --workload-identity-pool abcde-20230525-4bac2781 \ --workload-identity-provider abcde-20230525-4bac2781 \ --project openshift-gcp-devel Apply the secrets generated in the manifests directory of your cluster by running the following command: USD ls <path_to_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {} Update the subscription object for cert-manager Operator for Red Hat OpenShift by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{"spec":{"config":{"env":[{"name":"CLOUD_CREDENTIALS_SECRET_NAME","value":"gcp-credentials"}]}}}' Verification Get the name of the redeployed cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s Verify that the cert-manager controller pod is updated with GCP workload identity credential volumes that are mounted under the path specified in mountPath by running the following command: USD oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml Example output spec: containers: - args: ... volumeMounts: - mountPath: /var/run/secrets/openshift/serviceaccount name: bound-sa-token ... - mountPath: /.config/gcloud name: cloud-credentials ... volumes: - name: bound-sa-token projected: ... sources: - serviceAccountToken: audience: openshift ... path: token - name: cloud-credentials secret: ... items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials Additional resources Configuring the Cloud Credential Operator utility Manual mode with short-term credentials for components Default behavior of the Cloud Credential Operator 9.7. Configuring an ACME issuer The cert-manager Operator for Red Hat OpenShift supports using Automated Certificate Management Environment (ACME) CA servers, such as Let's Encrypt , to issue certificates. Explicit credentials are configured by specifying the secret details in the Issuer API object. Ambient credentials are extracted from the environment, metadata services, or local files which are not explicitly configured in the Issuer API object. Note The Issuer object is namespace scoped. It can only issue certificates from the same namespace. You can also use the ClusterIssuer object to issue certificates across all namespaces in the cluster. Example YAML file that defines the ClusterIssuer object apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: acme-cluster-issuer spec: acme: ... Note By default, you can use the ClusterIssuer object with ambient credentials. 
To use the Issuer object with ambient credentials, you must enable the --issuer-ambient-credentials setting for the cert-manager controller. 9.7.1. About ACME issuers The ACME issuer type for the cert-manager Operator for Red Hat OpenShift represents an Automated Certificate Management Environment (ACME) certificate authority (CA) server. ACME CA servers rely on a challenge to verify that a client owns the domain names that the certificate is being requested for. If the challenge is successful, the cert-manager Operator for Red Hat OpenShift can issue the certificate. If the challenge fails, the cert-manager Operator for Red Hat OpenShift does not issue the certificate. Note Private DNS zones are not supported with Let's Encrypt and internet ACME servers. 9.7.1.1. Supported ACME challenges types The cert-manager Operator for Red Hat OpenShift supports the following challenge types for ACME issuers: HTTP-01 With the HTTP-01 challenge type, you provide a computed key at an HTTP URL endpoint in your domain. If the ACME CA server can get the key from the URL, it can validate you as the owner of the domain. For more information, see HTTP01 in the upstream cert-manager documentation. Note HTTP-01 requires that the Let's Encrypt servers can access the route of the cluster. If an internal or private cluster is behind a proxy, the HTTP-01 validations for certificate issuance fail. The HTTP-01 challenge is restricted to port 80. For more information, see HTTP-01 challenge (Let's Encrypt). DNS-01 With the DNS-01 challenge type, you provide a computed key at a DNS TXT record. If the ACME CA server can get the key by DNS lookup, it can validate you as the owner of the domain. For more information, see DNS01 in the upstream cert-manager documentation. 9.7.1.2. Supported DNS-01 providers The cert-manager Operator for Red Hat OpenShift supports the following DNS-01 providers for ACME issuers: Amazon Route 53 Azure DNS Note The cert-manager Operator for Red Hat OpenShift does not support using Microsoft Entra ID pod identities to assign a managed identity to a pod. Google Cloud DNS Webhook Red Hat tests and supports DNS providers using an external webhook with cert-manager on OpenShift Container Platform. The following DNS providers are tested and supported with OpenShift Container Platform: cert-manager-webhook-ibmcis Note Using a DNS provider that is not listed might work with OpenShift Container Platform, but the provider was not tested by Red Hat and therefore is not supported by Red Hat. 9.7.2. Configuring an ACME issuer to solve HTTP-01 challenges You can use cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve HTTP-01 challenges. This procedure uses Let's Encrypt as the ACME CA server. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a service that you want to expose. In this procedure, the service is named sample-workload . Procedure Create an ACME cluster issuer. Create a YAML file that defines the ClusterIssuer object: Example acme-cluster-issuer.yaml file apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-staging 1 spec: acme: preferredChain: "" privateKeySecretRef: name: <secret_for_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - http01: ingress: ingressClassName: openshift-default 4 1 Provide a name for the cluster issuer. 2 Replace <secret_private_key> with the name of secret to store the ACME account private key in. 
3 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 4 Specify the Ingress class. Optional: If you create the object without specifying ingressClassName , use the following command to patch the existing ingress: USD oc patch ingress/<ingress-name> --type=merge --patch '{"spec":{"ingressClassName":"openshift-default"}}' -n <namespace> Create the ClusterIssuer object by running the following command: USD oc create -f acme-cluster-issuer.yaml Create an Ingress to expose the service of the user workload. Create a YAML file that defines a Namespace object: Example namespace.yaml file apiVersion: v1 kind: Namespace metadata: name: my-ingress-namespace 1 1 Specify the namespace for the Ingress. Create the Namespace object by running the following command: USD oc create -f namespace.yaml Create a YAML file that defines the Ingress object: Example ingress.yaml file apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: sample-ingress 1 namespace: my-ingress-namespace 2 annotations: cert-manager.io/cluster-issuer: letsencrypt-staging 3 spec: ingressClassName: openshift-default 4 tls: - hosts: - <hostname> 5 secretName: sample-tls 6 rules: - host: <hostname> 7 http: paths: - path: / pathType: Prefix backend: service: name: sample-workload 8 port: number: 80 1 Specify the name of the Ingress. 2 Specify the namespace that you created for the Ingress. 3 Specify the cluster issuer that you created. 4 Specify the Ingress class. 5 Replace <hostname> with the Subject Alternative Name (SAN) to be associated with the certificate. This name is used to add DNS names to the certificate. 6 Specify the secret that stores the certificate. 7 Replace <hostname> with the hostname. You can use the <host_name>.<cluster_ingress_domain> syntax to take advantage of the *.<cluster_ingress_domain> wildcard DNS record and serving certificate for the cluster. For example, you might use apps.<cluster_base_domain> . Otherwise, you must ensure that a DNS record exists for the chosen hostname. 8 Specify the name of the service to expose. This example uses a service named sample-workload . Create the Ingress object by running the following command: USD oc create -f ingress.yaml 9.7.3. Configuring an ACME issuer by using explicit credentials for AWS Route53 You can use cert-manager Operator for Red Hat OpenShift to set up an Automated Certificate Management Environment (ACME) issuer to solve DNS-01 challenges by using explicit credentials on AWS. This procedure uses Let's Encrypt as the ACME certificate authority (CA) server and shows how to solve DNS-01 challenges with Amazon Route 53. Prerequisites You must provide the explicit accessKeyID and secretAccessKey credentials. For more information, see Route53 in the upstream cert-manager documentation. Note You can use Amazon Route 53 with explicit credentials in an OpenShift Container Platform cluster that is not running on AWS. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... 
controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. Optional: Create a namespace for the issuer: USD oc new-project <issuer_namespace> Create a secret to store your AWS credentials in by running the following command: USD oc create secret generic aws-secret --from-literal=awsSecretAccessKey=<aws_secret_access_key> \ 1 -n my-issuer-namespace 1 Replace <aws_secret_access_key> with your AWS secret access key. Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: "<email_address>" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: accessKeyID: <aws_key_id> 6 hostedZoneID: <hosted_zone_id> 7 region: <region_name> 8 secretAccessKeySecretRef: name: "aws-secret" 9 key: "awsSecretAccessKey" 10 1 Provide a name for the issuer. 2 Specify the namespace that you created for the issuer. 3 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 4 Replace <email_address> with your email address. 5 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 6 Replace <aws_key_id> with your AWS key ID. 7 Replace <hosted_zone_id> with your hosted zone ID. 8 Replace <region_name> with the AWS region name. For example, us-east-1 . 9 Specify the name of the secret you created. 10 Specify the key in the secret you created that stores your AWS secret access key. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 9.7.4. Configuring an ACME issuer by using ambient credentials on AWS You can use cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve DNS-01 challenges by using ambient credentials on AWS. This procedure uses Let's Encrypt as the ACME CA server and shows how to solve DNS-01 challenges with Amazon Route 53. Prerequisites If your cluster is configured to use the AWS Security Token Service (STS), you followed the instructions from the Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift for the AWS Security Token Service cluster section. If your cluster does not use the AWS STS, you followed the instructions from the Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift on AWS section. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... 
controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. Optional: Create a namespace for the issuer: USD oc new-project <issuer_namespace> Modify the CertManager resource to add the --issuer-ambient-credentials argument: USD oc patch certmanager/cluster \ --type=merge \ -p='{"spec":{"controllerConfig":{"overrideArgs":["--issuer-ambient-credentials"]}}}' Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: "<email_address>" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: hostedZoneID: <hosted_zone_id> 6 region: us-east-1 1 Provide a name for the issuer. 2 Specify the namespace that you created for the issuer. 3 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 4 Replace <email_address> with your email address. 5 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 6 Replace <hosted_zone_id> with your hosted zone ID. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 9.7.5. Configuring an ACME issuer by using explicit credentials for GCP Cloud DNS You can use the cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve DNS-01 challenges by using explicit credentials on GCP. This procedure uses Let's Encrypt as the ACME CA server and shows how to solve DNS-01 challenges with Google CloudDNS. Prerequisites You have set up Google Cloud service account with a desired role for Google CloudDNS. For more information, see Google CloudDNS in the upstream cert-manager documentation. Note You can use Google CloudDNS with explicit credentials in an OpenShift Container Platform cluster that is not running on GCP. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. 
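Optionally, you can confirm that the override arguments were propagated to the cert-manager controller deployment before continuing. The following check is a sketch that assumes the controller is the first container in the cert-manager deployment:
USD oc get deployment cert-manager -n cert-manager -o jsonpath='{.spec.template.spec.containers[0].args}{"\n"}'
After the Operator reconciles the change, the output should include the --dns01-recursive-nameservers-only and --dns01-recursive-nameservers arguments.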
Optional: Create a namespace for the issuer: USD oc new-project my-issuer-namespace Create a secret to store your GCP credentials by running the following command: USD oc create secret generic clouddns-dns01-solver-svc-acct --from-file=service_account.json=<path/to/gcp_service_account.json> -n my-issuer-namespace Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: "" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: cloudDNS: project: <project_id> 5 serviceAccountSecretRef: name: clouddns-dns01-solver-svc-acct 6 key: service_account.json 7 1 Provide a name for the issuer. 2 Replace <issuer_namespace> with your issuer namespace. 3 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 4 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 5 Replace <project_id> with the name of the GCP project that contains the Cloud DNS zone. 6 Specify the name of the secret you created. 7 Specify the key in the secret you created that stores your GCP secret access key. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 9.7.6. Configuring an ACME issuer by using ambient credentials on GCP You can use the cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve DNS-01 challenges by using ambient credentials on GCP. This procedure uses Let's Encrypt as the ACME CA server and shows how to solve DNS-01 challenges with Google CloudDNS. Prerequisites If your cluster is configured to use GCP Workload Identity, you followed the instructions from the Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift with GCP Workload Identity section. If your cluster does not use GCP Workload Identity, you followed the instructions from the Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift on GCP section. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. 
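Because this procedure relies on ambient credentials, you can first confirm that the gcp-credentials secret created while configuring cloud credentials is present in the cert-manager namespace. A minimal check:
USD oc get secret gcp-credentials -n cert-manager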
Optional: Create a namespace for the issuer: USD oc new-project <issuer_namespace> Modify the CertManager resource to add the --issuer-ambient-credentials argument: USD oc patch certmanager/cluster \ --type=merge \ -p='{"spec":{"controllerConfig":{"overrideArgs":["--issuer-ambient-credentials"]}}}' Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> spec: acme: preferredChain: "" privateKeySecretRef: name: <secret_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - dns01: cloudDNS: project: <gcp_project_id> 4 1 Provide a name for the issuer. 2 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 3 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 4 Replace <gcp_project_id> with the name of the GCP project that contains the Cloud DNS zone. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 9.7.7. Configuring an ACME issuer by using explicit credentials for Microsoft Azure DNS You can use cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve DNS-01 challenges by using explicit credentials on Microsoft Azure. This procedure uses Let's Encrypt as the ACME CA server and shows how to solve DNS-01 challenges with Azure DNS. Prerequisites You have set up a service principal with desired role for Azure DNS. For more information, see Azure DNS in the upstream cert-manager documentation. Note You can follow this procedure for an OpenShift Container Platform cluster that is not running on Microsoft Azure. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. Optional: Create a namespace for the issuer: USD oc new-project my-issuer-namespace Create a secret to store your Azure credentials in by running the following command: USD oc create secret generic <secret_name> --from-literal=<azure_secret_access_key_name>=<azure_secret_access_key_value> \ 1 2 3 -n my-issuer-namespace 1 Replace <secret_name> with your secret name. 2 Replace <azure_secret_access_key_name> with your Azure secret access key name. 3 Replace <azure_secret_access_key_value> with your Azure secret key. 
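Before creating the issuer, you can confirm that the secret exists and that its key name matches the value you plan to reference from the issuer's clientSecretSecretRef. The following sketch uses oc describe, which lists key names without printing the values:
USD oc describe secret <secret_name> -n my-issuer-namespace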
Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme-dns01-azuredns-issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: "" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: azureDNS: clientID: <azure_client_id> 5 clientSecretSecretRef: name: <secret_name> 6 key: <azure_secret_access_key_name> 7 subscriptionID: <azure_subscription_id> 8 tenantID: <azure_tenant_id> 9 resourceGroupName: <azure_dns_zone_resource_group> 10 hostedZoneName: <azure_dns_zone> 11 environment: AzurePublicCloud 1 Provide a name for the issuer. 2 Replace <issuer_namespace> with your issuer namespace. 3 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 4 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 5 Replace <azure_client_id> with your Azure client ID. 6 Replace <secret_name> with a name of the client secret. 7 Replace <azure_secret_access_key_name> with the client secret key name. 8 Replace <azure_subscription_id> with your Azure subscription ID. 9 Replace <azure_tenant_id> with your Azure tenant ID. 10 Replace <azure_dns_zone_resource_group> with the name of the Azure DNS zone resource group. 11 Replace <azure_dns_zone> with the name of Azure DNS zone. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 9.7.8. Additional resources Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift for the AWS Security Token Service cluster Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift on AWS Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift with GCP Workload Identity Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift on GCP 9.8. Configuring certificates with an issuer By using the cert-manager Operator for Red Hat OpenShift, you can manage certificates, handling tasks such as renewal and issuance, for workloads within the cluster, as well as components interacting externally to the cluster. 9.8.1. Creating certificates for user workloads Prerequisites You have access to the cluster with cluster-admin privileges. You have installed the cert-manager Operator for Red Hat OpenShift. Procedure Create an issuer. For more information, see "Configuring an issuer" in the "Additional resources" section. Create a certificate: Create a YAML file, for example, certificate.yaml , that defines the Certificate object: Example certificate.yaml file apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: <issuer_namespace> 2 spec: isCA: false commonName: '<common_name>' 3 secretName: <secret_name> 4 dnsNames: - "<domain_name>" 5 issuerRef: name: <issuer_name> 6 kind: Issuer 1 Provide a name for the certificate. 2 Specify the namespace of the issuer. 3 Specify the common name (CN). 4 Specify the name of the secret to create that contains the certificate. 5 Specify the domain name. 6 Specify the name of the issuer. 
Create the Certificate object by running the following command: USD oc create -f certificate.yaml Verification Verify that the certificate is created and ready to use by running the following command: USD oc get certificate -w -n <issuer_namespace> Once certificate is in Ready status, workloads on your cluster can start using the generated certificate secret. 9.8.2. Creating certificates for the API server Prerequisites You have access to the cluster with cluster-admin privileges. You have installed version 1.13.0 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Create an issuer. For more information, see "Configuring an issuer" in the "Additional resources" section. Create a certificate: Create a YAML file, for example, certificate.yaml , that defines the Certificate object: Example certificate.yaml file apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-config spec: isCA: false commonName: "api.<cluster_base_domain>" 2 secretName: <secret_name> 3 dnsNames: - "api.<cluster_base_domain>" 4 issuerRef: name: <issuer_name> 5 kind: Issuer 1 Provide a name for the certificate. 2 Specify the common name (CN). 3 Specify the name of the secret to create that contains the certificate. 4 Specify the DNS name of the API server. 5 Specify the name of the issuer. Create the Certificate object by running the following command: USD oc create -f certificate.yaml Add the API server named certificate. For more information, see "Adding an API server named certificate" section in the "Additional resources" section. Note To ensure the certificates are updated, run the oc login command again after the certificate is created. Verification Verify that the certificate is created and ready to use by running the following command: USD oc get certificate -w -n openshift-config Once certificate is in Ready status, API server on your cluster can start using the generated certificate secret. 9.8.3. Creating certificates for the Ingress Controller Prerequisites You have access to the cluster with cluster-admin privileges. You have installed version 1.13.0 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Create an issuer. For more information, see "Configuring an issuer" in the "Additional resources" section. Create a certificate: Create a YAML file, for example, certificate.yaml , that defines the Certificate object: Example certificate.yaml file apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-ingress spec: isCA: false commonName: "apps.<cluster_base_domain>" 2 secretName: <secret_name> 3 dnsNames: - "apps.<cluster_base_domain>" 4 - "*.apps.<cluster_base_domain>" 5 issuerRef: name: <issuer_name> 6 kind: Issuer 1 Provide a name for the certificate. 2 Specify the common name (CN). 3 Specify the name of the secret to create that contains the certificate. 4 5 Specify the DNS name of the ingress. 6 Specify the name of the issuer. Create the Certificate object by running the following command: USD oc create -f certificate.yaml Replace the default ingress certificate. For more information, see "Replacing the default ingress certificate" section in the "Additional resources" section. Verification Verify that the certificate is created and ready to use by running the following command: USD oc get certificate -w -n openshift-ingress Once certificate is in Ready status, Ingress Controller on your cluster can start using the generated certificate secret. 9.8.4. 
Additional resources Configuring an issuer Supported issuer types Configuring an ACME issuer Adding an API server named certificate Replacing the default ingress certificate 9.9. Securing routes with the cert-manager Operator for Red Hat OpenShift In the OpenShift Container Platform, the route API is extended to provide a configurable option to reference TLS certificates via secrets. With the Creating a route with externally managed certificate Technology Preview feature enabled, you can minimize errors from manual intervention, streamline the certificate management process, and enable the OpenShift Container Platform router to promptly serve the referenced certificate. Important Securing routes with the cert-manager Operator for Red Hat OpenShift is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 9.9.1. Configuring certificates to secure routes in your cluster The following steps demonstrate the process of utilizing the cert-manager Operator for Red Hat OpenShift with the Let's Encrypt ACME HTTP-01 challenge type to secure the route resources in your OpenShift Container Platform cluster. Prerequisites You have installed version 1.14.0 or later of the cert-manager Operator for Red Hat OpenShift. You have enabled the RouteExternalCertificate feature gate. You have the create and update permissions on the routes/custom-host sub-resource. You have a Service resource that you want to expose. Procedure Create a Route resource for your Service resource using edge TLS termination and a custom hostname by running the following command. The hostname will be used while creating a Certificate resource in the following steps. USD oc create route edge <route_name> \ 1 --service=<service_name> \ 2 --hostname=<hostname> \ 3 --namespace=<namespace> 4 1 Specify your route's name. 2 Specify the service you want to expose. 3 Specify the hostname of your route. 4 Specify the namespace where your route is located. Create an Issuer to configure the HTTP-01 solver by running the following command. For other ACME issuer types, see "Configuring ACME an issuer". Example Issuer.yaml file USD oc create -f - << EOF apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: letsencrypt-acme namespace: <namespace> 1 spec: acme: server: https://acme-v02.api.letsencrypt.org/directory privateKeySecretRef: name: letsencrypt-acme-account-key solvers: - http01: ingress: ingressClassName: openshift-default EOF 1 Specify the namespace where the Issuer is located. It should be the same as your route's namespace. Create a Certificate object for the route by running the following command. The secretName specifies the TLS secret that is going to be issued and managed by cert-manager and will also be referenced in your route in the following steps. 
USD oc create -f - << EOF apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: example-route-cert namespace: <namespace> 1 spec: commonName: <hostname> 2 dnsNames: - <hostname> 3 usages: - server auth issuerRef: kind: Issuer name: letsencrypt-acme secretName: <secret_name> 4 EOF 1 Specify the namespace where the Certificate resource is located. It should be the same as your route's namespace. 2 Specify the certificate's common name using the hostname of the route. 3 Add the hostname of your route to the certificate's DNS names. 4 Specify the name of the secret that contains the certificate. Create a Role to provide the router service account permissions to read the referenced secret by using the following command: USD oc create role secret-reader \ --verb=get,list,watch \ --resource=secrets \ --resource-name=<secret_name> \ 1 --namespace=<namespace> 2 1 Specify the name of the secret that you want to grant access to. It should be consistent with your secretName specified in the Certificate resource. 2 Specify the namespace where both your secret and route are located. Create a RoleBinding resource to bind the router service account with the newly created Role resource by using the following command: USD oc create rolebinding secret-reader-binding \ --role=secret-reader \ --serviceaccount=openshift-ingress:router \ --namespace=<namespace> 1 1 Specify the namespace where both your secret and route are located. Update your route's .spec.tls.externalCertificate field to reference the previously created secret and use the certificate issued by cert-manager by using the following command: USD oc patch route <route_name> \ 1 -n <namespace> \ 2 --type=merge \ -p '{"spec":{"tls":{"externalCertificate":{"name":"<secret_name>"}}}}' 3 1 Specify the route name. 2 Specify the namespace where both your secret and route are located. 3 Specify the name of the secret that contains the certificate. Verification Verify that the certificate is created and ready to use by running the following command: USD oc get certificate -n <namespace> 1 USD oc get secret -n <namespace> 2 1 2 Specify the namespace where both your secret and route reside. Verify that the router is using the referenced external certificate by running the following command. The command should return with the status code 200 OK . USD curl -IsS https://<hostname> 1 1 Specify the hostname of your route. Verify the server certificate's subject , subjectAltName and issuer are all as expected from the curl verbose outputs by running the following command: USD curl -v https://<hostname> 1 1 Specify the hostname of your route. The route is now successfully secured by the certificate from the referenced secret issued by cert-manager. cert-manager will automatically manage the certificate's lifecycle. 9.9.2. Additional resources Creating a route with externally managed certificate Configuring an ACME issuer 9.10. Integrating the cert-manager Operator for Red Hat OpenShift with Istio-CSR Important Istio-CSR integration for cert-manager Operator for Red Hat OpenShift is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The cert-manager Operator for Red Hat OpenShift provides enhanced support for securing workloads and control plane components in Red Hat OpenShift Service Mesh or Istio. This includes support for certificates enabling mutual TLS (mTLS), which are signed, delivered, and renewed using cert-manager issuers. You can secure Istio workloads and control plane components by using the cert-manager Operator for Red Hat OpenShift managed Istio-CSR agent. With this Istio-CSR integration, Istio can now obtain certificates from the cert-manager Operator for Red Hat OpenShift, simplifying security and certificate management. 9.10.1. Installing the Istio-CSR agent through cert-manager Operator for Red Hat OpenShift 9.10.1.1. Enabling the Istio-CSR feature Use this procedure to enable the Istio-CSR feature in the cert-manager Operator for Red Hat OpenShift. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Update the subscription object for the cert-manager Operator for Red Hat OpenShift to set the UNSUPPORTED_ADDON_FEATURES environment variable, which enables the Istio-CSR feature, by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{"spec":{"config":{"env":[{"name":"UNSUPPORTED_ADDON_FEATURES","value":"IstioCSR=true"}]}}}' Verification Verify that the deployments have finished rolling out by running the following command: USD oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator Example output deployment "cert-manager-operator-controller-manager" successfully rolled out 9.10.1.2. Creating a root CA issuer for the Istio-CSR agent Use this procedure to create the root CA issuer for the Istio-CSR agent. Note Other supported issuers can be used, except for the ACME issuer, which is not supported. For more information, see "cert-manager Operator for Red Hat OpenShift issuer providers". Create a YAML file, for example, issuer.yaml , that defines the Issuer and Certificate objects: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer 1 metadata: name: selfsigned namespace: <istio_project_name> 2 spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: <istio_project_name> spec: isCA: true duration: 87600h # 10 years secretName: istio-ca commonName: istio-ca privateKey: algorithm: ECDSA size: 256 subject: organizations: - cluster.local - cert-manager issuerRef: name: selfsigned kind: Issuer 3 group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: istio-ca namespace: <istio_project_name> 4 spec: ca: secretName: istio-ca 1 3 Specify the Issuer or ClusterIssuer . 2 4 Specify the name of the Istio project. Verification Verify that the Issuer is created and ready to use by running the following command: USD oc get issuer istio-ca -n <istio_project_name> Example output NAME READY AGE istio-ca True 3m Additional resources cert-manager Operator for Red Hat OpenShift issuer providers 9.10.1.3. Creating the IstioCSR custom resource Use this procedure to install the Istio-CSR agent through the cert-manager Operator for Red Hat OpenShift. Prerequisites You have access to the cluster with cluster-admin privileges. You have enabled the Istio-CSR feature. You have created the Issuer or ClusterIssuer resources required for generating certificates for the Istio-CSR agent.
Note If you are using Issuer resource, create the Issuer and Certificate resources in the Red Hat OpenShift Service Mesh or Istiod namespace. Certificate requests are generated in the same namespace, and role-based access control (RBAC) is configured accordingly. Procedure Create a new project for installing Istio-CSR by running the following command. You can use an existing project and skip this step. USD oc new-project <istio_csr_project_name> Create the IstioCSR custom resource to enable Istio-CSR agent managed by the cert-manager Operator for Red Hat OpenShift for processing Istio workload and control plane certificate signing requests. Note Only one IstioCSR custom resource (CR) is supported at a time. If multiple IstioCSR CRs are created, only one will be active. Use the status sub-resource of IstioCSR to check if a resource is unprocessed. If multiple IstioCSR CRs are created simultaneously, none will be processed. If multiple IstioCSR CRs are created sequentially, only the first one will be processed. To prevent new requests from being rejected, delete any unprocessed IstioCSR CRs. The Operator does not automatically remove objects created for IstioCSR . If an active IstioCSR resource is deleted and a new one is created in a different namespace without removing the deployments, multiple istio-csr deployments may remain active. This behavior is not recommended and is not supported. Create a YAML file, for example, istiocsr.yaml , that defines the IstioCSR object: Example IstioCSR.yaml file apiVersion: operator.openshift.io/v1alpha1 kind: IstioCSR metadata: name: default namespace: <istio_csr_project_name> spec: IstioCSRConfig: certManager: issuerRef: name: istio-ca 1 kind: Issuer 2 group: cert-manager.io istiodTLSConfig: trustDomain: cluster.local istio: namespace: istio-system 1 Specify the Issuer or ClusterIssuer name. It should be the same name as the CA issuer defined in the issuer.yaml file. 2 Specify the Issuer or ClusterIssuer kind. It should be the same kind as the CA issuer defined in the issuer.yaml file. Create the IstioCSR custom resource by running the following command: USD oc create -f IstioCSR.yaml Verification Verify that the Istio-CSR deployment is ready by running the following command: USD oc get deployment -n <istio_csr_project_name> Example output NAME READY UP-TO-DATE AVAILABLE AGE cert-manager-istio-csr 1/1 1 1 24s Verify that the Istio-CSR pods are running by running the following command: USD oc get pod -n <istio_csr_project_name> Example output NAME READY STATUS RESTARTS AGE cert-manager-istio-csr-5c979f9b7c-bv57w 1/1 Running 0 45s Verify that the Istio-CSR pod is not reporting any errors in the logs by running the following command: USD oc -n <istio_csr_project_name> logs <istio_csr_pod_name> Verify that the cert-manager Operator for Red Hat OpenShift pod is not reporting any errors by running the following command: USD oc -n cert-manager-operator logs <cert_manager_operator_pod_name> 9.10.2. Uninstalling the Istio-CSR agent managed by cert-manager Operator for Red Hat OpenShift Use this procedure to uninstall the Istio-CSR agent managed by cert-manager Operator for Red Hat OpenShift. Prerequisites You have access to the cluster with cluster-admin privileges. You have enabled the Istio-CSR feature. You have created the IstioCSR custom resource. 
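Before removing anything, it can help to confirm which IstioCSR resource is active and to review its status. The following commands are a sketch; the resource name and namespace are the ones used when the custom resource was created:
USD oc get istiocsrs.operator.openshift.io -n <istio_csr_project_name>
USD oc get istiocsrs.operator.openshift.io default -n <istio_csr_project_name> -o yaml
Inspect the status section of the output to see whether the resource was processed.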
Procedure Remove the IstioCSR custom resource by running the following command: USD oc -n <istio_csr_project_name> delete istiocsrs.operator.openshift.io default Remove related resources: Important To avoid disrupting any Red Hat OpenShift Service Mesh or Istio components, ensure that no component is referencing the Istio-CSR service or the certificates issued for Istio before removing the following resources. List the cluster-scoped resources by running the following command and save the names of the listed resources for later reference: USD oc get clusterrolebindings,clusterroles -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr" List the resources in the namespace where Istio-CSR is deployed by running the following command and save the names of the listed resources for later reference: USD oc get certificate,deployments,services,serviceaccounts -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr" -n <istio_csr_project_name> List the resources in the namespaces where Red Hat OpenShift Service Mesh or Istio is deployed by running the following command and save the names of the listed resources for later reference: USD oc get roles,rolebindings -l "app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr" -n <istio_csr_project_name> For each resource listed in the previous steps, delete the resource by running the following command: USD oc -n <istio_csr_project_name> delete <resource_type>/<resource_name> Repeat this process until all of the related resources have been deleted. 9.10.3. Upgrading the cert-manager Operator for Red Hat OpenShift with Istio-CSR feature enabled When the Istio-CSR TechPreview feature gate is enabled, the Operator cannot be upgraded. To upgrade to the next available version, you must uninstall the cert-manager Operator for Red Hat OpenShift and remove all Istio-CSR resources before reinstalling it. 9.11. Monitoring cert-manager Operator for Red Hat OpenShift You can expose controller metrics for the cert-manager Operator for Red Hat OpenShift in the format provided by the Prometheus Operator. 9.11.1. Enabling monitoring by using a service monitor for the cert-manager Operator for Red Hat OpenShift You can enable monitoring and metrics collection for the cert-manager Operator for Red Hat OpenShift by using a service monitor to perform the custom metrics scraping. Prerequisites You have access to the cluster with cluster-admin privileges. The cert-manager Operator for Red Hat OpenShift is installed.
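The service monitor created in the following procedure scrapes the metrics port exposed by the cert-manager controller service. You can confirm that the service and its port are present first; this is a sketch, and the port name is expected to match the tcp-prometheus-servicemonitor endpoint referenced by the ServiceMonitor:
USD oc get service cert-manager -n cert-manager -o jsonpath='{.spec.ports[*].name}{"\n"}'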
Procedure Add the label to enable cluster monitoring by running the following command: $ oc label namespace cert-manager openshift.io/cluster-monitoring=true Create a service monitor: Create a YAML file that defines the Role , RoleBinding , and ServiceMonitor objects: Example monitoring.yaml file apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: prometheus-k8s namespace: cert-manager rules: - apiGroups: - "" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: prometheus-k8s namespace: cert-manager roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: cert-manager app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager name: cert-manager namespace: cert-manager spec: endpoints: - interval: 30s port: tcp-prometheus-servicemonitor scheme: http selector: matchLabels: app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager Create the Role , RoleBinding , and ServiceMonitor objects by running the following command: $ oc create -f monitoring.yaml Additional resources Setting up metrics collection for user-defined projects 9.11.2. Querying metrics for the cert-manager Operator for Red Hat OpenShift After you have enabled monitoring for the cert-manager Operator for Red Hat OpenShift, you can query its metrics by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the cert-manager Operator for Red Hat OpenShift. You have enabled monitoring and metrics collection for the cert-manager Operator for Red Hat OpenShift. Procedure From the OpenShift Container Platform web console, navigate to Observe Metrics . Add a query by using one of the following formats: Specify the endpoints: {instance="<endpoint>"} 1 1 Replace <endpoint> with the value of the endpoint for the cert-manager service. You can find the endpoint value by running the following command: oc describe service cert-manager -n cert-manager . Specify the tcp-prometheus-servicemonitor port: {endpoint="tcp-prometheus-servicemonitor"} 9.12. Configuring log levels for cert-manager and the cert-manager Operator for Red Hat OpenShift To troubleshoot issues with the cert-manager components and the cert-manager Operator for Red Hat OpenShift, you can configure the log level verbosity. Note To use different log levels for different cert-manager components, see Customizing cert-manager Operator API fields . 9.12.1. Setting a log level for cert-manager You can set a log level for cert-manager to determine the verbosity of log messages. Prerequisites You have access to the cluster with cluster-admin privileges. You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Edit the CertManager resource by running the following command: $ oc edit certmanager.operator cluster Set the log level value by editing the spec.logLevel section: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager ... spec: logLevel: <log_level> 1 1 The valid log level values for the CertManager resource are Normal , Debug , Trace , and TraceAll .
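If you prefer a non-interactive change over oc edit , the same field can be set with a patch. The following one-liner is a sketch rather than part of the documented procedure; it reuses the cluster-scoped CertManager resource shown above, and Debug is only an example value:
$ oc patch certmanager.operator cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'
The guidance that follows on choosing between Normal , Debug , Trace , and TraceAll applies equally to values set this way.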
To audit logs and perform common operations when there are no issues, set logLevel to Normal . To troubleshoot a minor issue by viewing verbose logs, set logLevel to Debug . To troubleshoot a major issue by viewing more verbose logs, you can set logLevel to Trace . To troubleshoot serious issues, set logLevel to TraceAll . The default logLevel is Normal . Note TraceAll generates a huge amount of logs. After setting logLevel to TraceAll , you might experience performance issues. Save your changes and quit the text editor to apply them. After applying the changes, the verbosity level for the cert-manager components controller, CA injector, and webhook is updated. 9.12.2. Setting a log level for the cert-manager Operator for Red Hat OpenShift You can set a log level for the cert-manager Operator for Red Hat OpenShift to determine the verbosity of the operator log messages. Prerequisites You have access to the cluster with cluster-admin privileges. You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Update the subscription object for cert-manager Operator for Red Hat OpenShift to provide the verbosity level for the operator logs by running the following command: $ oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{"spec":{"config":{"env":[{"name":"OPERATOR_LOG_LEVEL","value":"v"}]}}}' 1 1 Replace v with the desired log level number. The valid values for v can range from 1 to 10 . The default value is 2 . Verification The cert-manager Operator pod is redeployed. Verify that the log level of the cert-manager Operator for Red Hat OpenShift is updated by running the following command: $ oc set env deploy/cert-manager-operator-controller-manager -n cert-manager-operator --list | grep -e OPERATOR_LOG_LEVEL -e container Example output # deployments/cert-manager-operator-controller-manager, container kube-rbac-proxy OPERATOR_LOG_LEVEL=9 # deployments/cert-manager-operator-controller-manager, container cert-manager-operator OPERATOR_LOG_LEVEL=9 Verify that the log level of the cert-manager Operator for Red Hat OpenShift is updated by running the oc logs command: $ oc logs deploy/cert-manager-operator-controller-manager -n cert-manager-operator 9.12.3. Additional resources Customizing cert-manager Operator API fields 9.13. Uninstalling the cert-manager Operator for Red Hat OpenShift You can remove the cert-manager Operator for Red Hat OpenShift from OpenShift Container Platform by uninstalling the Operator and removing its related resources. 9.13.1. Uninstalling the cert-manager Operator for Red Hat OpenShift You can uninstall the cert-manager Operator for Red Hat OpenShift by using the web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. The cert-manager Operator for Red Hat OpenShift is installed. Procedure Log in to the OpenShift Container Platform web console. Uninstall the cert-manager Operator for Red Hat OpenShift. Navigate to Operators Installed Operators . Click the Options menu next to the cert-manager Operator for Red Hat OpenShift entry and click Uninstall Operator . In the confirmation dialog, click Uninstall . 9.13.2. Removing cert-manager Operator for Red Hat OpenShift resources After you uninstall the cert-manager Operator for Red Hat OpenShift, you can optionally remove its associated resources from your cluster.
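Before removing anything, it can help to inventory what the Operator and its operands left behind. The following read-only commands are a sketch, not part of the documented procedure; they assume the default cert-manager namespace, and the grep filter is only an approximation:
$ oc get deployments,services,serviceaccounts -n cert-manager
$ oc get crd | grep -iE 'cert-manager|certmanager'
Compare the output with the resources named in the removal procedure that follows.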
Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Remove the deployments of the cert-manager components, such as cert-manager , cainjector , and webhook , present in the cert-manager namespace. Click the Project drop-down menu to see a list of all available projects, and select the cert-manager project. Navigate to Workloads Deployments . Select the deployment that you want to delete. Click the Actions drop-down menu, and select Delete Deployment to see a confirmation dialog box. Click Delete to delete the deployment. Alternatively, delete the deployments of the cert-manager components such as cert-manager , cainjector , and webhook present in the cert-manager namespace by using the command-line interface (CLI). $ oc delete deployment -n cert-manager -l app.kubernetes.io/instance=cert-manager Optional: Remove the custom resource definitions (CRDs) that were installed by the cert-manager Operator for Red Hat OpenShift: Navigate to Administration CustomResourceDefinitions . Enter certmanager in the Name field to filter the CRDs. Click the Options menu next to each of the following CRDs, and select Delete Custom Resource Definition : Certificate CertificateRequest CertManager ( operator.openshift.io ) Challenge ClusterIssuer Issuer Order Optional: Remove the cert-manager-operator namespace. Navigate to Administration Namespaces . Click the Options menu next to the cert-manager-operator namespace and select Delete Namespace . In the confirmation dialog, enter cert-manager-operator in the field and click Delete . | [
"oc get pods -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 3m39s cert-manager-cainjector-56cc5f9868-7g9z7 1/1 Running 0 4m5s cert-manager-webhook-d4f79d7f7-9dg9w 1/1 Running 0 4m9s",
"oc new-project cert-manager-operator",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: targetNamespaces: - \"cert-manager-operator\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: targetNamespaces: [] spec: {}",
"oc create -f operatorGroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: channel: stable-v1 name: openshift-cert-manager-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic",
"oc create -f subscription.yaml",
"oc get subscription -n cert-manager-operator",
"NAME PACKAGE SOURCE CHANNEL openshift-cert-manager-operator openshift-cert-manager-operator redhat-operators stable-v1",
"oc get csv -n cert-manager-operator",
"NAME DISPLAY VERSION REPLACES PHASE cert-manager-operator.v1.13.0 cert-manager Operator for Red Hat OpenShift 1.13.0 cert-manager-operator.v1.12.1 Succeeded",
"oc get pods -n cert-manager-operator",
"NAME READY STATUS RESTARTS AGE cert-manager-operator-controller-manager-695b4d46cb-r4hld 2/2 Running 0 7m4s",
"oc get pods -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-58b7f649c4-dp6l4 1/1 Running 0 7m1s cert-manager-cainjector-5565b8f897-gx25h 1/1 Running 0 7m37s cert-manager-webhook-9bc98cbdd-f972x 1/1 Running 0 7m40s",
"oc create configmap trusted-ca -n cert-manager",
"oc label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true -n cert-manager",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}]}}}'",
"oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator && rollout status deployment/cert-manager -n cert-manager && rollout status deployment/cert-manager-webhook -n cert-manager && rollout status deployment/cert-manager-cainjector -n cert-manager",
"deployment \"cert-manager-operator-controller-manager\" successfully rolled out deployment \"cert-manager\" successfully rolled out deployment \"cert-manager-webhook\" successfully rolled out deployment \"cert-manager-cainjector\" successfully rolled out",
"oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.'containers[0].volumeMounts'}",
"[{\"mountPath\":\"/etc/pki/tls/certs/cert-manager-tls-ca-bundle.crt\",\"name\":\"trusted-ca\",\"subPath\":\"ca-bundle.crt\"}]",
"oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.volumes}",
"[{\"configMap\":{\"defaultMode\":420,\"name\":\"trusted-ca\"},\"name\":\"trusted-ca\"}]",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideEnv: - name: HTTP_PROXY value: http://<proxy_url> 1 - name: HTTPS_PROXY value: https://<proxy_url> 2 - name: NO_PROXY value: <ignore_proxy_domains> 3",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s",
"oc get pod <redeployed_cert-manager_controller_pod> -n cert-manager -o yaml",
"env: - name: HTTP_PROXY value: http://<PROXY_URL> - name: HTTPS_PROXY value: https://<PROXY_URL> - name: NO_PROXY value: <IGNORE_PROXY_DOMAINS>",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideArgs: - '--dns01-recursive-nameservers=<server_address>' 1 - '--dns01-recursive-nameservers-only' 2 - '--acme-http01-solver-nameservers=<host>:<port>' 3 - '--v=<verbosity_level>' 4 - '--metrics-listen-address=<host>:<port>' 5 - '--issuer-ambient-credentials' 6 webhookConfig: overrideArgs: - '--v=4' 7 cainjectorConfig: overrideArgs: - '--v=2' 8",
"oc get pods -n cert-manager -o yaml",
"metadata: name: cert-manager-6d4b5d4c97-kldwl namespace: cert-manager spec: containers: - args: - --acme-http01-solver-nameservers=1.1.1.1:53 - --cluster-resource-namespace=USD(POD_NAMESPACE) - --dns01-recursive-nameservers=1.1.1.1:53 - --dns01-recursive-nameservers-only - --leader-election-namespace=kube-system - --max-concurrent-challenges=60 - --metrics-listen-address=0.0.0.0:9042 - --v=6 metadata: name: cert-manager-cainjector-866c4fd758-ltxxj namespace: cert-manager spec: containers: - args: - --leader-election-namespace=kube-system - --v=2 metadata: name: cert-manager-webhook-6d48f88495-c88gd namespace: cert-manager spec: containers: - args: - --v=4",
"oc get certificate",
"NAME READY SECRET AGE certificate-from-clusterissuer-route53-ambient True certificate-from-clusterissuer-route53-ambient 8h",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideArgs: - '--enable-certificate-owner-ref'",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager -o yaml",
"metadata: name: cert-manager-6e4b4d7d97-zmdnb namespace: cert-manager spec: containers: - args: - --enable-certificate-owner-ref",
"oc get deployment -n cert-manager",
"NAME READY UP-TO-DATE AVAILABLE AGE cert-manager 1/1 1 1 53m cert-manager-cainjector 1/1 1 1 53m cert-manager-webhook 1/1 1 1 53m",
"oc get deployment -n cert-manager -o yaml",
"metadata: name: cert-manager namespace: cert-manager spec: template: spec: containers: - name: cert-manager-controller resources: {} 1 metadata: name: cert-manager-cainjector namespace: cert-manager spec: template: spec: containers: - name: cert-manager-cainjector resources: {} 2 metadata: name: cert-manager-webhook namespace: cert-manager spec: template: spec: containers: - name: cert-manager-webhook resources: {} 3",
"oc patch certmanager.operator cluster --type=merge -p=\" spec: controllerConfig: overrideResources: limits: 1 cpu: 200m 2 memory: 64Mi 3 requests: 4 cpu: 10m 5 memory: 16Mi 6 webhookConfig: overrideResources: limits: 7 cpu: 200m 8 memory: 64Mi 9 requests: 10 cpu: 10m 11 memory: 16Mi 12 cainjectorConfig: overrideResources: limits: 13 cpu: 200m 14 memory: 64Mi 15 requests: 16 cpu: 10m 17 memory: 16Mi 18 \"",
"certmanager.operator.openshift.io/cluster patched",
"oc get deployment -n cert-manager -o yaml",
"metadata: name: cert-manager namespace: cert-manager spec: template: spec: containers: - name: cert-manager-controller resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi metadata: name: cert-manager-cainjector namespace: cert-manager spec: template: spec: containers: - name: cert-manager-cainjector resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi metadata: name: cert-manager-webhook namespace: cert-manager spec: template: spec: containers: - name: cert-manager-webhook resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi",
"oc patch certmanager.operator cluster --type=merge -p=\" spec: controllerConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 1 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule 2 webhookConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 3 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule 4 cainjectorConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 5 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule\" 6",
"oc get pods -n cert-manager -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cert-manager-58d9c69db4-78mzp 1/1 Running 0 10m 10.129.0.36 ip-10-0-1-106.ec2.internal <none> <none> cert-manager-cainjector-85b6987c66-rhzf7 1/1 Running 0 11m 10.128.0.39 ip-10-0-1-136.ec2.internal <none> <none> cert-manager-webhook-7f54b4b858-29bsp 1/1 Running 0 11m 10.129.0.35 ip-10-0-1-106.ec2.internal <none> <none>",
"oc get deployments -n cert-manager -o jsonpath='{range .items[*]}{.metadata.name}{\"\\n\"}{.spec.template.spec.nodeSelector}{\"\\n\"}{.spec.template.spec.tolerations}{\"\\n\\n\"}{end}'",
"cert-manager {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}] cert-manager-cainjector {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}] cert-manager-webhook {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}]",
"oc get events -n cert-manager --field-selector reason=Scheduled",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"route53:GetChange\" effect: Allow resource: \"arn:aws:route53:::change/*\" - action: - \"route53:ChangeResourceRecordSets\" - \"route53:ListResourceRecordSets\" effect: Allow resource: \"arn:aws:route53:::hostedzone/*\" - action: - \"route53:ListHostedZonesByName\" effect: Allow resource: \"*\" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager",
"oc create -f sample-credential-request.yaml",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"aws-creds\"}]}}}'",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s",
"oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml",
"spec: containers: - args: - mountPath: /.aws name: cloud-credentials volumes: - name: cloud-credentials secret: secretName: aws-creds",
"mkdir credentials-request",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"route53:GetChange\" effect: Allow resource: \"arn:aws:route53:::change/*\" - action: - \"route53:ChangeResourceRecordSets\" - \"route53:ListResourceRecordSets\" effect: Allow resource: \"arn:aws:route53:::hostedzone/*\" - action: - \"route53:ListHostedZonesByName\" effect: Allow resource: \"*\" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager",
"ccoctl aws create-iam-roles --name <user_defined_name> --region=<aws_region> --credentials-requests-dir=<path_to_credrequests_dir> --identity-provider-arn <oidc_provider_arn> --output-dir=<path_to_output_dir>",
"2023/05/15 18:10:34 Role arn:aws:iam::XXXXXXXXXXXX:role/<user_defined_name>-cert-manager-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: <path_to_output_dir>/manifests/cert-manager-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role <user_defined_name>-cert-manager-aws-creds",
"oc -n cert-manager annotate serviceaccount cert-manager eks.amazonaws.com/role-arn=\"<aws_role_arn>\"",
"oc delete pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s",
"oc set env -n cert-manager po/<cert_manager_controller_pod_name> --list",
"pods/cert-manager-57f9555c54-vbcpg, container cert-manager-controller POD_NAMESPACE from field path metadata.namespace AWS_ROLE_ARN=XXXXXXXXXXXX AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager",
"oc create -f sample-credential-request.yaml",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"gcp-credentials\"}]}}}'",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s",
"oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml",
"spec: containers: - args: volumeMounts: - mountPath: /.config/gcloud name: cloud-credentials . volumes: - name: cloud-credentials secret: items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials",
"mkdir credentials-request",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager",
"ccoctl gcp create-service-accounts --name <user_defined_name> --output-dir=<path_to_output_dir> --credentials-requests-dir=<path_to_credrequests_dir> --workload-identity-pool <workload_identity_pool> --workload-identity-provider <workload_identity_provider> --project <gcp_project_id>",
"ccoctl gcp create-service-accounts --name abcde-20230525-4bac2781 --output-dir=/home/outputdir --credentials-requests-dir=/home/credentials-requests --workload-identity-pool abcde-20230525-4bac2781 --workload-identity-provider abcde-20230525-4bac2781 --project openshift-gcp-devel",
"ls <path_to_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"gcp-credentials\"}]}}}'",
"oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager",
"NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s",
"oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml",
"spec: containers: - args: volumeMounts: - mountPath: /var/run/secrets/openshift/serviceaccount name: bound-sa-token - mountPath: /.config/gcloud name: cloud-credentials volumes: - name: bound-sa-token projected: sources: - serviceAccountToken: audience: openshift path: token - name: cloud-credentials secret: items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials",
"apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: acme-cluster-issuer spec: acme:",
"apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-staging 1 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_for_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - http01: ingress: ingressClassName: openshift-default 4",
"oc patch ingress/<ingress-name> --type=merge --patch '{\"spec\":{\"ingressClassName\":\"openshift-default\"}}' -n <namespace>",
"oc create -f acme-cluster-issuer.yaml",
"apiVersion: v1 kind: Namespace metadata: name: my-ingress-namespace 1",
"oc create -f namespace.yaml",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: sample-ingress 1 namespace: my-ingress-namespace 2 annotations: cert-manager.io/cluster-issuer: letsencrypt-staging 3 spec: ingressClassName: openshift-default 4 tls: - hosts: - <hostname> 5 secretName: sample-tls 6 rules: - host: <hostname> 7 http: paths: - path: / pathType: Prefix backend: service: name: sample-workload 8 port: number: 80",
"oc create -f ingress.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project <issuer_namespace>",
"oc create secret generic aws-secret --from-literal=awsSecretAccessKey=<aws_secret_access_key> \\ 1 -n my-issuer-namespace",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: \"<email_address>\" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: accessKeyID: <aws_key_id> 6 hostedZoneID: <hosted_zone_id> 7 region: <region_name> 8 secretAccessKeySecretRef: name: \"aws-secret\" 9 key: \"awsSecretAccessKey\" 10",
"oc create -f issuer.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project <issuer_namespace>",
"oc patch certmanager/cluster --type=merge -p='{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--issuer-ambient-credentials\"]}}}'",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: \"<email_address>\" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: hostedZoneID: <hosted_zone_id> 6 region: us-east-1",
"oc create -f issuer.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project my-issuer-namespace",
"oc create secret generic clouddns-dns01-solver-svc-acct --from-file=service_account.json=<path/to/gcp_service_account.json> -n my-issuer-namespace",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: cloudDNS: project: <project_id> 5 serviceAccountSecretRef: name: clouddns-dns01-solver-svc-acct 6 key: service_account.json 7",
"oc create -f issuer.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project <issuer_namespace>",
"oc patch certmanager/cluster --type=merge -p='{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--issuer-ambient-credentials\"]}}}'",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - dns01: cloudDNS: project: <gcp_project_id> 4",
"oc create -f issuer.yaml",
"oc edit certmanager cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3",
"oc new-project my-issuer-namespace",
"oc create secret generic <secret_name> --from-literal=<azure_secret_access_key_name>=<azure_secret_access_key_value> \\ 1 2 3 -n my-issuer-namespace",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme-dns01-azuredns-issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: azureDNS: clientID: <azure_client_id> 5 clientSecretSecretRef: name: <secret_name> 6 key: <azure_secret_access_key_name> 7 subscriptionID: <azure_subscription_id> 8 tenantID: <azure_tenant_id> 9 resourceGroupName: <azure_dns_zone_resource_group> 10 hostedZoneName: <azure_dns_zone> 11 environment: AzurePublicCloud",
"oc create -f issuer.yaml",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: <issuer_namespace> 2 spec: isCA: false commonName: '<common_name>' 3 secretName: <secret_name> 4 dnsNames: - \"<domain_name>\" 5 issuerRef: name: <issuer_name> 6 kind: Issuer",
"oc create -f certificate.yaml",
"oc get certificate -w -n <issuer_namespace>",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-config spec: isCA: false commonName: \"api.<cluster_base_domain>\" 2 secretName: <secret_name> 3 dnsNames: - \"api.<cluster_base_domain>\" 4 issuerRef: name: <issuer_name> 5 kind: Issuer",
"oc create -f certificate.yaml",
"oc get certificate -w -n openshift-config",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-ingress spec: isCA: false commonName: \"apps.<cluster_base_domain>\" 2 secretName: <secret_name> 3 dnsNames: - \"apps.<cluster_base_domain>\" 4 - \"*.apps.<cluster_base_domain>\" 5 issuerRef: name: <issuer_name> 6 kind: Issuer",
"oc create -f certificate.yaml",
"oc get certificate -w -n openshift-ingress",
"oc create route edge <route_name> \\ 1 --service=<service_name> \\ 2 --hostname=<hostname> \\ 3 --namespace=<namespace> 4",
"oc create -f - << EOF apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: letsencrypt-acme namespace: <namespace> 1 spec: acme: server: https://acme-v02.api.letsencrypt.org/directory privateKeySecretRef: name: letsencrypt-acme-account-key solvers: - http01: ingress: ingressClassName: openshift-default EOF",
"oc create -f - << EOF apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: example-route-cert namespace: <namespace> 1 spec: commonName: <hostname> 2 dnsNames: - <hostname> 3 usages: - server auth issuerRef: kind: Issuer name: letsencrypt-acme secretName: <secret_name> 4 EOF",
"oc create role secret-reader --verb=get,list,watch --resource=secrets --resource-name=<secret_name> \\ 1 --namespace=<namespace> 2",
"oc create rolebinding secret-reader-binding --role=secret-reader --serviceaccount=openshift-ingress:router --namespace=<namespace> 1",
"oc patch route <route_name> \\ 1 -n <namespace> \\ 2 --type=merge -p '{\"spec\":{\"tls\":{\"externalCertificate\":{\"name\":\"<secret_name>\"}}}}' 3",
"oc get certificate -n <namespace> 1 oc get secret -n <namespace> 2",
"curl -IsS https://<hostname> 1",
"curl -v https://<hostname> 1",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"UNSUPPORTED_ADDON_FEATURES\",\"value\":\"IstioCSR=true\"}]}}}'",
"oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator",
"deployment \"cert-manager-operator-controller-manager\" successfully rolled out",
"apiVersion: cert-manager.io/v1 kind: Issuer 1 metadata: name: selfsigned namespace: <istio_project_name> 2 spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: <istio_project_name> spec: isCA: true duration: 87600h # 10 years secretName: istio-ca commonName: istio-ca privateKey: algorithm: ECDSA size: 256 subject: organizations: - cluster.local - cert-manager issuerRef: name: selfsigned kind: Issuer 3 group: cert-manager.io --- kind: Issuer metadata: name: istio-ca namespace: <istio_project_name> 4 spec: ca: secretName: istio-ca",
"oc get issuer istio-ca -n <istio_project_name>",
"NAME READY AGE istio-ca True 3m",
"oc new-project <istio_csr_project_name>",
"apiVersion: operator.openshift.io/v1alpha1 kind: IstioCSR metadata: name: default namespace: <istio_csr_project_name> spec: IstioCSRConfig: certManager: issuerRef: name: istio-ca 1 kind: Issuer 2 group: cert-manager.io istiodTLSConfig: trustDomain: cluster.local istio: namespace: istio-system",
"oc create -f IstioCSR.yaml",
"oc get deployment -n <istio_csr_project_name>",
"NAME READY UP-TO-DATE AVAILABLE AGE cert-manager-istio-csr 1/1 1 1 24s",
"oc get pod -n <istio_csr_project_name>",
"NAME READY STATUS RESTARTS AGE cert-manager-istio-csr-5c979f9b7c-bv57w 1/1 Running 0 45s",
"oc -n <istio_csr_project_name> logs <istio_csr_pod_name>",
"oc -n cert-manager-operator logs <cert_manager_operator_pod_name>",
"oc -n <istio-csr_project_name> delete istiocsrs.operator.openshift.io default",
"oc get clusterrolebindings,clusterroles -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\"",
"oc get certificate,deployments,services,serviceaccounts -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\" -n <istio_csr_project_name>",
"oc get roles,rolebindings -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\" -n <istio_csr_project_name>",
"oc -n <istio_csr_project_name> delete <resource_type>/<resource_name>",
"oc label namespace cert-manager openshift.io/cluster-monitoring=true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: prometheus-k8s namespace: cert-manager rules: - apiGroups: - \"\" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: prometheus-k8s namespace: cert-manager roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: cert-manager app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager name: cert-manager namespace: cert-manager spec: endpoints: - interval: 30s port: tcp-prometheus-servicemonitor scheme: http selector: matchLabels: app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager",
"oc create -f monitoring.yaml",
"{instance=\"<endpoint>\"} 1",
"{endpoint=\"tcp-prometheus-servicemonitor\"}",
"oc edit certmanager.operator cluster",
"apiVersion: operator.openshift.io/v1alpha1 kind: CertManager spec: logLevel: <log_level> 1",
"oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"OPERATOR_LOG_LEVEL\",\"value\":\"v\"}]}}}' 1",
"oc set env deploy/cert-manager-operator-controller-manager -n cert-manager-operator --list | grep -e OPERATOR_LOG_LEVEL -e container",
"deployments/cert-manager-operator-controller-manager, container kube-rbac-proxy OPERATOR_LOG_LEVEL=9 deployments/cert-manager-operator-controller-manager, container cert-manager-operator OPERATOR_LOG_LEVEL=9",
"oc logs deploy/cert-manager-operator-controller-manager -n cert-manager-operator",
"oc delete deployment -n cert-manager -l app.kubernetes.io/instance=cert-manager"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_and_compliance/cert-manager-operator-for-red-hat-openshift |
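As a CLI footnote to the resource-removal procedure in this chapter: the optional console steps for deleting the CRDs and the cert-manager-operator namespace can also be performed with oc . The commands below are illustrative only; the grep filter is an assumption and can match more or fewer CRDs than the documented list, and deleting a CRD also deletes any remaining custom resources of that type, so review the matched list before deleting anything:
$ oc get crd -o name | grep -iE 'cert-manager|certmanager'
$ oc get crd -o name | grep -iE 'cert-manager|certmanager' | xargs oc delete
$ oc delete namespace cert-manager-operator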
Chapter 1. OpenShift Container Platform registry overview | Chapter 1. OpenShift Container Platform registry overview OpenShift Container Platform can build images from your source code, deploy them, and manage their lifecycle. It provides an internal, integrated container image registry that can be deployed in your OpenShift Container Platform environment to locally manage images. This overview contains reference information and links for registries commonly used with OpenShift Container Platform, with a focus on the internal image registry. 1.1. Integrated OpenShift Container Platform registry OpenShift Container Platform provides a built-in container image registry that runs as a standard workload on the cluster. The registry is configured and managed by an infrastructure Operator. It provides an out-of-the-box solution for users to manage the images that run their workloads, and runs on top of the existing cluster infrastructure. This registry can be scaled up or down like any other cluster workload and does not require specific infrastructure provisioning. In addition, it is integrated into the cluster user authentication and authorization system, which means that access to create and retrieve images is controlled by defining user permissions on the image resources. The registry is typically used as a publication target for images built on the cluster, as well as being a source of images for workloads running on the cluster. When a new image is pushed to the registry, the cluster is notified of the new image and other components can react to and consume the updated image. Image data is stored in two locations. The actual image data is stored in a configurable storage location, such as cloud storage or a filesystem volume. The image metadata, which is exposed by the standard cluster APIs and is used to perform access control, is stored as standard API resources, specifically images and imagestreams. Additional resources Image Registry Operator in OpenShift Container Platform 1.2. Third-party registries OpenShift Container Platform can create containers using images from third-party registries, but it is unlikely that these registries offer the same image notification support as the integrated OpenShift Container Platform registry. In this situation, OpenShift Container Platform will fetch tags from the remote registry upon imagestream creation. To refresh the fetched tags, run oc import-image <stream> . When new images are detected, the previously described build and deployment reactions occur. 1.2.1. Authentication OpenShift Container Platform can communicate with registries to access private image repositories using credentials supplied by the user. This allows OpenShift Container Platform to push and pull images to and from private repositories. 1.2.1.1. Registry authentication with Podman Some container image registries require access authorization. Podman is an open source tool for managing containers and container images and interacting with image registries. You can use Podman to authenticate your credentials, pull the registry image, and store local images in a local file system. The following is a generic example of authenticating the registry with Podman. Procedure Use the Red Hat Ecosystem Catalog to search for specific container images from the Red Hat Repository and select the required image. Click Get this image to find the command for your container image. 
Log in by running the following command and entering your username and password to authenticate: $ podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password> Download the image and save it locally by running the following command: $ podman pull registry.redhat.io/<repository_name> 1.3. Red Hat Quay registries If you need an enterprise-quality container image registry, Red Hat Quay is available both as a hosted service and as software you can install in your own data center or cloud environment. Advanced registry features in Red Hat Quay include geo-replication, image scanning, and the ability to roll back images. Visit the Quay.io site to set up your own hosted Quay registry account. After that, follow the Quay Tutorial to log in to the Quay registry and start managing your images. You can access your Red Hat Quay registry from OpenShift Container Platform like any remote container image registry. Additional resources Red Hat Quay product documentation 1.4. Authentication enabled Red Hat registry All container images available through the Container images section of the Red Hat Ecosystem Catalog are hosted on an image registry, registry.redhat.io . The registry, registry.redhat.io , requires authentication for access to images and hosted content on OpenShift Container Platform. Following the move to the new registry, the existing registry will be available for a period of time. Note OpenShift Container Platform pulls images from registry.redhat.io , so you must configure your cluster to use it. The new registry uses standard OAuth mechanisms for authentication, with the following methods: Authentication token. Tokens, which are generated by administrators, are service accounts that give systems the ability to authenticate against the container image registry. Service accounts are not affected by changes in user accounts, so the token authentication method is reliable and resilient. This is the only supported authentication option for production clusters. Web username and password. This is the standard set of credentials you use to log in to resources such as access.redhat.com . While it is possible to use this authentication method with OpenShift Container Platform, it is not supported for production deployments. Restrict this authentication method to stand-alone projects outside OpenShift Container Platform. You can use podman login with your credentials, either username and password or authentication token, to access content on the new registry. All imagestreams point to the new registry, which uses the installation pull secret to authenticate. You must place your credentials in either of the following places: openshift namespace . Your credentials must exist in the openshift namespace so that the imagestreams in the openshift namespace can import images. Your host . Your credentials must exist on your host because Kubernetes uses the credentials from your host when it pulls images. Additional resources Registry service accounts | [
"podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>",
"podman pull registry.redhat.io/<repository_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/registry/registry-overview |
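As a follow-up to the registry authentication notes above, the following commands sketch how registry.redhat.io credentials are typically made available to a project and how fetched tags are refreshed. The secret name and the <project> placeholder are illustrative and do not come from this guide:
$ oc create secret docker-registry redhat-registry --docker-server=registry.redhat.io --docker-username=<your_registry_account_username> --docker-password=<your_registry_account_password> -n <project>
$ oc secrets link default redhat-registry --for=pull -n <project>
$ oc import-image <stream> --from=registry.redhat.io/<repository_name> --confirm -n <project>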
Appendix D. Ceph File System mirrors configuration reference | Appendix D. Ceph File System mirrors configuration reference This section lists configuration options for Ceph File System (CephFS) mirrors. cephfs_mirror_max_concurrent_directory_syncs Description Maximum number of directory snapshots that can be synchronized concurrently by cephfs-mirror daemon. Controls the number of synchronization threads. Type Integer Default 3 Min 1 cephfs_mirror_action_update_interval Description Interval in seconds to process pending mirror update actions. Type secs Default 2 Min 1 cephfs_mirror_restart_mirror_on_blocklist_interval Description Interval in seconds to restart blocklisted mirror instances. Setting to zero ( 0 ) disables restarting blocklisted instances. Type secs Default 30 Min 0 cephfs_mirror_max_snapshot_sync_per_cycle Description Maximum number of snapshots to mirror when a directory is picked up for mirroring by worker threads. Type Integer Default 3 Min 1 cephfs_mirror_directory_scan_interval Description Interval in seconds to scan configured directories for snapshot mirroring. Type Integer Default 10 Min 1 cephfs_mirror_max_consecutive_failures_per_directory Description Number of consecutive snapshot synchronization failures to mark a directory as "failed". Failed directories are retried for synchronization less frequently. Type Integer Default 10 Min 0 cephfs_mirror_retry_failed_directories_interval Description Interval in seconds to retry synchronization for failed directories. Type Integer Default 60 Min 1 cephfs_mirror_restart_mirror_on_failure_interval Description Interval in seconds to restart failed mirror instances. Setting to zero ( 0 ) disables restarting failed mirror instances. Type secs Default 20 Min 0 cephfs_mirror_mount_timeout Description Timeout in seconds for mounting primary or secondary CephFS by the cephfs-mirror daemon. Setting this to a higher value could result in the mirror daemon getting stalled when mounting a file system if the cluster is not reachable. This option is used to override the usual client_mount_timeout . Type secs Default 10 Min 0 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/file_system_guide/ceph-file-system-mirrors-configuration-reference_fs |
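The cephfs-mirror options listed above are ordinary Ceph configuration options, so they can be inspected and changed at runtime with the ceph config commands. The following sketch assumes the cephfs-mirror daemon reads its settings from the client configuration section; confirm the correct section for your deployment before relying on it:
ceph config set client cephfs_mirror_max_concurrent_directory_syncs 5
ceph config get client cephfs_mirror_max_concurrent_directory_syncs
If a changed value does not take effect immediately, restart or redeploy the cephfs-mirror daemon.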
2.4. Web Interface | 2.4. Web Interface 2.4.1. Browser Initialization This section explains browser initialization for Firefox to access PKI services. Importing a CA Certificate Click Menu Preferences Privacy & Security View certificates . Select the Authorities tab and click the Import button. Select the ca.crt file and click Import . Importing a Client Certificate Click Options Preferences Privacy & Security View certificates . Select the Your Certificates tab. Click Import and select the client p12 file, such as ca_admin_cert.p12 . Enter the password for the client certificate at the prompt. Click OK . Verify that an entry is added under Your Certificates . Accessing the Web Console You can access the PKI services by opening https:// host_name : port in your browser. 2.4.2. The Administrative Interfaces All subsystems use an HTML-based administrative interface. It is accessed by entering the host name and secure port as the URL, authenticating with the administrator's certificate, and clicking the appropriate Administrators link. Note There is a single TLS port for all subsystems which is used for both administrator and agent services. Access to those services is restricted by certificate-based authentication. The HTML admin interface is much more limited than the Java console; the primary administrative function is managing the subsystem users. The TPS only allows operations to manage users for the TPS subsystem. However, the TPS admin page can also list tokens and display all activities (including normally-hidden administrative actions) performed on the TPS. Figure 2.2. TPS Admin Page 2.4.3. Agent Interfaces The agent services pages are where almost all of the certificate and token management tasks are performed. These services are HTML-based, and agents authenticate to the site using a special agent certificate. Figure 2.3. Certificate Manager's Agent Services Page The operations vary depending on the subsystem: The Certificate Manager agent services include approving certificate requests (which issues the certificates), revoking certificates, and publishing certificates and CRLs. All certificates issued by the CA can be managed through its agent services page. The TPS agent services, like the CA agent services, manage all of the tokens which have been formatted and have had certificates issued to them through the TPS. Tokens can be enrolled, suspended, and deleted by agents. Two other roles (operator and admin) can view tokens in web services pages, but cannot perform any actions on the tokens. KRA agent services pages process key recovery requests, which set whether to allow a certificate to be issued reusing an existing key pair if the certificate is lost. The OCSP agent services page allows agents to configure CAs which publish CRLs to the OCSP, to load CRLs to the OCSP manually, and to view the state of client OCSP requests. The TKS is the only subsystem without an agent services page. 2.4.4. End User Pages The CA and TPS both process direct user requests in some way. That means that end users need a way to connect with those subsystems. The CA has end-user, or end-entities , HTML services. The TPS uses the Enterprise Security Client. The end-user services are accessed over standard HTTP using the server's host name and the standard port number; they can also be accessed over HTTPS using the server's host name and the specific end-entities TLS port. For CAs, each type of TLS certificate is processed through a specific online submission form, called a profile .
There are about two dozen certificate profiles for the CA, covering all sorts of certificates - user TLS certificates, server TLS certificates, log and file signing certificates, email certificates, and every kind of subsystem certificate. There can also be custom profiles. Figure 2.4. Certificate Manager's End-Entities Page End users retrieve their certificates through the CA pages when the certificates are issued. They can also download CA chains and CRLs and can revoke or renew their certificates through those pages. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/web-interface |
probe::signal.systkill.return | probe::signal.systkill.return Name probe::signal.systkill.return - Sending kill signal to a thread completed Synopsis Values retstr The return value to either __group_send_sig_info, name Name of the probe point | [
"signal.systkill.return"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-signal-systkill-return |
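As a small illustration of the probe point documented above, the following one-line SystemTap script prints the two listed context variables each time the probe fires. It is a sketch only; it assumes systemtap and the matching kernel debuginfo are installed, and the output format is arbitrary:
stap -e 'probe signal.systkill.return { printf("%s returned %s\n", name, retstr) }'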
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/red_hat_openshift_data_foundation_architecture/providing-feedback-on-red-hat-documentation_rhodf |
Chapter 1. Limits and scalability | Chapter 1. Limits and scalability This document details the tested cluster maximums for OpenShift Dedicated clusters, along with information about the test environment and configuration used to test the maximums. Information about control plane and infrastructure node sizing and scaling is also provided. 1.1. Cluster maximums Consider the following tested object maximums when you plan an OpenShift Dedicated cluster installation. The table specifies the maximum limits for each tested type in an OpenShift Dedicated cluster. These guidelines are based on a cluster of 249 compute (also known as worker) nodes in a multiple availability zone configuration. For smaller clusters, the maximums are lower. Table 1.1. Tested cluster maximums Maximum type 4.x tested maximum Number of pods [1] 25,000 Number of pods per node 250 Number of pods per core There is no default value Number of namespaces [2] 5,000 Number of pods per namespace [3] 25,000 Number of services [4] 10,000 Number of services per namespace 5,000 Number of back ends per service 5,000 Number of deployments per namespace [3] 2,000 The pod count displayed here is the number of test pods. The actual number of pods depends on the memory, CPU, and storage requirements of the application. When there are a large number of active projects, etcd can suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to make etcd storage available. There are several control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a type, in a single namespace, can make those loops expensive and slow down processing the state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements. Each service port and each service back end has a corresponding entry in iptables . The number of back ends of a given service impacts the size of the endpoints objects, which then impacts the size of data sent throughout the system. 1.2. OpenShift Container Platform testing environment and configuration The following table lists the OpenShift Container Platform environment and configuration on which the cluster maximums are tested for the AWS cloud platform. Node Type vCPU RAM(GiB) Disk type Disk size(GiB)/IOPS Count Region Control plane/etcd [1] m5.4xlarge 16 64 gp3 350 / 1,000 3 us-west-2 Infrastructure nodes [2] r5.2xlarge 8 64 gp3 300 / 900 3 us-west-2 Workload [3] m5.2xlarge 8 32 gp3 350 / 900 3 us-west-2 Compute nodes m5.2xlarge 8 32 gp3 350 / 900 102 us-west-2 io1 disks are used for control plane/etcd nodes in all versions prior to 4.10. Infrastructure nodes are used to host monitoring components because Prometheus can claim a large amount of memory, depending on usage patterns. Workload nodes are dedicated to run performance and scalability workload generators. Larger cluster sizes and higher object counts might be reachable. However, the sizing of the infrastructure nodes limits the amount of memory that is available to Prometheus. When creating, modifying, or deleting objects, Prometheus stores the metrics in its memory for roughly 3 hours prior to persisting the metrics on disk. If the rate of creation, modification, or deletion of objects is too high, Prometheus can become overwhelmed and fail due to the lack of memory resources. 1.3.
Control plane and infrastructure node sizing and scaling When you install an OpenShift Dedicated cluster, the sizing of the control plane and infrastructure nodes is automatically determined by the compute node count. If you change the number of compute nodes in your cluster after installation, the Red Hat Site Reliability Engineering (SRE) team scales the control plane and infrastructure nodes as required to maintain cluster stability. 1.3.1. Node sizing during installation During the installation process, the sizing of the control plane and infrastructure nodes is dynamically calculated. The sizing calculation is based on the number of compute nodes in a cluster. The following tables list the control plane and infrastructure node sizing that is applied during installation. AWS control plane and infrastructure node size: Number of compute nodes Control plane size Infrastructure node size 1 to 25 m5.2xlarge r5.xlarge 26 to 100 m5.4xlarge r5.2xlarge 101 to 249 m5.8xlarge r5.4xlarge GCP control plane and infrastructure node size: Number of compute nodes Control plane size Infrastructure node size 1 to 25 custom-8-32768 custom-4-32768-ext 26 to 100 custom-16-65536 custom-8-65536-ext 101 to 249 custom-32-131072 custom-16-131072-ext GCP control plane and infrastructure node size for clusters created on or after 21 June 2024: Number of compute nodes Control plane size Infrastructure node size 1 to 25 n2-standard-8 n2-highmem-4 26 to 100 n2-standard-16 n2-highmem-8 101 to 249 n2-standard-32 n2-highmem-16 Note The maximum number of compute nodes on OpenShift Dedicated clusters version 4.14.14 and later is 249. For earlier versions, the limit is 180. 1.3.2. Node scaling after installation If you change the number of compute nodes after installation, the control plane and infrastructure nodes are scaled by the Red Hat Site Reliability Engineering (SRE) team as required. The nodes are scaled to maintain platform stability. Postinstallation scaling requirements for control plane and infrastructure nodes are assessed on a case-by-case basis. Node resource consumption and received alerts are taken into consideration. Rules for control plane node resizing alerts The resizing alert is triggered for the control plane nodes in a cluster when the following occurs: Control plane nodes sustain over 66% utilization on average in a cluster. Note The maximum number of compute nodes on OpenShift Dedicated is 180. Rules for infrastructure node resizing alerts Resizing alerts are triggered for the infrastructure nodes in a cluster when they have sustained high CPU or memory utilization. This sustained high utilization status is: Infrastructure nodes sustain over 50% utilization on average in a cluster with a single availability zone using 2 infrastructure nodes. Infrastructure nodes sustain over 66% utilization on average in a cluster with multiple availability zones using 3 infrastructure nodes. Note The maximum number of compute nodes on OpenShift Dedicated cluster versions 4.14.14 and later is 249. For earlier versions, the limit is 180. The resizing alerts only appear after sustained periods of high utilization. Short usage spikes, such as a node temporarily going down causing the other node to scale up, do not trigger these alerts. The SRE team might scale the control plane and infrastructure nodes for additional reasons, for example to manage an increase in resource consumption on the nodes. 1.3.3.
Sizing considerations for larger clusters For larger clusters, infrastructure node sizing can become a significant impacting factor to scalability. There are many factors that influence the stated thresholds, including the etcd version or storage data format. Exceeding these limits does not necessarily mean that the cluster will fail. In most cases, exceeding these numbers results in lower overall performance. | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/planning_your_environment/osd-limits-scalability |
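A quick way to relate the utilization thresholds above to a running cluster is to read the node-level CPU and memory figures with the oc adm top command. This is a sketch only, not part of the managed-service alerting; it assumes the standard node-role.kubernetes.io/master and node-role.kubernetes.io/infra labels are applied to the control plane and infrastructure nodes in your cluster, and that the cluster metrics pipeline is available.
# List current CPU and memory utilization for control plane nodes
# (compare against the ~66% sustained-utilization resizing threshold).
oc adm top nodes -l node-role.kubernetes.io/master
# List current utilization for infrastructure nodes (compare against the
# 50% or 66% threshold, depending on the number of availability zones).
oc adm top nodes -l node-role.kubernetes.io/infra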
Chapter 2. Using Hot Rod JS clients | Chapter 2. Using Hot Rod JS clients Take a look at some examples for using the Hot Rod JS client with Data Grid. 2.1. Hot Rod JS client examples After you install and configure your Hot Rod JS client, start using it by trying out some basic cache operations before moving on to more complex interactions with Data Grid. 2.1.1. Hello world Create a cache named "myCache" on Data Grid Server then add and retrieve an entry. var infinispan = require('infinispan'); // Connect to Data Grid Server. // Use an existing cache named "myCache". var connected = infinispan.client( {port: 11222, host: '127.0.0.1'}, { cacheName: 'myCache', clientIntelligence: 'BASIC', authentication: { enabled: true, saslMechanism: 'DIGEST-MD5', userName: 'username', password: 'changeme' } } ); connected.then(function (client) { console.log('Connected to `myCache`'); // Add an entry to the cache. var clientPut = client.put('hello', 'world'); // Retrieve the entry you added. var clientGet = clientPut.then( function() { return client.get('hello'); }); // Print the value of the entry. var showGet = clientGet.then( function(value) { console.log('get(hello)=' + value); }); // Disconnect from Data Grid Server. return client.disconnect(); }).catch(function(error) { // Log any errors. console.log("Got error: " + error.message); }); 2.1.2. Working with entries and retrieving cache statistics Add, retrieve, remove single entries and view statistics for the cache. var infinispan = require('infinispan'); var connected = infinispan.client( {port: 11222, host: '127.0.0.1'}, { cacheName: 'myCache', authentication: { enabled: true, saslMechanism: 'DIGEST-MD5', userName: 'username', password: 'changeme' } } ); connected.then(function (client) { var clientPut = client.put('key', 'value'); var clientGet = clientPut.then( function() { return client.get('key'); }); var showGet = clientGet.then( function(value) { console.log('get(key)=' + value); }); var clientRemove = showGet.then( function() { return client.remove('key'); }); var showRemove = clientRemove.then( function(success) { console.log('remove(key)=' + success); }); var clientStats = showRemove.then( function() { return client.stats(); }); var showStats = clientStats.then( function(stats) { console.log('Number of stores: ' + stats.stores); console.log('Number of cache hits: ' + stats.hits); console.log('All statistics: ' + JSON.stringify(stats, null, " ")); }); return showStats.finally( function() { return client.disconnect(); }); }).catch(function(error) { console.log("Got error: " + error.message); }); 2.1.3. Working with multiple cache entries Create multiple cache entries with simple recursive loops. 
var infinispan = require('infinispan'); var connected = infinispan.client( {port: 11222, host: '127.0.0.1'}, { cacheName: 'myCache', authentication: { enabled: true, saslMechanism: 'DIGEST-MD5', userName: 'username', password: 'changeme' } } ); connected.then(function (client) { var data = [ {key: 'multi1', value: 'v1'}, {key: 'multi2', value: 'v2'}, {key: 'multi3', value: 'v3'}]; var clientPutAll = client.putAll(data); var clientGetAll = clientPutAll.then( function() { return client.getAll(['multi2', 'multi3']); }); var showGetAll = clientGetAll.then( function(entries) { console.log('getAll(multi2, multi3)=%s', JSON.stringify(entries)); } ); var clientIterator = showGetAll.then( function() { return client.iterator(1); }); var showIterated = clientIterator.then( function(it) { function loop(promise, fn) { // Simple recursive loop over the iterator's next() call. return promise.then(fn).then(function (entry) { return entry.done ? it.close().then(function () { return entry.value; }) : loop(it.next(), fn); }); } return loop(it.next(), function (entry) { console.log('iterator.next()=' + JSON.stringify(entry)); return entry; }); } ); var clientClear = showIterated.then( function() { return client.clear(); }); return clientClear.finally( function() { return client.disconnect(); }); }).catch(function(error) { console.log("Got error: " + error.message); }); 2.1.4. Using Async and Await constructs Node.js provides async and await constructs that can simplify cache operations. Single cache entries const infinispan = require("infinispan"); const log4js = require('log4js'); log4js.configure('example-log4js.json'); async function test() { await new Promise((resolve, reject) => setTimeout(() => resolve(), 1000)); console.log('Hello, World!'); let client = await infinispan.client({port: 11222, host: '127.0.0.1'}); console.log(`Connected to Infinispan dashboard data`); await client.put('key', 'value'); let value = await client.get('key'); console.log('get(key)=' + value); let success = await client.remove('key'); console.log('remove(key)=' + success); let stats = await client.stats(); console.log('Number of stores: ' + stats.stores); console.log('Number of cache hits: ' + stats.hits); console.log('All statistics: ' + JSON.stringify(stats, null, " ")); await client.disconnect(); } test(); Multiple cache entries const infinispan = require("infinispan"); const log4js = require('log4js'); log4js.configure('example-log4js.json'); async function test() { let client = await infinispan.client({port: 11222, host: '127.0.0.1'}); console.log(`Connected to Infinispan dashboard data`); let data = [ {key: 'multi1', value: 'v1'}, {key: 'multi2', value: 'v2'}, {key: 'multi3', value: 'v3'}]; await client.putAll(data); let entries = await client.getAll(['multi2', 'multi3']); console.log('getAll(multi2, multi3)=%s', JSON.stringify(entries)); let iterator = await client.iterator(1); let entry = {done: true}; do { entry = await iterator.next(); console.log('iterator.next()=' + JSON.stringify(entry)); } while (!entry.done); await iterator.close(); await client.clear(); await client.disconnect(); } test(); 2.1.5. Running server-side scripts You can add custom scripts to Data Grid Server and then run them from Hot Rod JS clients.
Sample script // mode=local,language=javascript,parameters=[k, v],datatype='text/plain; charset=utf-8' cache.put(k, v); cache.get(k); Script execution var infinispan = require('infinispan'); var readFile = Promise.denodeify(require('fs').readFile); var connected = infinispan.client( {port: 11222, host: '127.0.0.1'} { // Configure client connections with authentication and encryption here. } ); connected.then(function (client) { var addScriptFile = readFile('sample-script.js').then( function(file) { return client.addScript('sample-script', file.toString()); }); var clientExecute = addScriptFile.then( function() { return client.execute('sample-script', {k: 'exec-key', v: 'exec-value'}); }); var showExecute = clientExecute.then( function(ret) { console.log('Script execution returned: ' + ret); }); return showExecute.finally( function() { return client.disconnect(); }); }).catch(function(error) { console.log("Got error: " + error.message); }); 2.1.6. Registering event listeners Event listeners notify Hot Rod JS clients when cache updates occur, including when entries are created, modified, removed, or expired. Note Events for entry creation and modification notify clients about keys and values. Events for entry removal and expiration notify clients about keys only. Event listener registration var infinispan = require('infinispan'); var connected = infinispan.client( {port: 11222, host: '127.0.0.1'} { // Configure client connections with authentication and encryption here. } ); connected.then(function (client) { var clientAddListenerCreate = client.addListener('create', onCreate); var clientAddListeners = clientAddListenerCreate.then( function(listenerId) { // Associate multiple callbacks with a single client-side listener. // To do this, register listeners with the same listener ID. var clientAddListenerModify = client.addListener('modify', onModify, {listenerId: listenerId}); var clientAddListenerRemove = client.addListener('remove', onRemove, {listenerId: listenerId}); return Promise.all([clientAddListenerModify, clientAddListenerRemove]); }); var clientCreate = clientAddListeners.then( function() { return client.putIfAbsent('eventful', 'v0'); }); var clientModify = clientCreate.then( function() { return client.replace('eventful', 'v1'); }); var clientRemove = clientModify.then( function() { return client.remove('eventful'); }); var clientRemoveListener = Promise.all([clientAddListenerCreate, clientRemove]).then( function(values) { var listenerId = values[0]; return client.removeListener(listenerId); }); return clientRemoveListener.finally( function() { return client.disconnect(); }); }).catch(function(error) { console.log("Got error: " + error.message); }); function onCreate(key, version) { console.log('[Event] Created key: ' + key + ' with version: ' + JSON.stringify(version)); } function onModify(key, version) { console.log('[Event] Modified key: ' + key + ', version after update: ' + JSON.stringify(version)); } function onRemove(key) { console.log('[Event] Removed key: ' + key); } You can tune notifications from event listeners to avoid unnecessary roundtrips with the key-value-with--converter-factory converter. This allows you to, for example, find out values associated with keys within the event instead of retrieving them afterwards. 
Remote event converter var infinispan = require('infinispan'); var connected = infinispan.client( {port: 11222, host: '127.0.0.1'} , { dataFormat : { keyType: 'application/json', valueType: 'application/json' } } ); connected.then(function (client) { // Include the remote event converter to avoid unnecessary roundtrips. var opts = { converterFactory : { name: "key-value-with-previous-converter-factory" } }; var clientAddListenerCreate = client.addListener('create', logEvent("Created"), opts); var clientAddListeners = clientAddListenerCreate.then( function(listenerId) { // Associate multiple callbacks with a single client-side listener. // To do this, register listeners with the same listener ID. var clientAddListenerModify = client.addListener('modify', logEvent("Modified"), {opts, listenerId: listenerId}); var clientAddListenerRemove = client.addListener('remove', logEvent("Removed"), {opts, listenerId: listenerId}); return Promise.all([clientAddListenerModify, clientAddListenerRemove]); }); var clientCreate = clientAddListeners.then( function() { return client.putIfAbsent('converted', 'v0'); }); var clientModify = clientCreate.then( function() { return client.replace('converted', 'v1'); }); var clientRemove = clientModify.then( function() { return client.remove('converted'); }); var clientRemoveListener = Promise.all([clientAddListenerCreate, clientRemove]).then( function(values) { var listenerId = values[0]; return client.removeListener(listenerId); }); return clientRemoveListener.finally( function() { return client.disconnect(); }); }).catch(function(error) { console.log("Got error: " + error.message); }); function logEvent(prefix) { return function(event) { console.log(prefix + " key: " + event.key); console.log(prefix + " value: " + event.value); console.log(prefix + " previous value: " + event.prev); } } Tip You can add custom converters to Data Grid Server. See the Data Grid documentation for information. 2.1.7. Using conditional operations The Hot Rod protocol stores metadata about values in Data Grid. This metadata provides a deterministic factor that lets you perform cache operations for certain conditions. For example, if you want to replace the value of a key if the versions do not match. Use the getWithMetadata method to retrieve metadata associated with the value for a key. var infinispan = require('infinispan'); var connected = infinispan.client( {port: 11222, host: '127.0.0.1'} { // Configure client connections with authentication and encryption here. } ); connected.then(function (client) { var clientPut = client.putIfAbsent('cond', 'v0'); var showPut = clientPut.then( function(success) { console.log(':putIfAbsent(cond)=' + success); }); var clientReplace = showPut.then( function() { return client.replace('cond', 'v1'); } ); var showReplace = clientReplace.then( function(success) { console.log('replace(cond)=' + success); }); var clientGetMetaForReplace = showReplace.then( function() { return client.getWithMetadata('cond'); }); // Call the getWithMetadata method to retrieve the value and its metadata.
var clientReplaceWithVersion = clientGetMetaForReplace.then( function(entry) { console.log('getWithMetadata(cond)=' + JSON.stringify(entry)); return client.replaceWithVersion('cond', 'v2', entry.version); } ); var showReplaceWithVersion = clientReplaceWithVersion.then( function(success) { console.log('replaceWithVersion(cond)=' + success); }); var clientGetMetaForRemove = showReplaceWithVersion.then( function() { return client.getWithMetadata('cond'); }); var clientRemoveWithVersion = clientGetMetaForRemove.then( function(entry) { console.log('getWithMetadata(cond)=' + JSON.stringify(entry)); return client.removeWithVersion('cond', entry.version); } ); var showRemoveWithVersion = clientRemoveWithVersion.then( function(success) { console.log('removeWithVersion(cond)=' + success)}); return showRemoveWithVersion.finally( function() { return client.disconnect(); }); }).catch(function(error) { console.log("Got error: " + error.message); }); 2.1.8. Working with ephemeral data Use the getWithMetadata and size methods expire cache entries. var infinispan = require('infinispan'); var connected = infinispan.client( {port: 11222, host: '127.0.0.1'} { // Configure client connections with authentication and encryption here. } ); connected.then(function (client) { var clientPutExpiry = client.put('expiry', 'value', {lifespan: '1s'}); var clientGetMetaAndSize = clientPutExpiry.then( function() { // Compute getWithMetadata and size in parallel. return Promise.all([client.getWithMetadata('expiry'), client.size()]); }); var showGetMetaAndSize = clientGetMetaAndSize.then( function(values) { console.log('Before expiration:'); console.log('getWithMetadata(expiry)=' + JSON.stringify(values[0])); console.log('size=' + values[1]); }); var clientContainsAndSize = showGetMetaAndSize.then( function() { sleepFor(1100); // Sleep to force expiration. return Promise.all([client.containsKey('expiry'), client.size()]); }); var showContainsAndSize = clientContainsAndSize.then( function(values) { console.log('After expiration:'); console.log('containsKey(expiry)=' + values[0]); console.log('size=' + values[1]); }); return showContainsAndSize.finally( function() { return client.disconnect(); }); }).catch(function(error) { console.log("Got error: " + error.message); }); function sleepFor(sleepDuration){ var now = new Date().getTime(); while(new Date().getTime() < now + sleepDuration){ /* Do nothing. */ } } 2.1.9. Working with queries Use the query method to perform queries on your caches. You must configure Hot Rod JS client to have application/x-protostream data format for values in your caches. 
const infinispan = require('infinispan'); const protobuf = require('protobufjs'); // This example uses async/await paradigma (async function () { // User data protobuf definition const cacheValueProtoDef = `package awesomepackage; /** * @TypeId(1000044) */ message AwesomeUser { required string name = 1; required int64 age = 2; required bool isVerified =3; }` try { // Creating clients for two caches: // - ___protobuf_metadata for registering .proto file // - queryCache for user data const connectProp = { port: 11222, host: '127.0.0.1' }; const commonOpts = { version: '3.0', authentication: { enabled: true, saslMechanism: 'DIGEST-MD5', userName: 'admin', password: 'pass' } }; const protoMetaClientOps = { cacheName: '___protobuf_metadata', dataFormat: { keyType: "text/plain", valueType: "text/plain" } } const clientOps = { dataFormat: { keyType: "text/plain", valueType: "application/x-protostream" }, cacheName: 'queryCache' } var protoMetaClient = await infinispan.client(connectProp, Object.assign(commonOpts, protoMetaClientOps)); var client = await infinispan.client(connectProp, Object.assign(commonOpts, clientOps)); // Registering protobuf definition on server await protoMetaClient.put("awesomepackage/AwesomeUser.proto", cacheValueProtoDef); // Registering protobuf definition on protobufjs const root = protobuf.parse(cacheValueProtoDef).root; const AwesomeUser = root.lookupType(".awesomepackage.AwesomeUser"); client.registerProtostreamRoot(root); client.registerProtostreamType(".awesomepackage.AwesomeUser", 1000044); // Cleanup and populating the cache await client.clear(); for (let i = 0; i < 10; i++) { const payload = { name: "AwesomeName" + i, age: i, isVerified: (Math.random() < 0.5) }; const message = AwesomeUser.create(payload); console.log("Creating entry:", message); await client.put(i.toString(), message) } // Run the query const queryStr = `select u.name,u.age from awesomepackage.AwesomeUser u where u.age<20 order by u.name asc`; console.log("Running query:", queryStr); const query = await client.query({ queryString: queryStr }); console.log("Query result:"); console.log(query); } catch (err) { handleError(err); } finally { if (client) { await client.disconnect(); } if (protoMetaClient) { await protoMetaClient.disconnect(); } } })(); function handleError(err) { if (err.message.includes("'queryCache' not found")) { console.log('*** ERROR ***'); console.log(`*** This example needs a cache 'queryCache' with the following config: { "local-cache": { "statistics": true, "encoding": { "key": { "media-type": "text/plain" }, "value": { "media-type": "application/x-protostream" }}}}`) } else { console.log(err); } } See Querying Data Grid caches for more information. | [
"var infinispan = require('infinispan'); // Connect to Data Grid Server. // Use an existing cache named \"myCache\". var connected = infinispan.client( {port: 11222, host: '127.0.0.1'}, { cacheName: 'myCache', clientIntelligence: 'BASIC', authentication: { enabled: true, saslMechanism: 'DIGEST-MD5', userName: 'username', password: 'changeme' } } ); connected.then(function (client) { console.log('Connected to `myCache`'); // Add an entry to the cache. var clientPut = client.put('hello', 'world'); // Retrieve the entry you added. var clientGet = clientPut.then( function() { return client.get('hello'); }); // Print the value of the entry. var showGet = clientGet.then( function(value) { console.log('get(hello)=' + value); }); // Disconnect from Data Grid Server. return client.disconnect(); }).catch(function(error) { // Log any errors. console.log(\"Got error: \" + error.message); });",
"var infinispan = require('infinispan'); var connected = infinispan.client( {port: 11222, host: '127.0.0.1'}, { cacheName: 'myCache', authentication: { enabled: true, saslMechanism: 'DIGEST-MD5', userName: 'username', password: 'changeme' } } ); connected.then(function (client) { var clientPut = client.put('key', 'value'); var clientGet = clientPut.then( function() { return client.get('key'); }); var showGet = clientGet.then( function(value) { console.log('get(key)=' + value); }); var clientRemove = showGet.then( function() { return client.remove('key'); }); var showRemove = clientRemove.then( function(success) { console.log('remove(key)=' + success); }); var clientStats = showRemove.then( function() { return client.stats(); }); var showStats = clientStats.then( function(stats) { console.log('Number of stores: ' + stats.stores); console.log('Number of cache hits: ' + stats.hits); console.log('All statistics: ' + JSON.stringify(stats, null, \" \")); }); return showStats.finally( function() { return client.disconnect(); }); }).catch(function(error) { console.log(\"Got error: \" + error.message); });",
"var infinispan = require('infinispan'); var connected = infinispan.client( {port: 11222, host: '127.0.0.1'}, { cacheName: 'myCache', authentication: { enabled: true, saslMechanism: 'DIGEST-MD5', userName: 'username', password: 'changeme' } } ); connected.then(function (client) { var data = [ {key: 'multi1', value: 'v1'}, {key: 'multi2', value: 'v2'}, {key: 'multi3', value: 'v3'}]; var clientPutAll = client.putAll(data); var clientGetAll = clientPutAll.then( function() { return client.getAll(['multi2', 'multi3']); }); var showGetAll = clientGetAll.then( function(entries) { console.log('getAll(multi2, multi3)=%s', JSON.stringify(entries)); } ); var clientIterator = showGetAll.then( function() { return client.iterator(1); }); var showIterated = clientIterator.then( function(it) { function loop(promise, fn) { // Simple recursive loop over the iterator's next() call. return promise.then(fn).then(function (entry) { return entry.done ? it.close().then(function () { return entry.value; }) : loop(it.next(), fn); }); } return loop(it.next(), function (entry) { console.log('iterator.next()=' + JSON.stringify(entry)); return entry; }); } ); var clientClear = showIterated.then( function() { return client.clear(); }); return clientClear.finally( function() { return client.disconnect(); }); }).catch(function(error) { console.log(\"Got error: \" + error.message); });",
"const infinispan = require(\"infinispan\"); const log4js = require('log4js'); log4js.configure('example-log4js.json'); async function test() { await new Promise((resolve, reject) => setTimeout(() => resolve(), 1000)); console.log('Hello, World!'); let client = await infinispan.client({port: 11222, host: '127.0.0.1'}); console.log(`Connected to Infinispan dashboard data`); await client.put('key', 'value'); let value = await client.get('key'); console.log('get(key)=' + value); let success = await client.remove('key'); console.log('remove(key)=' + success); let stats = await client.stats(); console.log('Number of stores: ' + stats.stores); console.log('Number of cache hits: ' + stats.hits); console.log('All statistics: ' + JSON.stringify(stats, null, \" \")); await client.disconnect(); } test();",
"const infinispan = require(\"infinispan\"); const log4js = require('log4js'); log4js.configure('example-log4js.json'); async function test() { let client = await infinispan.client({port: 11222, host: '127.0.0.1'}); console.log(`Connected to Infinispan dashboard data`); let data = [ {key: 'multi1', value: 'v1'}, {key: 'multi2', value: 'v2'}, {key: 'multi3', value: 'v3'}]; await client.putAll(data); let entries = await client.getAll(['multi2', 'multi3']); console.log('getAll(multi2, multi3)=%s', JSON.stringify(entries)); let iterator = await client.iterator(1); let entry = {done: true}; do { entry = await iterator.next(); console.log('iterator.next()=' + JSON.stringify(entry)); } while (!entry.done); await iterator.close(); await client.clear(); await client.disconnect(); } test();",
"// mode=local,language=javascript,parameters=[k, v],datatype='text/plain; charset=utf-8' cache.put(k, v); cache.get(k);",
"var infinispan = require('infinispan'); var readFile = Promise.denodeify(require('fs').readFile); var connected = infinispan.client( {port: 11222, host: '127.0.0.1'} { // Configure client connections with authentication and encryption here. } ); connected.then(function (client) { var addScriptFile = readFile('sample-script.js').then( function(file) { return client.addScript('sample-script', file.toString()); }); var clientExecute = addScriptFile.then( function() { return client.execute('sample-script', {k: 'exec-key', v: 'exec-value'}); }); var showExecute = clientExecute.then( function(ret) { console.log('Script execution returned: ' + ret); }); return showExecute.finally( function() { return client.disconnect(); }); }).catch(function(error) { console.log(\"Got error: \" + error.message); });",
"var infinispan = require('infinispan'); var connected = infinispan.client( {port: 11222, host: '127.0.0.1'} { // Configure client connections with authentication and encryption here. } ); connected.then(function (client) { var clientAddListenerCreate = client.addListener('create', onCreate); var clientAddListeners = clientAddListenerCreate.then( function(listenerId) { // Associate multiple callbacks with a single client-side listener. // To do this, register listeners with the same listener ID. var clientAddListenerModify = client.addListener('modify', onModify, {listenerId: listenerId}); var clientAddListenerRemove = client.addListener('remove', onRemove, {listenerId: listenerId}); return Promise.all([clientAddListenerModify, clientAddListenerRemove]); }); var clientCreate = clientAddListeners.then( function() { return client.putIfAbsent('eventful', 'v0'); }); var clientModify = clientCreate.then( function() { return client.replace('eventful', 'v1'); }); var clientRemove = clientModify.then( function() { return client.remove('eventful'); }); var clientRemoveListener = Promise.all([clientAddListenerCreate, clientRemove]).then( function(values) { var listenerId = values[0]; return client.removeListener(listenerId); }); return clientRemoveListener.finally( function() { return client.disconnect(); }); }).catch(function(error) { console.log(\"Got error: \" + error.message); }); function onCreate(key, version) { console.log('[Event] Created key: ' + key + ' with version: ' + JSON.stringify(version)); } function onModify(key, version) { console.log('[Event] Modified key: ' + key + ', version after update: ' + JSON.stringify(version)); } function onRemove(key) { console.log('[Event] Removed key: ' + key); }",
"var infinispan = require('infinispan'); var connected = infinispan.client( {port: 11222, host: '127.0.0.1'} , { dataFormat : { keyType: 'application/json', valueType: 'application/json' } } ); connected.then(function (client) { // Include the remote event converter to avoid unnecessary roundtrips. var opts = { converterFactory : { name: \"key-value-with-previous-converter-factory\" } }; var clientAddListenerCreate = client.addListener('create', logEvent(\"Created\"), opts); var clientAddListeners = clientAddListenerCreate.then( function(listenerId) { // Associate multiple callbacks with a single client-side listener. // To do this, register listeners with the same listener ID. var clientAddListenerModify = client.addListener('modify', logEvent(\"Modified\"), {opts, listenerId: listenerId}); var clientAddListenerRemove = client.addListener('remove', logEvent(\"Removed\"), {opts, listenerId: listenerId}); return Promise.all([clientAddListenerModify, clientAddListenerRemove]); }); var clientCreate = clientAddListeners.then( function() { return client.putIfAbsent('converted', 'v0'); }); var clientModify = clientCreate.then( function() { return client.replace('converted', 'v1'); }); var clientRemove = clientModify.then( function() { return client.remove('converted'); }); var clientRemoveListener = Promise.all([clientAddListenerCreate, clientRemove]).then( function(values) { var listenerId = values[0]; return client.removeListener(listenerId); }); return clientRemoveListener.finally( function() { return client.disconnect(); }); }).catch(function(error) { console.log(\"Got error: \" + error.message); }); function logEvent(prefix) { return function(event) { console.log(prefix + \" key: \" + event.key); console.log(prefix + \" value: \" + event.value); console.log(prefix + \" previous value: \" + event.prev); } }",
"var infinispan = require('infinispan'); var connected = infinispan.client( {port: 11222, host: '127.0.0.1'} { // Configure client connections with authentication and encryption here. } ); connected.then(function (client) { var clientPut = client.putIfAbsent('cond', 'v0'); var showPut = clientPut.then( function(success) { console.log(':putIfAbsent(cond)=' + success); }); var clientReplace = showPut.then( function() { return client.replace('cond', 'v1'); } ); var showReplace = clientReplace.then( function(success) { console.log('replace(cond)=' + success); }); var clientGetMetaForReplace = showReplace.then( function() { return client.getWithMetadata('cond'); }); // Call the getWithMetadata method to retrieve the value and its metadata. var clientReplaceWithVersion = clientGetMetaForReplace.then( function(entry) { console.log('getWithMetadata(cond)=' + JSON.stringify(entry)); return client.replaceWithVersion('cond', 'v2', entry.version); } ); var showReplaceWithVersion = clientReplaceWithVersion.then( function(success) { console.log('replaceWithVersion(cond)=' + success); }); var clientGetMetaForRemove = showReplaceWithVersion.then( function() { return client.getWithMetadata('cond'); }); var clientRemoveWithVersion = clientGetMetaForRemove.then( function(entry) { console.log('getWithMetadata(cond)=' + JSON.stringify(entry)); return client.removeWithVersion('cond', entry.version); } ); var showRemoveWithVersion = clientRemoveWithVersion.then( function(success) { console.log('removeWithVersion(cond)=' + success)}); return showRemoveWithVersion.finally( function() { return client.disconnect(); }); }).catch(function(error) { console.log(\"Got error: \" + error.message); });",
"var infinispan = require('infinispan'); var connected = infinispan.client( {port: 11222, host: '127.0.0.1'} { // Configure client connections with authentication and encryption here. } ); connected.then(function (client) { var clientPutExpiry = client.put('expiry', 'value', {lifespan: '1s'}); var clientGetMetaAndSize = clientPutExpiry.then( function() { // Compute getWithMetadata and size in parallel. return Promise.all([client.getWithMetadata('expiry'), client.size()]); }); var showGetMetaAndSize = clientGetMetaAndSize.then( function(values) { console.log('Before expiration:'); console.log('getWithMetadata(expiry)=' + JSON.stringify(values[0])); console.log('size=' + values[1]); }); var clientContainsAndSize = showGetMetaAndSize.then( function() { sleepFor(1100); // Sleep to force expiration. return Promise.all([client.containsKey('expiry'), client.size()]); }); var showContainsAndSize = clientContainsAndSize.then( function(values) { console.log('After expiration:'); console.log('containsKey(expiry)=' + values[0]); console.log('size=' + values[1]); }); return showContainsAndSize.finally( function() { return client.disconnect(); }); }).catch(function(error) { console.log(\"Got error: \" + error.message); }); function sleepFor(sleepDuration){ var now = new Date().getTime(); while(new Date().getTime() < now + sleepDuration){ /* Do nothing. */ } }",
"const infinispan = require('infinispan'); const protobuf = require('protobufjs'); // This example uses async/await paradigma (async function () { // User data protobuf definition const cacheValueProtoDef = `package awesomepackage; /** * @TypeId(1000044) */ message AwesomeUser { required string name = 1; required int64 age = 2; required bool isVerified =3; }` try { // Creating clients for two caches: // - ___protobuf_metadata for registering .proto file // - queryCache for user data const connectProp = { port: 11222, host: '127.0.0.1' }; const commonOpts = { version: '3.0', authentication: { enabled: true, saslMechanism: 'DIGEST-MD5', userName: 'admin', password: 'pass' } }; const protoMetaClientOps = { cacheName: '___protobuf_metadata', dataFormat: { keyType: \"text/plain\", valueType: \"text/plain\" } } const clientOps = { dataFormat: { keyType: \"text/plain\", valueType: \"application/x-protostream\" }, cacheName: 'queryCache' } var protoMetaClient = await infinispan.client(connectProp, Object.assign(commonOpts, protoMetaClientOps)); var client = await infinispan.client(connectProp, Object.assign(commonOpts, clientOps)); // Registering protobuf definition on server await protoMetaClient.put(\"awesomepackage/AwesomeUser.proto\", cacheValueProtoDef); // Registering protobuf definition on protobufjs const root = protobuf.parse(cacheValueProtoDef).root; const AwesomeUser = root.lookupType(\".awesomepackage.AwesomeUser\"); client.registerProtostreamRoot(root); client.registerProtostreamType(\".awesomepackage.AwesomeUser\", 1000044); // Cleanup and populating the cache await client.clear(); for (let i = 0; i < 10; i++) { const payload = { name: \"AwesomeName\" + i, age: i, isVerified: (Math.random() < 0.5) }; const message = AwesomeUser.create(payload); console.log(\"Creating entry:\", message); await client.put(i.toString(), message) } // Run the query const queryStr = `select u.name,u.age from awesomepackage.AwesomeUser u where u.age<20 order by u.name asc`; console.log(\"Running query:\", queryStr); const query = await client.query({ queryString: queryStr }); console.log(\"Query result:\"); console.log(query); } catch (err) { handleError(err); } finally { if (client) { await client.disconnect(); } if (protoMetaClient) { await protoMetaClient.disconnect(); } } })(); function handleError(err) { if (err.message.includes(\"'queryCache' not found\")) { console.log('*** ERROR ***'); console.log(`*** This example needs a cache 'queryCache' with the following config: { \"local-cache\": { \"statistics\": true, \"encoding\": { \"key\": { \"media-type\": \"text/plain\" }, \"value\": { \"media-type\": \"application/x-protostream\" }}}}`) } else { console.log(err); } }"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/hot_rod_node.js_client_guide/client-usage-examples |
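Several of the preceding examples leave a placeholder comment where client connections would be configured with authentication and encryption. The following sketch shows one way to fill in that block; the ssl option names (enabled, trustCerts) are taken from the client's documented connection options, but verify them against the client version you use, replace the placeholder certificate path and credentials with your own, and ensure Data Grid Server is already configured for TLS.
var infinispan = require('infinispan');
var connected = infinispan.client(
  {port: 11222, host: '127.0.0.1'},
  {
    cacheName: 'myCache',
    // Authenticate with the credentials defined on Data Grid Server.
    authentication: {
      enabled: true,
      saslMechanism: 'DIGEST-MD5',
      userName: 'username',
      password: 'changeme'
    },
    // Encrypt the connection; trustCerts points to the server or CA certificate in PEM format.
    ssl: {
      enabled: true,
      trustCerts: ['path/to/tls.crt']
    }
  }
);
connected.then(function (client) {
  console.log('Connected with authentication and encryption');
  return client.disconnect();
}).catch(function(error) {
  console.log("Got error: " + error.message);
});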
Chapter 11. Nodes | Chapter 11. Nodes 11.1. Node maintenance Nodes can be placed into maintenance mode by using the oc adm utility or NodeMaintenance custom resources (CRs). Note The node-maintenance-operator (NMO) is no longer shipped with OpenShift Virtualization. It is deployed as a standalone Operator from the OperatorHub in the OpenShift Container Platform web console or by using the OpenShift CLI ( oc ). For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. Important Virtual machines (VMs) must have a persistent volume claim (PVC) with a shared ReadWriteMany (RWX) access mode to be live migrated. The Node Maintenance Operator watches for new or deleted NodeMaintenance CRs. When a new NodeMaintenance CR is detected, no new workloads are scheduled and the node is cordoned off from the rest of the cluster. All pods that can be evicted are evicted from the node. When a NodeMaintenance CR is deleted, the node that is referenced in the CR is made available for new workloads. Note Using a NodeMaintenance CR for node maintenance tasks achieves the same results as the oc adm cordon and oc adm drain commands using standard OpenShift Container Platform custom resource processing. 11.1.1. Eviction strategies Placing a node into maintenance marks the node as unschedulable and drains all the VMs and pods from it. You can configure eviction strategies for virtual machines (VMs) or for the cluster. VM eviction strategy The VM LiveMigrate eviction strategy ensures that a virtual machine instance (VMI) is not interrupted if the node is placed into maintenance or drained. VMIs with this eviction strategy will be live migrated to another node. You can configure eviction strategies for virtual machines (VMs) by using the web console or the command line . Important The default eviction strategy is LiveMigrate . A non-migratable VM with a LiveMigrate eviction strategy might prevent nodes from draining or block an infrastructure upgrade because the VM is not evicted from the node. This situation causes a migration to remain in a Pending or Scheduling state unless you shut down the VM manually. You must set the eviction strategy of non-migratable VMs to LiveMigrateIfPossible , which does not block an upgrade, or to None , for VMs that should not be migrated. Cluster eviction strategy You can configure an eviction strategy for the cluster to prioritize workload continuity or infrastructure upgrade. Important Configuring a cluster eviction strategy is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Table 11.1. Cluster eviction strategies Eviction strategy Description Interrupts workflow Blocks upgrades LiveMigrate 1 Prioritizes workload continuity over upgrades. No Yes 2 LiveMigrateIfPossible Prioritizes upgrades over workload continuity to ensure that the environment is updated. Yes No None 3 Shuts down VMs with no eviction strategy. Yes No Default eviction strategy for multi-node clusters. If a VM blocks an upgrade, you must shut down the VM manually. 
Default eviction strategy for single-node OpenShift. 11.1.1.1. Configuring a VM eviction strategy using the command line You can configure an eviction strategy for a virtual machine (VM) by using the command line. Important The default eviction strategy is LiveMigrate . A non-migratable VM with a LiveMigrate eviction strategy might prevent nodes from draining or block an infrastructure upgrade because the VM is not evicted from the node. This situation causes a migration to remain in a Pending or Scheduling state unless you shut down the VM manually. You must set the eviction strategy of non-migratable VMs to LiveMigrateIfPossible , which does not block an upgrade, or to None , for VMs that should not be migrated. Procedure Edit the VirtualMachine resource by running the following command: USD oc edit vm <vm_name> -n <namespace> Example eviction strategy apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: spec: evictionStrategy: LiveMigrateIfPossible 1 # ... 1 Specify the eviction strategy. The default value is LiveMigrate . Restart the VM to apply the changes: USD virtctl restart <vm_name> -n <namespace> 11.1.1.2. Configuring a cluster eviction strategy by using the command line You can configure an eviction strategy for a cluster by using the command line. Important Configuring a cluster eviction strategy is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Procedure Edit the hyperconverged resource by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Set the cluster eviction strategy as shown in the following example: Example cluster eviction strategy apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: evictionStrategy: LiveMigrate # ... 11.1.2. Run strategies A virtual machine (VM) configured with spec.running: true is immediately restarted. The spec.runStrategy key provides greater flexibility for determining how a VM behaves under certain conditions. Important The spec.runStrategy and spec.running keys are mutually exclusive. Only one of them can be used. A VM configuration with both keys is invalid. 11.1.2.1. Run strategies The spec.runStrategy key has four possible values: Always The virtual machine instance (VMI) is always present when a virtual machine (VM) is created on another node. A new VMI is created if the original stops for any reason. This is the same behavior as running: true . RerunOnFailure The VMI is re-created on another node if the instance fails. The instance is not re-created if the VM stops successfully, such as when it is shut down. Manual You control the VMI state manually with the start , stop , and restart virtctl client commands. The VM is not automatically restarted. Halted No VMI is present when a VM is created. This is the same behavior as running: false . Different combinations of the virtctl start , stop and restart commands affect the run strategy. The following table describes a VM's transition between states. 
The first column shows the VM's initial run strategy. The remaining columns show a virtctl command and the new run strategy after that command is run. Table 11.2. Run strategy before and after virtctl commands Initial run strategy Start Stop Restart Always - Halted Always RerunOnFailure - Halted RerunOnFailure Manual Manual Manual Manual Halted Always - - Note If a node in a cluster installed by using installer-provisioned infrastructure fails the machine health check and is unavailable, VMs with runStrategy: Always or runStrategy: RerunOnFailure are rescheduled on a new node. 11.1.2.2. Configuring a VM run strategy by using the command line You can configure a run strategy for a virtual machine (VM) by using the command line. Important The spec.runStrategy and spec.running keys are mutually exclusive. A VM configuration that contains values for both keys is invalid. Procedure Edit the VirtualMachine resource by running the following command: USD oc edit vm <vm_name> -n <namespace> Example run strategy apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: runStrategy: Always # ... 11.1.3. Maintaining bare metal nodes When you deploy OpenShift Container Platform on bare metal infrastructure, there are additional considerations that must be taken into account compared to deploying on cloud infrastructure. Unlike in cloud environments where the cluster nodes are considered ephemeral, re-provisioning a bare metal node requires significantly more time and effort for maintenance tasks. When a bare metal node fails, for example, if a fatal kernel error happens or a NIC card hardware failure occurs, workloads on the failed node need to be restarted elsewhere else on the cluster while the problem node is repaired or replaced. Node maintenance mode allows cluster administrators to gracefully power down nodes, moving workloads to other parts of the cluster and ensuring workloads do not get interrupted. Detailed progress and node status details are provided during maintenance. 11.1.4. Additional resources About live migration 11.2. Managing node labeling for obsolete CPU models You can schedule a virtual machine (VM) on a node as long as the VM CPU model and policy are supported by the node. 11.2.1. About node labeling for obsolete CPU models The OpenShift Virtualization Operator uses a predefined list of obsolete CPU models to ensure that a node supports only valid CPU models for scheduled VMs. By default, the following CPU models are eliminated from the list of labels generated for the node: Example 11.1. Obsolete CPU models This predefined list is not visible in the HyperConverged CR. You cannot remove CPU models from this list, but you can add to the list by editing the spec.obsoleteCPUs.cpuModels field of the HyperConverged CR. 11.2.2. About node labeling for CPU features Through the process of iteration, the base CPU features in the minimum CPU model are eliminated from the list of labels generated for the node. For example: An environment might have two supported CPU models: Penryn and Haswell . If Penryn is specified as the CPU model for minCPU , each base CPU feature for Penryn is compared to the list of CPU features supported by Haswell . Example 11.2. CPU features supported by Penryn Example 11.3. CPU features supported by Haswell If both Penryn and Haswell support a specific CPU feature, a label is not created for that feature. Labels are generated for CPU features that are supported only by Haswell and not by Penryn . Example 11.4. 
Node labels created for CPU features after iteration 11.2.3. Configuring obsolete CPU models You can configure a list of obsolete CPU models by editing the HyperConverged custom resource (CR). Procedure Edit the HyperConverged custom resource, specifying the obsolete CPU models in the obsoleteCPUs array. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - "<obsolete_cpu_1>" - "<obsolete_cpu_2>" minCPUModel: "<minimum_cpu_model>" 2 1 Replace the example values in the cpuModels array with obsolete CPU models. Any value that you specify is added to a predefined list of obsolete CPU models. The predefined list is not visible in the CR. 2 Replace this value with the minimum CPU model that you want to use for basic CPU features. If you do not specify a value, Penryn is used by default. 11.3. Preventing node reconciliation Use skip-node annotation to prevent the node-labeller from reconciling a node. 11.3.1. Using skip-node annotation If you want the node-labeller to skip a node, annotate that node by using the oc CLI. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Annotate the node that you want to skip by running the following command: USD oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1 1 Replace <node_name> with the name of the relevant node to skip. Reconciliation resumes on the cycle after the node annotation is removed or set to false. 11.3.2. Additional resources Managing node labeling for obsolete CPU models 11.4. Deleting a failed node to trigger virtual machine failover If a node fails and machine health checks are not deployed on your cluster, virtual machines (VMs) with runStrategy: Always configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node object. Note If you installed your cluster by using installer-provisioned infrastructure and you properly configured machine health checks, the following events occur: Failed nodes are automatically recycled. Virtual machines with runStrategy set to Always or RerunOnFailure are automatically scheduled on healthy nodes. 11.4.1. Prerequisites A node where a virtual machine was running has the NotReady condition . The virtual machine that was running on the failed node has runStrategy set to Always . You have installed the OpenShift CLI ( oc ). 11.4.2. Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Procedure Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps: Mark the node as unschedulable: USD oc adm cordon <node_name> Drain all pods on the node: USD oc adm drain <node_name> --force=true This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed. 
Delete the node from the cluster: USD oc delete node <node_name> Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node . If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster. 11.4.3. Verifying virtual machine failover After all resources are terminated on the unhealthy node, a new virtual machine instance (VMI) is automatically created on a healthy node for each relocated VM. To confirm that the VMI was created, view all VMIs by using the oc CLI. 11.4.3.1. Listing all virtual machine instances using the CLI You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI). Procedure List all VMIs by running the following command: USD oc get vmis -A | [
"oc edit vm <vm_name> -n <namespace>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: spec: evictionStrategy: LiveMigrateIfPossible 1",
"virtctl restart <vm_name> -n <namespace>",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: evictionStrategy: LiveMigrate",
"oc edit vm <vm_name> -n <namespace>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: runStrategy: Always",
"\"486\" Conroe athlon core2duo coreduo kvm32 kvm64 n270 pentium pentium2 pentium3 pentiumpro phenom qemu32 qemu64",
"apic clflush cmov cx16 cx8 de fpu fxsr lahf_lm lm mca mce mmx msr mtrr nx pae pat pge pni pse pse36 sep sse sse2 sse4.1 ssse3 syscall tsc",
"aes apic avx avx2 bmi1 bmi2 clflush cmov cx16 cx8 de erms fma fpu fsgsbase fxsr hle invpcid lahf_lm lm mca mce mmx movbe msr mtrr nx pae pat pcid pclmuldq pge pni popcnt pse pse36 rdtscp rtm sep smep sse sse2 sse4.1 sse4.2 ssse3 syscall tsc tsc-deadline x2apic xsave",
"aes avx avx2 bmi1 bmi2 erms fma fsgsbase hle invpcid movbe pcid pclmuldq popcnt rdtscp rtm sse4.2 tsc-deadline x2apic xsave",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - \"<obsolete_cpu_1>\" - \"<obsolete_cpu_2>\" minCPUModel: \"<minimum_cpu_model>\" 2",
"oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force=true",
"oc delete node <node_name>",
"oc get vmis -A"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/virtualization/nodes |
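For reference, placing a node into maintenance with the standalone Node Maintenance Operator is driven by a NodeMaintenance custom resource similar to the following sketch. The apiVersion shown here is an assumption based on the standalone Operator available from OperatorHub; check the CRD installed in your cluster, and replace the node name and reason with your own values.
apiVersion: nodemaintenance.medik8s.io/v1beta1
kind: NodeMaintenance
metadata:
  name: maintenance-example
spec:
  nodeName: node-1.example.com   # node to cordon and drain
  reason: "NIC replacement"      # free-form text recorded on the CR
Creating the CR cordons and drains the referenced node; deleting the CR makes the node available for new workloads again, as described above.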
Part II. Learn | Part II. Learn | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_amq_interconnect/learn |
Chapter 7. Displaying and Raising the Domain Level | Chapter 7. Displaying and Raising the Domain Level The domain level indicates what operations and capabilities are available in the IdM topology. Domain level 1 Examples of available functionality: simplified ipa-replica-install (see Section 4.5, "Creating the Replica: Introduction" ) enhanced topology management (see Chapter 6, Managing Replication Topology ) Important Domain level 1 was introduced in Red Hat Enterprise Linux 7.3 with IdM version 4.4. To use the domain level 1 features, all your replicas must be running Red Hat Enterprise Linux 7.3 or later. If your first server was installed with Red Hat Enterprise Linux 7.3, the domain level for your domain is automatically set to 1. If you upgrade all servers to IdM version 4.4 from earlier versions, the domain level is not raised automatically. If you want to use domain level 1 features, raise the domain level manually, as described in Section 7.2, "Raising the Domain Level" . Domain level 0 Examples of available functionality: ipa-replica-install requires a more complicated process of creating a replica information file on the initial server and copying it to the replica (see Section D.2, "Creating Replicas" ) more complicated and error-prone topology management using ipa-replica-manage and ipa-csreplica-manage (see Section D.3, "Managing Replicas and Replication Agreements" ) 7.1. Displaying the Current Domain Level Command Line: Displaying the Current Domain Level Log in as the administrator: Run the ipa domainlevel-get command: Web UI: Displaying the Current Domain Level Select IPA Server → Topology → Domain Level . | [
"kinit admin",
"ipa domainlevel-get ----------------------- Current domain level: 0 -----------------------"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/domain-level |
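For completeness, raising the domain level (described in Section 7.2, "Raising the Domain Level") follows the same command-line pattern as displaying it. The sketch below assumes all servers already run IdM version 4.4 or later; note that raising the domain level cannot be reverted.
kinit admin
ipa domainlevel-set 1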
Chapter 3. Cloning Virtual Machines | Chapter 3. Cloning Virtual Machines There are two types of guest virtual machine instances used in creating guest copies: Clones are instances of a single virtual machine. Clones can be used to set up a network of identical virtual machines, and they can also be distributed to other destinations. Templates are instances of a virtual machine that are designed to be used as a source for cloning. You can create multiple clones from a template and make minor modifications to each clone. This is useful in seeing the effects of these changes on the system. Both clones and templates are virtual machine instances. The difference between them is in how they are used. For the created clone to work properly, information and configurations unique to the virtual machine that is being cloned usually has to be removed before cloning. The information that needs to be removed differs, based on how the clones will be used. The information and configurations to be removed may be on any of the following levels: Platform level information and configurations include anything assigned to the virtual machine by the virtualization solution. Examples include the number of Network Interface Cards (NICs) and their MAC addresses. Guest operating system level information and configurations include anything configured within the virtual machine. Examples include SSH keys. Application level information and configurations include anything configured by an application installed on the virtual machine. Examples include activation codes and registration information. Note This chapter does not include information about removing the application level, because the information and approach is specific to each application. As a result, some of the information and configurations must be removed from within the virtual machine, while other information and configurations must be removed from the virtual machine using the virtualization environment (for example, Virtual Machine Manager or VMware). 3.1. Preparing Virtual Machines for Cloning Before cloning a virtual machine, it must be prepared by running the virt-sysprep utility on its disk image, or by using the following steps: Procedure 3.1. Preparing a virtual machine for cloning Setup the virtual machine Build the virtual machine that is to be used for the clone or template. Install any software needed on the clone. Configure any non-unique settings for the operating system. Configure any non-unique application settings. Remove the network configuration Remove any persistent udev rules using the following command: Note If udev rules are not removed, the name of the first NIC may be eth1 instead of eth0. Remove unique network details from ifcfg scripts by making the following edits to /etc/sysconfig/network-scripts/ifcfg-eth[x] : Remove the HWADDR and Static lines Note If the HWADDR does not match the new guest's MAC address, the ifcfg will be ignored. Therefore, it is important to remove the HWADDR from the file. Ensure that a DHCP configuration remains that does not include a HWADDR or any unique information. Ensure that the file includes the following lines: If the following files exist, ensure that they contain the same content: /etc/sysconfig/networking/devices/ifcfg-eth[x] /etc/sysconfig/networking/profiles/default/ifcfg-eth[x] Note If NetworkManager or any special settings were used with the virtual machine, ensure that any additional unique information is removed from the ifcfg scripts. 
Remove registration details Remove registration details using one of the following: For Red Hat Network (RHN) registered guest virtual machines, run the following command: For Red Hat Subscription Manager (RHSM) registered guest virtual machines: If the original virtual machine will not be used, run the following commands: If the original virtual machine will be used, run only the following command: Note The original RHSM profile remains in the portal. Removing other unique details Remove any sshd public/private key pairs using the following command: Note Removing ssh keys prevents problems with ssh clients not trusting these hosts. Remove any other application-specific identifiers or configurations that may cause conflicts if running on multiple machines. Configure the virtual machine to run configuration wizards on the next boot Configure the virtual machine to run the relevant configuration wizards the next time it is booted by doing one of the following: For Red Hat Enterprise Linux 6 and below, create an empty file on the root file system called .unconfigured using the following command: For Red Hat Enterprise Linux 7, enable the first boot and initial-setup wizards by running the following commands: Note The wizards that run on the next boot depend on the configurations that have been removed from the virtual machine. In addition, on the first boot of the clone, it is recommended that you change the host name. | [
"rm -f /etc/udev/rules.d/70-persistent-net.rules",
"DEVICE=eth[x] BOOTPROTO=none ONBOOT=yes #NETWORK=10.0.1.0 <- REMOVE #NETMASK=255.255.255.0 <- REMOVE #IPADDR=10.0.1.20 <- REMOVE #HWADDR=xx:xx:xx:xx:xx <- REMOVE #USERCTL=no <- REMOVE Remove any other *unique* or non-desired settings, such as UUID.",
"DEVICE=eth[x] BOOTPROTO=dhcp ONBOOT=yes",
"DEVICE=eth[x] ONBOOT=yes",
"rm /etc/sysconfig/rhn/systemid",
"subscription-manager unsubscribe --all subscription-manager unregister subscription-manager clean",
"subscription-manager clean",
"rm -rf /etc/ssh/ssh_host_*",
"touch /.unconfigured",
"sed -ie 's/RUN_FIRSTBOOT=NO/RUN_FIRSTBOOT=YES/' /etc/sysconfig/firstboot systemctl enable firstboot-graphical systemctl enable initial-setup-graphical"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/cloning_virtual_machines |
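The procedure above lists the preparation commands individually; the following is a minimal sketch that strings them together into a single clean-up script run inside the source guest before cloning. It assumes a RHEL 6 guest registered with RHSM, and the script name and the loop over the ifcfg-eth* files are illustrative additions rather than part of the original procedure — virt-sysprep remains the recommended way to do this automatically on an offline disk image.

#!/bin/bash
# prepare-for-clone.sh (hypothetical helper consolidating Procedure 3.1)
set -e
# Remove persistent udev rules so the clone's first NIC comes up as eth0.
rm -f /etc/udev/rules.d/70-persistent-net.rules
# Replace each ifcfg-eth* script with a generic DHCP configuration
# (drops HWADDR, static addressing, and other unique settings).
for ifcfg in /etc/sysconfig/network-scripts/ifcfg-eth*; do
    dev=${ifcfg##*/ifcfg-}
    printf 'DEVICE=%s\nBOOTPROTO=dhcp\nONBOOT=yes\n' "$dev" > "$ifcfg"
done
# Remove registration details (keep only the commands that apply to your setup).
rm -f /etc/sysconfig/rhn/systemid
subscription-manager clean
# Remove SSH host keys; new ones are generated on the clone's first boot.
rm -rf /etc/ssh/ssh_host_*
# Run the configuration wizard on the next boot (RHEL 6 and earlier).
touch /.unconfigured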
Chapter 3. BareMetalHost [metal3.io/v1alpha1] | Chapter 3. BareMetalHost [metal3.io/v1alpha1] Description BareMetalHost is the Schema for the baremetalhosts API Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object BareMetalHostSpec defines the desired state of BareMetalHost. status object BareMetalHostStatus defines the observed state of BareMetalHost. 3.1.1. .spec Description BareMetalHostSpec defines the desired state of BareMetalHost. Type object Required online Property Type Description architecture string CPU architecture of the host, e.g. "x86_64" or "aarch64". If unset, eventually populated by inspection. automatedCleaningMode string When set to disabled, automated cleaning will be avoided during provisioning and deprovisioning. bmc object How do we connect to the BMC? bootMACAddress string Which MAC address will PXE boot? This is optional for some types, but required for libvirt VMs driven by vbmc. bootMode string Select the method of initializing the hardware during boot. Defaults to UEFI. consumerRef object ConsumerRef can be used to store information about something that is using a host. When it is not empty, the host is considered "in use". customDeploy object A custom deploy procedure. description string Description is a human-entered text used to help identify the host externallyProvisioned boolean ExternallyProvisioned means something else is managing the image running on the host and the operator should only manage the power status and hardware inventory inspection. If the Image field is filled in, this field is ignored. firmware object BIOS configuration for bare metal server hardwareProfile string What is the name of the hardware profile for this host? Hardware profiles are deprecated and should not be used. Use the separate fields Architecture and RootDeviceHints instead. Set to "empty" to prepare for the future version of the API without hardware profiles. image object Image holds the details of the image to be provisioned. metaData object MetaData holds the reference to the Secret containing host metadata (e.g. meta_data.json) which is passed to the Config Drive. networkData object NetworkData holds the reference to the Secret containing network configuration (e.g content of network_data.json) which is passed to the Config Drive. online boolean Should the server be online? preprovisioningNetworkDataName string PreprovisioningNetworkDataName is the name of the Secret in the local namespace containing network configuration (e.g content of network_data.json) which is passed to the preprovisioning image, and to the Config Drive if not overridden by specifying NetworkData. 
raid object RAID configuration for bare metal server rootDeviceHints object Provide guidance about how to choose the device for the image being provisioned. taints array Taints is the full, authoritative list of taints to apply to the corresponding Machine. This list will overwrite any modifications made to the Machine on an ongoing basis. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. userData object UserData holds the reference to the Secret containing the user data to be passed to the host before it boots. 3.1.2. .spec.bmc Description How do we connect to the BMC? Type object Required address credentialsName Property Type Description address string Address holds the URL for accessing the controller on the network. credentialsName string The name of the secret containing the BMC credentials (requires keys "username" and "password"). disableCertificateVerification boolean DisableCertificateVerification disables verification of server certificates when using HTTPS to connect to the BMC. This is required when the server certificate is self-signed, but is insecure because it allows a man-in-the-middle to intercept the connection. 3.1.3. .spec.consumerRef Description ConsumerRef can be used to store information about something that is using a host. When it is not empty, the host is considered "in use". Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.1.4. .spec.customDeploy Description A custom deploy procedure. Type object Required method Property Type Description method string Custom deploy method name. This name is specific to the deploy ramdisk used. If you don't have a custom deploy ramdisk, you shouldn't use CustomDeploy. 3.1.5. .spec.firmware Description BIOS configuration for bare metal server Type object Property Type Description simultaneousMultithreadingEnabled boolean Allows a single physical processor core to appear as several logical processors. This supports following options: true, false. 
sriovEnabled boolean SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. This supports following options: true, false. virtualizationEnabled boolean Supports the virtualization of platform hardware. This supports following options: true, false. 3.1.6. .spec.image Description Image holds the details of the image to be provisioned. Type object Required url Property Type Description checksum string Checksum is the checksum for the image. checksumType string ChecksumType is the checksum algorithm for the image, e.g md5, sha256 or sha512. The special value "auto" can be used to detect the algorithm from the checksum. If missing, MD5 is used. If in doubt, use "auto". format string DiskFormat contains the format of the image (raw, qcow2, ... ). Needs to be set to raw for raw images streaming. Note live-iso means an iso referenced by the url will be live-booted and not deployed to disk, and in this case the checksum options are not required and if specified will be ignored. url string URL is a location of an image to deploy. 3.1.7. .spec.metaData Description MetaData holds the reference to the Secret containing host metadata (e.g. meta_data.json) which is passed to the Config Drive. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.8. .spec.networkData Description NetworkData holds the reference to the Secret containing network configuration (e.g content of network_data.json) which is passed to the Config Drive. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.9. .spec.raid Description RAID configuration for bare metal server Type object Property Type Description hardwareRAIDVolumes `` The list of logical disks for hardware RAID, if rootDeviceHints isn't used, first volume is root volume. You can set the value of this field to [] to clear all the hardware RAID configurations. softwareRAIDVolumes `` The list of logical disks for software RAID, if rootDeviceHints isn't used, first volume is root volume. If HardwareRAIDVolumes is set this item will be invalid. The number of created Software RAID devices must be 1 or 2. If there is only one Software RAID device, it has to be a RAID-1. If there are two, the first one has to be a RAID-1, while the RAID level for the second one can be 0, 1, or 1+0. As the first RAID device will be the deployment device, enforcing a RAID-1 reduces the risk of ending up with a non-booting node in case of a disk failure. Software RAID will always be deleted. 3.1.10. .spec.rootDeviceHints Description Provide guidance about how to choose the device for the image being provisioned. Type object Property Type Description deviceName string A Linux device name like "/dev/vda", or a by-path link to it like "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". The hint must match the actual value exactly. hctl string A SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. minSizeGigabytes integer The minimum size of the device in Gigabytes. model string A vendor-specific device identifier. The hint can be a substring of the actual value. rotational boolean True if the device should use spinning media, false otherwise. serialNumber string Device serial number. 
The hint must match the actual value exactly. vendor string The name of the vendor or manufacturer of the device. The hint can be a substring of the actual value. wwn string Unique storage identifier. The hint must match the actual value exactly. wwnVendorExtension string Unique vendor storage identifier. The hint must match the actual value exactly. wwnWithExtension string Unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. 3.1.11. .spec.taints Description Taints is the full, authoritative list of taints to apply to the corresponding Machine. This list will overwrite any modifications made to the Machine on an ongoing basis. Type array 3.1.12. .spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required effect key Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. key string Required. The taint key to be applied to a node. timeAdded string TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 3.1.13. .spec.userData Description UserData holds the reference to the Secret containing the user data to be passed to the host before it boots. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.14. .status Description BareMetalHostStatus defines the observed state of BareMetalHost. Type object Required errorCount errorMessage hardwareProfile operationalStatus poweredOn provisioning Property Type Description errorCount integer ErrorCount records how many times the host has encoutered an error since the last successful operation errorMessage string the last error message reported by the provisioning subsystem errorType string ErrorType indicates the type of failure encountered when the OperationalStatus is OperationalStatusError goodCredentials object the last credentials we were able to validate as working hardware object The hardware discovered to exist on the host. hardwareProfile string The name of the profile matching the hardware details. lastUpdated string LastUpdated identifies when this status was last observed. operationHistory object OperationHistory holds information about operations performed on this host. operationalStatus string OperationalStatus holds the status of the host poweredOn boolean indicator for whether or not the host is powered on provisioning object Information tracked by the provisioner. triedCredentials object the last credentials we sent to the provisioning backend 3.1.15. .status.goodCredentials Description the last credentials we were able to validate as working Type object Property Type Description credentials object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace credentialsVersion string 3.1.16. .status.goodCredentials.credentials Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.17. 
.status.hardware Description The hardware discovered to exist on the host. Type object Property Type Description cpu object CPU describes one processor on the host. firmware object Firmware describes the firmware on the host. hostname string nics array nics[] object NIC describes one network interface on the host. ramMebibytes integer storage array storage[] object Storage describes one storage device (disk, SSD, etc.) on the host. systemVendor object HardwareSystemVendor stores details about the whole hardware system. 3.1.18. .status.hardware.cpu Description CPU describes one processor on the host. Type object Property Type Description arch string clockMegahertz number ClockSpeed is a clock speed in MHz count integer flags array (string) model string 3.1.19. .status.hardware.firmware Description Firmware describes the firmware on the host. Type object Property Type Description bios object The BIOS for this firmware 3.1.20. .status.hardware.firmware.bios Description The BIOS for this firmware Type object Property Type Description date string The release/build date for this BIOS vendor string The vendor name for this BIOS version string The version of the BIOS 3.1.21. .status.hardware.nics Description Type array 3.1.22. .status.hardware.nics[] Description NIC describes one network interface on the host. Type object Property Type Description ip string The IP address of the interface. This will be an IPv4 or IPv6 address if one is present. If both IPv4 and IPv6 addresses are present in a dual-stack environment, two nics will be output, one with each IP. mac string The device MAC address model string The vendor and product IDs of the NIC, e.g. "0x8086 0x1572" name string The name of the network interface, e.g. "en0" pxe boolean Whether the NIC is PXE Bootable speedGbps integer The speed of the device in Gigabits per second vlanId integer The untagged VLAN ID vlans array The VLANs available vlans[] object VLAN represents the name and ID of a VLAN. 3.1.23. .status.hardware.nics[].vlans Description The VLANs available Type array 3.1.24. .status.hardware.nics[].vlans[] Description VLAN represents the name and ID of a VLAN. Type object Property Type Description id integer VLANID is a 12-bit 802.1Q VLAN identifier name string 3.1.25. .status.hardware.storage Description Type array 3.1.26. .status.hardware.storage[] Description Storage describes one storage device (disk, SSD, etc.) on the host. Type object Property Type Description alternateNames array (string) A list of alternate Linux device names of the disk, e.g. "/dev/sda". Note that this list is not exhaustive, and names may not be stable across reboots. hctl string The SCSI location of the device model string Hardware model name string A Linux device name of the disk, e.g. "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". This will be a name that is stable across reboots if one is available. rotational boolean Whether this disk represents rotational storage. This field is not recommended for usage, please prefer using 'Type' field instead, this field will be deprecated eventually. serialNumber string The serial number of the device sizeBytes integer The size of the disk in Bytes type string Device type, one of: HDD, SSD, NVME. vendor string The name of the vendor of the device wwn string The WWN of the device wwnVendorExtension string The WWN Vendor extension of the device wwnWithExtension string The WWN with the extension 3.1.27. .status.hardware.systemVendor Description HardwareSystemVendor stores details about the whole hardware system. 
Type object Property Type Description manufacturer string productName string serialNumber string 3.1.28. .status.operationHistory Description OperationHistory holds information about operations performed on this host. Type object Property Type Description deprovision object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. inspect object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. provision object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. register object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. 3.1.29. .status.operationHistory.deprovision Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.30. .status.operationHistory.inspect Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.31. .status.operationHistory.provision Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.32. .status.operationHistory.register Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.33. .status.provisioning Description Information tracked by the provisioner. Type object Required ID state Property Type Description ID string The machine's UUID from the underlying provisioning tool bootMode string BootMode indicates the boot mode used to provision the node customDeploy object Custom deploy procedure applied to the host. firmware object The Bios set by the user image object Image holds the details of the last image successfully provisioned to the host. raid object The Raid set by the user rootDeviceHints object The RootDevicehints set by the user state string An indiciator for what the provisioner is doing with the host. 3.1.34. .status.provisioning.customDeploy Description Custom deploy procedure applied to the host. Type object Required method Property Type Description method string Custom deploy method name. This name is specific to the deploy ramdisk used. If you don't have a custom deploy ramdisk, you shouldn't use CustomDeploy. 3.1.35. .status.provisioning.firmware Description The Bios set by the user Type object Property Type Description simultaneousMultithreadingEnabled boolean Allows a single physical processor core to appear as several logical processors. This supports following options: true, false. sriovEnabled boolean SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. This supports following options: true, false. virtualizationEnabled boolean Supports the virtualization of platform hardware. This supports following options: true, false. 3.1.36. .status.provisioning.image Description Image holds the details of the last image successfully provisioned to the host. Type object Required url Property Type Description checksum string Checksum is the checksum for the image. 
checksumType string ChecksumType is the checksum algorithm for the image, e.g md5, sha256 or sha512. The special value "auto" can be used to detect the algorithm from the checksum. If missing, MD5 is used. If in doubt, use "auto". format string DiskFormat contains the format of the image (raw, qcow2, ... ). Needs to be set to raw for raw images streaming. Note live-iso means an iso referenced by the url will be live-booted and not deployed to disk, and in this case the checksum options are not required and if specified will be ignored. url string URL is a location of an image to deploy. 3.1.37. .status.provisioning.raid Description The Raid set by the user Type object Property Type Description hardwareRAIDVolumes `` The list of logical disks for hardware RAID, if rootDeviceHints isn't used, first volume is root volume. You can set the value of this field to [] to clear all the hardware RAID configurations. softwareRAIDVolumes `` The list of logical disks for software RAID, if rootDeviceHints isn't used, first volume is root volume. If HardwareRAIDVolumes is set this item will be invalid. The number of created Software RAID devices must be 1 or 2. If there is only one Software RAID device, it has to be a RAID-1. If there are two, the first one has to be a RAID-1, while the RAID level for the second one can be 0, 1, or 1+0. As the first RAID device will be the deployment device, enforcing a RAID-1 reduces the risk of ending up with a non-booting node in case of a disk failure. Software RAID will always be deleted. 3.1.38. .status.provisioning.rootDeviceHints Description The RootDevicehints set by the user Type object Property Type Description deviceName string A Linux device name like "/dev/vda", or a by-path link to it like "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". The hint must match the actual value exactly. hctl string A SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. minSizeGigabytes integer The minimum size of the device in Gigabytes. model string A vendor-specific device identifier. The hint can be a substring of the actual value. rotational boolean True if the device should use spinning media, false otherwise. serialNumber string Device serial number. The hint must match the actual value exactly. vendor string The name of the vendor or manufacturer of the device. The hint can be a substring of the actual value. wwn string Unique storage identifier. The hint must match the actual value exactly. wwnVendorExtension string Unique vendor storage identifier. The hint must match the actual value exactly. wwnWithExtension string Unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. 3.1.39. .status.triedCredentials Description the last credentials we sent to the provisioning backend Type object Property Type Description credentials object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace credentialsVersion string 3.1.40. .status.triedCredentials.credentials Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.2. 
API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/baremetalhosts GET : list objects of kind BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts DELETE : delete collection of BareMetalHost GET : list objects of kind BareMetalHost POST : create a BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name} DELETE : delete a BareMetalHost GET : read the specified BareMetalHost PATCH : partially update the specified BareMetalHost PUT : replace the specified BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name}/status GET : read status of the specified BareMetalHost PATCH : partially update status of the specified BareMetalHost PUT : replace status of the specified BareMetalHost 3.2.1. /apis/metal3.io/v1alpha1/baremetalhosts HTTP method GET Description list objects of kind BareMetalHost Table 3.1. HTTP responses HTTP code Reponse body 200 - OK BareMetalHostList schema 401 - Unauthorized Empty 3.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts HTTP method DELETE Description delete collection of BareMetalHost Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind BareMetalHost Table 3.3. HTTP responses HTTP code Reponse body 200 - OK BareMetalHostList schema 401 - Unauthorized Empty HTTP method POST Description create a BareMetalHost Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body BareMetalHost schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 202 - Accepted BareMetalHost schema 401 - Unauthorized Empty 3.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name} Table 3.7. Global path parameters Parameter Type Description name string name of the BareMetalHost HTTP method DELETE Description delete a BareMetalHost Table 3.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BareMetalHost Table 3.10. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BareMetalHost Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BareMetalHost Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.14. Body parameters Parameter Type Description body BareMetalHost schema Table 3.15. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 401 - Unauthorized Empty 3.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name}/status Table 3.16. 
Global path parameters Parameter Type Description name string name of the BareMetalHost HTTP method GET Description read status of the specified BareMetalHost Table 3.17. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified BareMetalHost Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.19. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified BareMetalHost Table 3.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.21. Body parameters Parameter Type Description body BareMetalHost schema Table 3.22. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/provisioning_apis/baremetalhost-metal3-io-v1alpha1 |
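The schema above describes the BareMetalHost resource and its endpoints; a minimal sketch of registering a host through that API follows. Every concrete value (namespace, object names, MAC and BMC addresses, credentials, disk path) is a placeholder chosen for illustration — only the field names come from the spec, and the BMC Secret follows the "username"/"password" key requirement noted in the bmc.credentialsName description.

# Create the BMC credentials Secret and the BareMetalHost in one apply.
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: worker-2-bmc-secret
  namespace: openshift-machine-api
type: Opaque
stringData:
  username: admin
  password: changeme
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-2
  namespace: openshift-machine-api
spec:
  online: true                        # the only required spec field
  bootMACAddress: "00:11:22:33:44:55"
  bootMode: UEFI
  bmc:
    address: redfish://192.0.2.10/redfish/v1/Systems/1
    credentialsName: worker-2-bmc-secret
    disableCertificateVerification: true
  rootDeviceHints:
    deviceName: /dev/sda
EOF
# Read the object back through the namespaced endpoint listed in section 3.2.3.
oc get baremetalhost worker-2 -n openshift-machine-api -o yaml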
Chapter 36. Kernel | Chapter 36. Kernel Some ext4 file systems cannot be resized Due to a bug in the ext4 code, it is currently impossible to resize ext4 file systems that have a 1 kilobyte block size and are smaller than 32 megabytes. Repeated connection loss with iSER-enabled iSCSI targets When using the server as an iSER-enabled iSCSI target, connection losses occur repeatedly, the target can stop responding, and the kernel becomes unresponsive. To work around this issue, minimize iSER connection losses or revert to non-iSER iSCSI mode. Installer does not detect Fibre Channel over Ethernet disks on EDD systems On EDD systems, FCoE disks are not detected automatically by Anaconda because the edd driver is missing. This makes such disks unusable during the installation. To work around this problem, perform the following steps: * Add fcoe=edd:nodcb to the kernel command line during the installation; the FCoE disks will then be detected by Anaconda. * Add fcoe=edd:nodcb to the rescue image and boot the system with it. * Add the edd module to the initrd image by executing the following commands: #dracut --regenerate-all -f #dracut --add-drivers edd /boot/initramfs-3.10.0-123.el7.x86_64.img * Reboot the system with the default boot menu entry. NUMA balancing does not work optimally under certain circumstances The Linux kernel Non-Uniform Memory Access (NUMA) balancing does not work optimally under the following condition in Red Hat Enterprise Linux 7. When the numa_balancing option is set, some of the memory can move to an arbitrary non-destination node before moving to the constrained nodes, and the memory on the destination node also decreases under certain circumstances. There is currently no known workaround available. PSM2 MTL disabled to avoid conflicts between PSM and PSM2 APIs The new libpsm2 package provides the PSM2 API for use with Intel Omni-Path devices, which overlaps with the Performance Scaled Messaging (PSM) API installed by the infinipath-psm package for use with Truescale devices. The API overlap results in undefined behavior when a process links to libraries provided by both packages. This problem affects Open MPI if the set of its enabled MCA modules includes the psm2 Matching Transport Layer (MTL) and one or more modules that directly or indirectly depend on the libpsm_infinipath.so.1 library from the infinipath-psm package. To avoid the PSM and PSM2 API conflict, Open MPI's psm2 MTL has been disabled by default in the /etc/openmpi-*/openmpi-mca-params.conf configuration file. If you enable it, you need to disable the psm and ofi MTLs and the usnic Byte Transfer Layer (BTL) that conflict with it (instructions are also provided in comments in the configuration file). There is also a packaging conflict between the libpsm2-compat-devel and infinipath-psm-devel packages because they both contain PSM header files. Therefore, the two packages cannot be installed at the same time. To install one, uninstall the other. Performance problem of the perf utility The perf archive command, which creates archives with object files with build IDs found in perf.data files, takes a long time to complete on IBM System z. At present, no known workaround exists. Other architectures are not affected. qlcnic fails to be enslaved by bonding Certain bonding modes set a MAC address on the device which the qlcnic driver does not properly recognize. This prevents the device from restoring its original MAC address when it is removed from the bond.
As a workaround, unenslave the qlcnic driver and reboot your operating system. Installation fails on some 64-bit ARM Applied Micro computers Red Hat Enterprise Linux 7.2 fails to install on certain 64-bit ARM systems by Applied Micro with the following error message: Unable to handle kernel NULL pointer dereference at virtual address 0000033f At present, there is no workaround for this problem. libvirt management of VFIO devices can lead to host crashes The libvirt management of host PCI devices, assigned to guests using the VFIO driver, can lead to host kernel drivers and the vfio-pci driver binding simultaneously to devices in the same IOMMU group. This is an invalid state, which can lead to an unexpected host termination. For now, the only workaround is to never hot-unplug a VFIO device from a guest if there are any other devices in the same IOMMU group. Installation using iSCSI and IPv6 hangs for 15 minutes Dracut times out after trying to connect to the specified iSCSI server for 15 minutes if IPv6 is enabled. Eventually, Dracut connects successfully and proceeds as expected; however, to avoid the delay, use ip=eth0:auto6 on the installer's command line. i40e NIC freeze With old firmware, a network card using the i40e driver becomes unusable for about ten seconds after it enters the promiscuous mode. To avoid this problem, update the firmware. i40e is issuing WARN_ON The i40e driver is issuing the WARN_ON macro during ring size changes because the code is cloning the rx_ring struct but not zeroing out the pointers before allocating new memory. Note that this warning is harmless to your system. netprio_cgroups not mounted at boot Currently, systemd mounts the /sys/fs/cgroup/ directory as read-only, which prevents the default mount of the /sys/fs/cgroup/net_prio/ directory. As a consequence, the netprio_cgroups module is not mounted at boot. To work around this problem, use the mount -o remount,rw -t cgroup nodev /sys/fs/cgroups command. This makes it possible to install module-based cgroups manually. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/known-issues-kernel
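The FCoE-on-EDD workaround above is easy to misread because its steps run together; the sketch below simply restates those commands in order. The initramfs file name is the example given in the note — substitute the image that matches your installed kernel — and the netprio_cgroups remount command shown last is a best-effort reading of the prose workaround, not a verbatim quote from the release note.

# 1. Boot the installer (or the rescue image) with the extra kernel parameter:
#      fcoe=edd:nodcb
# 2. From the installed or rescue system, add the edd driver to the initramfs:
dracut --regenerate-all -f
dracut --add-drivers edd /boot/initramfs-3.10.0-123.el7.x86_64.img
# 3. Reboot the system with the default boot menu entry.

# netprio_cgroups workaround (interpretation of the prose above):
mount -o remount,rw -t cgroup nodev /sys/fs/cgroups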
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/proc_providing-feedback-on-red-hat-documentation_configuring-and-managing-virtualization
Argo CD instance | Argo CD instance Red Hat OpenShift GitOps 1.11 Installing and deploying Argo CD instances Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.11/html/argo_cd_instance/index |
Chapter 2. Understanding networking | Chapter 2. Understanding networking Cluster Administrators have several options for exposing applications that run inside a cluster to external traffic and securing network connections: Service types, such as node ports or load balancers API resources, such as Ingress and Route By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can network, but clients outside the cluster do not have networking access. When you expose your application to external traffic, giving each pod its own IP address means that pods can be treated like physical hosts or virtual machines in terms of port allocation, networking, naming, service discovery, load balancing, application configuration, and migration. Note Some cloud platforms offer metadata APIs that listen on the 169.254.169.254 IP address, a link-local IP address in the IPv4 169.254.0.0/16 CIDR block. This CIDR block is not reachable from the pod network. Pods that need access to these IP addresses must be given host network access by setting the spec.hostNetwork field in the pod spec to true . If you allow a pod host network access, you grant the pod privileged access to the underlying network infrastructure. 2.1. OpenShift Container Platform DNS If you are running multiple services, such as front-end and back-end services for use with multiple pods, environment variables are created for user names, service IPs, and more so the front-end pods can communicate with the back-end services. If the service is deleted and recreated, a new IP address can be assigned to the service, and requires the front-end pods to be recreated to pick up the updated values for the service IP environment variable. Additionally, the back-end service must be created before any of the front-end pods to ensure that the service IP is generated properly, and that it can be provided to the front-end pods as an environment variable. For this reason, OpenShift Container Platform has a built-in DNS so that the services can be reached by the service DNS as well as the service IP/port. 2.2. OpenShift Container Platform Ingress Operator When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients. The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources. Configurations within the Ingress Controller, such as the ability to define endpointPublishingStrategy type and internal load balancing, provide ways to publish Ingress Controller endpoints. 2.2.1. Comparing routes and Ingress The Kubernetes Ingress resource in OpenShift Container Platform implements the Ingress Controller with a shared router service that runs as a pod inside the cluster. The most common way to manage Ingress traffic is with the Ingress Controller. You can scale and replicate this pod like any other regular pod. This router service is based on HAProxy , which is an open source load balancer solution. The OpenShift Container Platform route provides Ingress traffic to services in the cluster. 
Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. Ingress traffic accesses services in the cluster through a route. Routes and Ingress are the main resources for handling Ingress traffic. Ingress provides features similar to a route, such as accepting external requests and delegating them based on the route. However, with Ingress you can only allow certain types of connections: HTTP/2, HTTPS and server name identification (SNI), and TLS with certificate. In OpenShift Container Platform, routes are generated to meet the conditions specified by the Ingress resource. 2.3. Glossary of common terms for OpenShift Container Platform networking This glossary defines common terms that are used in the networking content. authentication To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an OpenShift Container Platform cluster, you must authenticate to the OpenShift Container Platform API. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API. AWS Load Balancer Operator The AWS Load Balancer (ALB) Operator deploys and manages an instance of the aws-load-balancer-controller . Cluster Network Operator The Cluster Network Operator (CNO) deploys and manages the cluster network components in an OpenShift Container Platform cluster. This includes deployment of the Container Network Interface (CNI) network plugin selected for the cluster during installation. config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. custom resource (CR) A CR is extension of the Kubernetes API. You can create custom resources. DNS Cluster DNS is a DNS server which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches. DNS Operator The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods. This enables DNS-based Kubernetes Service discovery in OpenShift Container Platform. deployment A Kubernetes resource object that maintains the life cycle of an application. domain Domain is a DNS name serviced by the Ingress Controller. egress The process of data sharing externally through a network's outbound traffic from a pod. External DNS Operator The External DNS Operator deploys and manages ExternalDNS to provide the name resolution for services and routes from the external DNS provider to OpenShift Container Platform. HTTP-based route An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. Ingress The Kubernetes Ingress resource in OpenShift Container Platform implements the Ingress Controller with a shared router service that runs as a pod inside the cluster. Ingress Controller The Ingress Operator manages Ingress Controllers. Using an Ingress Controller is the most common way to allow external access to an OpenShift Container Platform cluster. installer-provisioned infrastructure The installation program deploys and configures the infrastructure that the cluster runs on. 
kubelet A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. Kubernetes NMState Operator The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster's nodes with NMState. kube-proxy Kube-proxy is a proxy service which runs on each node and helps in making services available to the external host. It helps in forwarding the request to correct containers and is capable of performing primitive load balancing. load balancers OpenShift Container Platform uses load balancers for communicating from outside the cluster with services running in the cluster. MetalLB Operator As a cluster administrator, you can add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add an external IP address for the service. multicast With IP multicast, data is broadcast to many IP addresses simultaneously. namespaces A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. networking Network information of a OpenShift Container Platform cluster. node A worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. OpenShift Container Platform Ingress Operator The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform services. pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. PTP Operator The PTP Operator creates and manages the linuxptp services. route The OpenShift Container Platform route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. scaling Increasing or decreasing the resource capacity. service Exposes a running application on a set of pods. Single Root I/O Virtualization (SR-IOV) Network Operator The Single Root I/O Virtualization (SR-IOV) Network Operator manages the SR-IOV network devices and network attachments in your cluster. software-defined networking (SDN) OpenShift Container Platform uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. Note OpenShift SDN CNI is deprecated as of OpenShift Container Platform 4.14. As of OpenShift Container Platform 4.15, the network plugin is not an option for new installations. In a subsequent future release, the OpenShift SDN network plugin is planned to be removed and no longer supported. Red Hat will provide bug fixes and support for this feature until it is removed, but this feature will no longer receive enhancements. As an alternative to OpenShift SDN CNI, you can use OVN Kubernetes CNI instead. For more information, see OpenShift SDN CNI removal . Stream Control Transmission Protocol (SCTP) SCTP is a reliable message based protocol that runs on top of an IP network. taint Taints and tolerations ensure that pods are scheduled onto appropriate nodes. You can apply one or more taints on a node. 
toleration You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints. web console A user interface (UI) to manage OpenShift Container Platform. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/networking/understanding-networking |
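The comparison above notes that Routes support TLS termination modes that plain Ingress may not; a minimal sketch of exposing a service through a Route with edge termination follows. The project and service names, port, and termination choice are placeholders for illustration — only the resource kind and field names reflect the Route API discussed above.

# Expose an existing 'frontend' Service in the 'demo' project through a Route.
cat <<'EOF' | oc apply -f -
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
  namespace: demo
spec:
  to:
    kind: Service
    name: frontend
  port:
    targetPort: 8080
  tls:
    termination: edge     # Routes also support passthrough and re-encrypt termination
EOF
# The Ingress Controller (router) picks the Route up and starts serving it:
oc get route frontend -n demo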
26.4. Changing the Certificate Chain | 26.4. Changing the Certificate Chain You can modify the certificate chain by renewing the CA certificate using the ipa-cacert-manage renew command. Self-signed CA certificate → externally-signed CA certificate: Add the --external-ca option to ipa-cacert-manage renew. This renews the self-signed CA certificate as an externally-signed CA certificate. For details on running the command with this option, see Section 26.2.2, "Renewing CA Certificates Manually". Externally-signed CA certificate → self-signed CA certificate: Add the --self-signed option to ipa-cacert-manage renew. This renews the externally-signed CA certificate as a self-signed CA certificate. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/change-cert-chaining
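A short sketch of the two renewal directions described above, shown as the commands they correspond to; the follow-up work for the external case (submitting the generated CSR to the external CA and completing the renewal) is covered in Section 26.2.2 and is only hinted at in the comments here.

# Self-signed CA certificate -> externally-signed CA certificate:
ipa-cacert-manage renew --external-ca
# ...then have the resulting CSR signed by the external CA and finish the
# renewal as described in Section 26.2.2, "Renewing CA Certificates Manually".

# Externally-signed CA certificate -> self-signed CA certificate:
ipa-cacert-manage renew --self-signed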
Load Balancer Administration | Load Balancer Administration Red Hat Enterprise Linux 7 Configuring Keepalived and HAProxy Steven Levine Red Hat Customer Content Services Stephen Wadeley Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/index
Chapter 6. MachineConfig [machineconfiguration.openshift.io/v1] | Chapter 6. MachineConfig [machineconfiguration.openshift.io/v1] Description MachineConfig defines the configuration for a machine Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object MachineConfigSpec is the spec for MachineConfig 6.1.1. .spec Description MachineConfigSpec is the spec for MachineConfig Type object Property Type Description baseOSExtensionsContainerImage string BaseOSExtensionsContainerImage specifies the remote location that will be used to fetch the extensions container matching a new-format OS image config `` Config is a Ignition Config object. extensions array (string) extensions contains a list of additional features that can be enabled on host fips boolean fips controls FIPS mode kernelArguments `` kernelArguments contains a list of kernel arguments to be added kernelType string kernelType contains which kernel we want to be running like default (traditional), realtime, 64k-pages (aarch64 only). osImageURL string OSImageURL specifies the remote location that will be used to fetch the OS. 6.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/machineconfigs DELETE : delete collection of MachineConfig GET : list objects of kind MachineConfig POST : create a MachineConfig /apis/machineconfiguration.openshift.io/v1/machineconfigs/{name} DELETE : delete a MachineConfig GET : read the specified MachineConfig PATCH : partially update the specified MachineConfig PUT : replace the specified MachineConfig 6.2.1. /apis/machineconfiguration.openshift.io/v1/machineconfigs HTTP method DELETE Description delete collection of MachineConfig Table 6.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineConfig Table 6.2. HTTP responses HTTP code Reponse body 200 - OK MachineConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineConfig Table 6.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.4. Body parameters Parameter Type Description body MachineConfig schema Table 6.5. HTTP responses HTTP code Response body 200 - OK MachineConfig schema 201 - Created MachineConfig schema 202 - Accepted MachineConfig schema 401 - Unauthorized Empty 6.2.2. /apis/machineconfiguration.openshift.io/v1/machineconfigs/{name} Table 6.6. Global path parameters Parameter Type Description name string name of the MachineConfig HTTP method DELETE Description delete a MachineConfig Table 6.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineConfig Table 6.9. HTTP responses HTTP code Response body 200 - OK MachineConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineConfig Table 6.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.11. HTTP responses HTTP code Response body 200 - OK MachineConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineConfig Table 6.12. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.13. Body parameters Parameter Type Description body MachineConfig schema Table 6.14. HTTP responses HTTP code Response body 200 - OK MachineConfig schema 201 - Created MachineConfig schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_apis/machineconfig-machineconfiguration-openshift-io-v1
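A quick practical footnote to the API reference above: the list and read operations are normally driven through the OpenShift CLI rather than raw HTTP calls. A minimal sketch (assuming a logged-in oc session with sufficient privileges; the object name shown is only an example of a typically rendered MachineConfig):
oc get machineconfigs                      # corresponds to GET /apis/machineconfiguration.openshift.io/v1/machineconfigs
oc get machineconfig 99-worker-ssh -o yaml # corresponds to GET .../machineconfigs/{name}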
probe::ipmib.FragOKs | probe::ipmib.FragOKs Name probe::ipmib.FragOKs - Count datagrams fragmented successfully Synopsis ipmib.FragOKs Values skb pointer to the struct sk_buff being acted on op value to be added to the counter (default value of 1) Description The packet pointed to by skb is filtered by the function ipmib_filter_key . If the packet passes the filter, it is counted in the global FragOKs counter (equivalent to SNMP's MIB IPSTATS_MIB_FRAGOKS) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ipmib-fragoks
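As a hedged illustration of how this probe might be used, the following SystemTap one-liner sums the op values it reports (a sketch only; it assumes systemtap and matching kernel debuginfo are installed, and the traffic-generation host is a placeholder):
stap -e 'global c; probe ipmib.FragOKs { c += op } probe end { printf("FragOKs total: %d\n", c) }'
Generate fragmented traffic while it runs, for example with ping -s 2000 <host>, then stop the script with Ctrl+C to print the accumulated count.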
B.38.10. RHBA-2011:1495 - kernel bug fix update | B.38.10. RHBA-2011:1495 - kernel bug fix update Updated kernel packages that fix one bug are now available for Red Hat Enterprise Linux 6 Extended Update Support. The kernel packages contain the Linux kernel, the core of any Linux operating system. Bug Fix BZ# 751081 When a host was in recovery mode and a SCSI scan operation was initiated, the scan operation failed and provided no error output. With this update, the underlying code has been modified, and the SCSI layer now waits for recovery of the host to complete scan operations for devices. All users of kernel are advised to upgrade to these updated packages, which fix this bug. The system must be rebooted for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhba-2011-1495 |
4.89. ibus-table-erbi | 4.89. ibus-table-erbi 4.89.1. RHBA-2011:1274 - ibus-table-erbi bug fix update An updated ibus-table-erbi package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The ibus-table-erbi package provides the Simplified Chinese input method, ErBi. Bug Fixes BZ# 712805 Prior to this update, the ibus-table-erbi spec file contained a redundant line which printed the debug message "/usr/share/ibus-table/tables" at the end of installation. The line indicated the working directory of the post-install script and has been removed to fix the problem. BZ# 729906 Previously, the table index was updated when running the post-install script of the ibus-table-erbi package. This modified the size of the files, the MD5 Message-Digest Algorithm checksum and the access time of database files. As a consequence, the "rpm -V" command failed with false positive warnings of the aforementioned changes due to the changes not matching the values in the package metadata. This has been fixed: files that are expected to be modified when running the post-install script are now specified with correct verify flags in the spec file. All users of ibus-table-erbi are advised to upgrade to this updated package, which resolves these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/ibus-table-erbi
Chapter 67. resource | Chapter 67. resource This chapter describes the commands under the resource command. 67.1. resource member create Shares a resource to another project. Usage: Table 67.1. Positional Arguments Value Summary resource_id Resource id to be shared. resource_type Resource type. member_id Project id to whom the resource is shared to. Table 67.2. Optional Arguments Value Summary -h, --help Show this help message and exit Table 67.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 67.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 67.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 67.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 67.2. resource member delete Delete a resource sharing relationship. Usage: Table 67.7. Positional Arguments Value Summary resource Resource id to be shared. resource_type Resource type. member_id Project id to whom the resource is shared to. Table 67.8. Optional Arguments Value Summary -h, --help Show this help message and exit 67.3. resource member list List all members. Usage: Table 67.9. Positional Arguments Value Summary resource_id Resource id to be shared. resource_type Resource type. Table 67.10. Optional Arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 67.11. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 67.12. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 67.13. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 67.14. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 67.4. resource member show Show specific member information. Usage: Table 67.15. 
Positional Arguments Value Summary resource Resource id to be shared. resource_type Resource type. Table 67.16. Optional Arguments Value Summary -h, --help Show this help message and exit -m MEMBER_ID, --member-id MEMBER_ID Project id to whom the resource is shared to. no need to provide this param if you are the resource member. Table 67.17. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 67.18. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 67.19. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 67.20. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 67.5. resource member update Update resource sharing status. Usage: Table 67.21. Positional Arguments Value Summary resource_id Resource id to be shared. resource_type Resource type. Table 67.22. Optional Arguments Value Summary -h, --help Show this help message and exit -m MEMBER_ID, --member-id MEMBER_ID Project id to whom the resource is shared to. no need to provide this param if you are the resource member. -s {pending,accepted,rejected}, --status {pending,accepted,rejected} Status of the sharing. Table 67.23. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 67.24. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 67.25. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 67.26. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack resource member create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] resource_id resource_type member_id",
"openstack resource member delete [-h] resource resource_type member_id",
"openstack resource member list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] resource_id resource_type",
"openstack resource member show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-m MEMBER_ID] resource resource_type",
"openstack resource member update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-m MEMBER_ID] [-s {pending,accepted,rejected}] resource_id resource_type"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/resource |
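To make the syntax above concrete, a hypothetical invocation sequence might look like the following (the UUIDs and the workflow resource type are placeholders chosen for illustration, not real objects):
openstack resource member create 1a2b3c4d-0000-0000-0000-000000000001 workflow 9f8e7d6c-0000-0000-0000-000000000002   # share a workflow with another project
openstack resource member list 1a2b3c4d-0000-0000-0000-000000000001 workflow                                          # list whom it is shared with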
function::pgrp | function::pgrp Name function::pgrp - Returns the process group ID of the current process Synopsis Arguments None Description This function returns the process group ID of the current process. | [
"pgrp:long()"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-pgrp |
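A small, hedged usage sketch for this function (assuming systemtap and the matching kernel debuginfo are available): the one-liner below prints the process group of the first process observed making a write system call and then exits.
stap -e 'probe syscall.write { printf("%s (pid %d) is in process group %d\n", execname(), pid(), pgrp()); exit() }'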
Chapter 4. Container images with Rust Toolset on RHEL 8 | Chapter 4. Container images with Rust Toolset on RHEL 8 On RHEL 8, you can build your own Rust Toolset container images on top of Red Hat Universal Base Images (UBI) containers using Containerfiles. 4.1. Creating a container image of Rust Toolset on RHEL 8 On RHEL 8, Rust Toolset packages are part of the Red Hat Universal Base Images (UBIs) repositories. To keep the container size small, install only individual packages instead of the entire Rust Toolset. Prerequisites An existing Containerfile. For more information on creating Containerfiles, see the Dockerfile reference page. Procedure Visit the Red Hat Container Catalog . Select a UBI. Click Get this image and follow the instructions. To create a container containing Rust Toolset, add the following lines to your Containerfile: To create a container image containing an individual package only, add the following lines to your Containerfile: Replace < package_name > with the name of the package you want to install. 4.2. Additional resources For more information on Red Hat UBI images, see Working with Container Images . For more information on Red Hat UBI repositories, see Universal Base Images (UBI): Images, repositories, packages, and source code . | [
"FROM registry.access.redhat.com/ubi8/ubi: latest RUN yum install -y rust-toolset",
"RUN yum install < package-name >"
] | https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_rust_1.75.0_toolset/assembly_container-images-with-comp-toolset |
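Once a Containerfile such as the ones above exists, a typical build-and-verify sequence might be the following (a sketch; the image tag is an arbitrary example):
podman build -t my-rust-toolset .                     # build the image from the Containerfile in the current directory
podman run --rm -it my-rust-toolset cargo --version   # confirm the Rust tooling is available inside the container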
Overview, concepts, and deployment considerations | Overview, concepts, and deployment considerations Red Hat Satellite 6.15 Explore the Satellite architecture and plan Satellite deployment Red Hat Satellite Documentation Team [email protected] | [
"Global > Organization > Location > Domain > Host group > Host",
"satellite-maintain packages update grub2-efi satellite-installer",
"satellite-maintain packages update grub2-efi satellite-installer"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html-single/overview_concepts_and_deployment_considerations/index |
B.2. Using RPM | B.2. Using RPM RPM has five basic modes of operation (not counting package building): installing, uninstalling, upgrading, querying, and verifying. This section contains an overview of each mode. For complete details and options, try rpm --help or man rpm . You can also see Section B.5, "Additional Resources" for more information on RPM. B.2.1. Finding RPM Packages Before using any RPM packages, you must know where to find them. An Internet search returns many RPM repositories, but if you are looking for Red Hat RPM packages, they can be found at the following locations: The Red Hat Enterprise Linux installation media contain many installable RPMs. The initial RPM repositories provided with the YUM package manager. See Chapter 8, Yum for details on how to use the official Red Hat Enterprise Linux package repositories. The Extra Packages for Enterprise Linux (EPEL) is a community effort to provide high-quality add-on packages for Red Hat Enterprise Linux. See http://fedoraproject.org/wiki/EPEL for details on EPEL RPM packages. Unofficial, third-party repositories not affiliated with Red Hat also provide RPM packages. Important When considering third-party repositories for use with your Red Hat Enterprise Linux system, pay close attention to the repository's web site with regard to package compatibility before adding the repository as a package source. Alternate package repositories may offer different, incompatible versions of the same software, including packages already included in the Red Hat Enterprise Linux repositories. The Red Hat Errata Page, available at http://www.redhat.com/apps/support/errata/ . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-rpm-using |
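A few representative invocations of the modes mentioned above, shown as a sketch (the package and file names are ordinary examples):
rpm -qi bash            # query mode: show information about an installed package
rpm -qf /etc/yum.conf   # query mode: find which package owns a file
rpm -V bash             # verify mode: compare installed files against the RPM database
yum install httpd       # installing or upgrading is usually done through yum so that dependencies are resolved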
Observability overview | Observability overview OpenShift Container Platform 4.18 Contains information about CI/CD for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/observability_overview/index |
8.12. Network & Hostname | 8.12. Network & Hostname To configure essential networking features for your system, select Network & Hostname at the Installation Summary screen. Important When the installation finishes and the system boots for the first time, any network interfaces which you configured during the installation will be activated. However, the installation does not prompt you to configure network interfaces on some common installation paths - for example, when you install Red Hat Enterprise Linux from a DVD to a local hard drive. When you install Red Hat Enterprise Linux from a local installation source to a local storage device, be sure to configure at least one network interface manually if you require network access when the system boots for the first time. You will also need to set the connection to connect automatically after boot when editing the configuration. Locally accessible interfaces are automatically detected by the installation program and cannot be manually added or deleted. The detected interfaces are listed in the left pane. Click an interface in the list to display more details about in on the right. To activate or deactivate a network interface, move the switch in the top right corner of the screen to either ON or OFF . Note There are several types of network device naming standards used to identify network devices with persistent names such as em1 or wl3sp0 . For information about these standards, see the Red Hat Enterprise Linux 7 Networking Guide . Figure 8.10. Network & Hostname Configuration Screen Below the list of connections, enter a host name for this computer in the Hostname input field. The host name can be either a fully-qualified domain name (FQDN) in the format hostname . domainname or a short host name in the format hostname . Many networks have a Dynamic Host Configuration Protocol (DHCP) service that automatically supplies connected systems with a domain name. To allow the DHCP service to assign the domain name to this machine, only specify the short host name. The value localhost.localdomain means that no specific static host name for target system is configured, and the actual host name of installed system will be configured during process of network configuration (for example, by NetworkManager using DHCP or DNS). Important If you want to manually assign the host name, make sure you do not use a domain name that is not delegated to you, as this can result in network resources becoming unavailable. For more information, see the recommended naming practices in the Red Hat Enterprise Linux 7 Networking Guide . Note You can use the Network section of the system Settings dialog to change your network configuration after you have completed the installation. Once you have finished network configuration, click Done to return to the Installation Summary screen. 8.12.1. Edit Network Connections This section only details the most important settings for a typical wired connection used during installation. Many of the available options do not have to be changed in most installation scenarios and are not carried over to the installed system. Configuration of other types of network is broadly similar, although the specific configuration parameters are necessarily different. To learn more about network configuration after installation, see the Red Hat Enterprise Linux 7 Networking Guide . To configure a network connection manually, click the Configure button in the lower right corner of the screen. 
A dialog appears that allows you to configure the selected connection. The configuration options presented depend on whether the connection is wired, wireless, mobile broadband, VPN, or DSL. If required, see the Networking Guide for more detailed information on network settings. The most useful network configuration options to consider during installation are: Mark the Automatically connect to this network when it is available check box if you want to use the connection every time the system boots. You can use more than one connection that will connect automatically. This setting will carry over to the installed system. Figure 8.11. Network Auto-Connection Feature By default, IPv4 parameters are configured automatically by the DHCP service on the network. At the same time, the IPv6 configuration is set to the Automatic method. This combination is suitable for most installation scenarios and usually does not require any changes. Figure 8.12. IP Protocol Settings When you have finished editing network settings, click Save to save the new configuration. If you reconfigured a device that was already active during installation, you must restart the device in order to use the new configuration in the installation environment. Use the ON/OFF switch on the Network & Host Name screen to restart the device. 8.12.2. Advanced Network Interfaces Advanced network interfaces are also available for installation. This includes virtual local area networks ( VLAN s) and three methods to use aggregated links. A detailed description of these interfaces is beyond the scope of this document; read the Red Hat Enterprise Linux 7 Networking Guide for more information. To create an advanced network interface, click the + button in the lower left corner of the Network & Hostname screen. Figure 8.13. Network & Hostname Configuration Screen A dialog appears with a drop-down menu with the following options: Bond - represents NIC ( Network Interface Controller ) Bonding, a method to bind multiple network interfaces together into a single, bonded, channel. Bridge - represents NIC Bridging, a method to connect multiple separate networks into one aggregate network. Team - represents NIC Teaming, a new implementation to aggregate links, designed to provide a small kernel driver to implement the fast handling of packet flows, and various applications to do everything else in user space. VLAN - represents a method to create multiple distinct broadcast domains, which are mutually isolated. Figure 8.14. Advanced Network Interface Dialog Note Note that locally accessible interfaces, wired or wireless, are automatically detected by the installation program and cannot be manually added or deleted by using these controls. Once you have selected an option and clicked the Add button, another dialog appears for you to configure the new interface. See the respective chapters in the Red Hat Enterprise Linux 7 Networking Guide for detailed instructions. To edit the configuration of an existing advanced interface, click the Configure button in the lower right corner of the screen. You can also remove a manually-added interface by clicking the - button. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-network-hostname-configuration-x86
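For reference, the equivalent changes can also be made from the command line after installation. A brief sketch (the host name and connection name are examples, and it assumes the connection is named after its device):
hostnamectl set-hostname server1.example.com    # set a persistent static host name
nmcli connection modify enp1s0 connection.autoconnect yes
nmcli connection modify enp1s0 ipv4.method auto
nmcli connection up enp1s0                      # re-activate the connection so the changes take effect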
Getting started with Cryostat | Getting started with Cryostat Red Hat build of Cryostat 2 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/getting_started_with_cryostat/index |
Chapter 20. AWS Load Balancer Operator | Chapter 20. AWS Load Balancer Operator 20.1. AWS Load Balancer Operator release notes The AWS Load Balancer (ALB) Operator deploys and manages an instance of the AWSLoadBalancerController resource. Important The AWS Load Balancer (ALB) Operator is only supported on the x86_64 architecture. These release notes track the development of the AWS Load Balancer Operator in OpenShift Container Platform. For an overview of the AWS Load Balancer Operator, see AWS Load Balancer Operator in OpenShift Container Platform . Note AWS Load Balancer Operator currently does not support AWS GovCloud. 20.1.1. AWS Load Balancer Operator 1.0.0 The AWS Load Balancer Operator is now generally available with this release. The AWS Load Balancer Operator version 1.0.0 supports the AWS Load Balancer Controller version 2.4.4. The following advisory is available for the AWS Load Balancer Operator version 1.0.0: RHEA-2023:1954 Release of AWS Load Balancer Operator on OperatorHub Enhancement Advisory Update 20.1.1.1. Notable changes This release uses the new v1 API version. 20.1.1.2. Bug fixes Previously, the controller provisioned by the AWS Load Balancer Operator did not properly use the configuration for the cluster-wide proxy. These settings are now applied appropriately to the controller. ( OCPBUGS-4052 , OCPBUGS-5295 ) 20.1.2. Earlier versions The two earliest versions of the AWS Load Balancer Operator are available as a Technology Preview. These versions should not be used in a production cluster. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following advisory is available for the AWS Load Balancer Operator version 0.2.0: RHEA-2022:9084 Release of AWS Load Balancer Operator on OperatorHub Enhancement Advisory Update The following advisory is available for the AWS Load Balancer Operator version 0.0.1: RHEA-2022:5780 Release of AWS Load Balancer Operator on OperatorHub Enhancement Advisory Update 20.2. AWS Load Balancer Operator in OpenShift Container Platform The AWS Load Balancer Operator deploys and manages the AWS Load Balancer Controller. You can install the AWS Load Balancer Operator from OperatorHub by using OpenShift Container Platform web console or CLI. 20.2.1. AWS Load Balancer Operator considerations Review the following limitations before installing and using the AWS Load Balancer Operator: The IP traffic mode only works on AWS Elastic Kubernetes Service (EKS). The AWS Load Balancer Operator disables the IP traffic mode for the AWS Load Balancer Controller. As a result of disabling the IP traffic mode, the AWS Load Balancer Controller cannot use the pod readiness gate. The AWS Load Balancer Operator adds command-line flags such as --disable-ingress-class-annotation and --disable-ingress-group-name-annotation to the AWS Load Balancer Controller. Therefore, the AWS Load Balancer Operator does not allow using the kubernetes.io/ingress.class and alb.ingress.kubernetes.io/group.name annotations in the Ingress resource. You have configured the AWS Load Balancer Operator so that the SVC type is NodePort (not LoadBalancer or ClusterIP ). 20.2.2. AWS Load Balancer Operator The AWS Load Balancer Operator can tag the public subnets if the kubernetes.io/role/elb tag is missing. Also, the AWS Load Balancer Operator detects the following information from the underlying AWS cloud: The ID of the virtual private cloud (VPC) on which the cluster hosting the Operator is deployed in. 
Public and private subnets of the discovered VPC. The AWS Load Balancer Operator supports the Kubernetes service resource of type LoadBalancer by using Network Load Balancer (NLB) with the instance target type only. Prerequisites You must have the AWS credentials secret. The credentials are used to provide subnet tagging and VPC discovery. Procedure You can deploy the AWS Load Balancer Operator on demand from OperatorHub, by creating a Subscription object by running the following command: USD oc -n aws-load-balancer-operator get sub aws-load-balancer-operator --template='{{.status.installplan.name}}{{"\n"}}' Example output install-zlfbt Check if the status of an install plan is Complete by running the following command: USD oc -n aws-load-balancer-operator get ip <install_plan_name> --template='{{.status.phase}}{{"\n"}}' Example output Complete View the status of the aws-load-balancer-operator-controller-manager deployment by running the following command: USD oc get -n aws-load-balancer-operator deployment/aws-load-balancer-operator-controller-manager Example output NAME READY UP-TO-DATE AVAILABLE AGE aws-load-balancer-operator-controller-manager 1/1 1 1 23h 20.2.3. AWS Load Balancer Operator logs You can view the AWS Load Balancer Operator logs by using the oc logs command. Procedure View the logs of the AWS Load Balancer Operator by running the following command: USD oc logs -n aws-load-balancer-operator deployment/aws-load-balancer-operator-controller-manager -c manager 20.3. Installing the AWS Load Balancer Operator The AWS Load Balancer Operator deploys and manages the AWS Load Balancer Controller. You can install the AWS Load Balancer Operator from the OperatorHub by using OpenShift Container Platform web console or CLI. 20.3.1. Installing the AWS Load Balancer Operator by using the web console You can install the AWS Load Balancer Operator by using the web console. Prerequisites You have logged in to the OpenShift Container Platform web console as a user with cluster-admin permissions. Your cluster is configured with AWS as the platform type and cloud provider. If you are using a security token service (STS) or user-provisioned infrastructure, follow the related preparation steps. For example, if you are using AWS Security Token Service, see "Preparing for the AWS Load Balancer Operator on a cluster using the AWS Security Token Service (STS)". Procedure Navigate to Operators OperatorHub in the OpenShift Container Platform web console. Select the AWS Load Balancer Operator . You can use the Filter by keyword text box or use the filter list to search for the AWS Load Balancer Operator from the list of Operators. Select the aws-load-balancer-operator namespace. On the Install Operator page, select the following options: Update the channel as stable-v1 . Installation mode as All namespaces on the cluster (default) . Installed Namespace as aws-load-balancer-operator . If the aws-load-balancer-operator namespace does not exist, it gets created during the Operator installation. Select Update approval as Automatic or Manual . By default, the Update approval is set to Automatic . If you select automatic updates, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select manual updates, the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator updated to the new version. Click Install . 
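If you also want to confirm the result from a terminal, a rough CLI counterpart of the console check in the Verification step below is the following (a sketch; the exact ClusterServiceVersion name varies by Operator version):
oc get csv -n aws-load-balancer-operator    # the CSV phase should report Succeeded
oc get pods -n aws-load-balancer-operator   # the aws-load-balancer-operator-controller-manager pod should be Running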
Verification Verify that the AWS Load Balancer Operator shows the Status as Succeeded on the Installed Operators dashboard. 20.3.2. Installing the AWS Load Balancer Operator by using the CLI You can install the AWS Load Balancer Operator by using the CLI. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with cluster-admin permissions. Your cluster is configured with AWS as the platform type and cloud provider. You are logged into the OpenShift CLI ( oc ). Procedure Create a Namespace object: Create a YAML file that defines the Namespace object: Example namespace.yaml file apiVersion: v1 kind: Namespace metadata: name: aws-load-balancer-operator Create the Namespace object by running the following command: USD oc apply -f namespace.yaml Create a CredentialsRequest object: Create a YAML file that defines the CredentialsRequest object: Example credentialsrequest.yaml file apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-load-balancer-operator namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - ec2:DescribeSubnets effect: Allow resource: "*" - action: - ec2:CreateTags - ec2:DeleteTags effect: Allow resource: arn:aws:ec2:*:*:subnet/* - action: - ec2:DescribeVpcs effect: Allow resource: "*" secretRef: name: aws-load-balancer-operator namespace: aws-load-balancer-operator serviceAccountNames: - aws-load-balancer-operator-controller-manager Create the CredentialsRequest object by running the following command: USD oc apply -f credentialsrequest.yaml Create an OperatorGroup object: Create a YAML file that defines the OperatorGroup object: Example operatorgroup.yaml file apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: aws-lb-operatorgroup namespace: aws-load-balancer-operator spec: upgradeStrategy: Default Create the OperatorGroup object by running the following command: USD oc apply -f operatorgroup.yaml Create a Subscription object: Create a YAML file that defines the Subscription object: Example subscription.yaml file apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: channel: stable-v1 installPlanApproval: Automatic name: aws-load-balancer-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription object by running the following command: USD oc apply -f subscription.yaml Verification Get the name of the install plan from the subscription: USD oc -n aws-load-balancer-operator \ get subscription aws-load-balancer-operator \ --template='{{.status.installplan.name}}{{"\n"}}' Check the status of the install plan: USD oc -n aws-load-balancer-operator \ get ip <install_plan_name> \ --template='{{.status.phase}}{{"\n"}}' The output must be Complete . 20.4. Preparing for the AWS Load Balancer Operator on a cluster using the AWS Security Token Service You can install the AWS Load Balancer Operator on a cluster that uses STS. Follow these steps to prepare your cluster before installing the Operator. The AWS Load Balancer Operator relies on the CredentialsRequest object to bootstrap the Operator and the AWS Load Balancer Controller. The AWS Load Balancer Operator waits until the required secrets are created and available. The Cloud Credential Operator does not provision the secrets automatically in the STS cluster. 
You must set the credentials secrets manually by using the ccoctl binary. If you do not want to provision credential secret by using the Cloud Credential Operator, you can configure the AWSLoadBalancerController instance on the STS cluster by specifying the credential secret in the AWS load Balancer Controller custom resource (CR). 20.4.1. Bootstrapping AWS Load Balancer Operator on Security Token Service cluster Prerequisites You must extract and prepare the ccoctl binary. Procedure Create the aws-load-balancer-operator namespace by running the following command: USD oc create namespace aws-load-balancer-operator Download the CredentialsRequest custom resource (CR) of the AWS Load Balancer Operator, and create a directory to store it by running the following command: USD curl --create-dirs -o <path-to-credrequests-dir>/cr.yaml https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/main/hack/operator-credentials-request.yaml Use the ccoctl tool to process CredentialsRequest objects of the AWS Load Balancer Operator, by running the following command: USD ccoctl aws create-iam-roles \ --name <name> --region=<aws_region> \ --credentials-requests-dir=<path-to-credrequests-dir> \ --identity-provider-arn <oidc-arn> Apply the secrets generated in the manifests directory of your cluster by running the following command: USD ls manifests/*-credentials.yaml | xargs -I{} oc apply -f {} Verify that the credentials secret of the AWS Load Balancer Operator is created by running the following command: USD oc -n aws-load-balancer-operator get secret aws-load-balancer-operator --template='{{index .data "credentials"}}' | base64 -d Example output [default] sts_regional_endpoints = regional role_arn = arn:aws:iam::999999999999:role/aws-load-balancer-operator-aws-load-balancer-operator web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token 20.4.2. Configuring AWS Load Balancer Operator on Security Token Service cluster by using managed CredentialsRequest objects Prerequisites You must extract and prepare the ccoctl binary. Procedure The AWS Load Balancer Operator creates the CredentialsRequest object in the openshift-cloud-credential-operator namespace for each AWSLoadBalancerController custom resource (CR). You can extract and save the created CredentialsRequest object in a directory by running the following command: USD oc get credentialsrequest -n openshift-cloud-credential-operator \ aws-load-balancer-controller-<cr-name> -o yaml > <path-to-credrequests-dir>/cr.yaml 1 1 The aws-load-balancer-controller-<cr-name> parameter specifies the credential request name created by the AWS Load Balancer Operator. The cr-name specifies the name of the AWS Load Balancer Controller instance. Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory by running the following command: USD ccoctl aws create-iam-roles \ --name <name> --region=<aws_region> \ --credentials-requests-dir=<path-to-credrequests-dir> \ --identity-provider-arn <oidc-arn> Apply the secrets generated in manifests directory to your cluster, by running the following command: USD ls manifests/*-credentials.yaml | xargs -I{} oc apply -f {} Verify that the aws-load-balancer-controller pod is created: USD oc -n aws-load-balancer-operator get pods NAME READY STATUS RESTARTS AGE aws-load-balancer-controller-cluster-9b766d6-gg82c 1/1 Running 0 137m aws-load-balancer-operator-controller-manager-b55ff68cc-85jzg 2/2 Running 0 3h26m 20.4.3. 
Configuring the AWS Load Balancer Operator on Security Token Service cluster by using specific credentials You can specify the credential secret by using the spec.credentials field in the AWS Load Balancer Controller custom resource (CR). You can use the predefined CredentialsRequest object of the controller to know which roles are required. Prerequisites You must extract and prepare the ccoctl binary. Procedure Download the CredentialsRequest custom resource (CR) of the AWS Load Balancer Controller, and create a directory to store it by running the following command: USD curl --create-dirs -o <path-to-credrequests-dir>/cr.yaml https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/main/hack/controller/controller-credentials-request.yaml Use the ccoctl tool to process the CredentialsRequest object of the controller: USD ccoctl aws create-iam-roles \ --name <name> --region=<aws_region> \ --credentials-requests-dir=<path-to-credrequests-dir> \ --identity-provider-arn <oidc-arn> Apply the secrets to your cluster: USD ls manifests/*-credentials.yaml | xargs -I{} oc apply -f {} Verify the credentials secret has been created for use by the controller: USD oc -n aws-load-balancer-operator get secret aws-load-balancer-controller-manual-cluster --template='{{index .data "credentials"}}' | base64 -d Example output Create the AWSLoadBalancerController resource YAML file, for example, sample-aws-lb-manual-creds.yaml , as follows: apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController 1 metadata: name: cluster 2 spec: credentials: name: <secret-name> 3 1 Defines the AWSLoadBalancerController resource. 2 Defines the AWS Load Balancer Controller instance name. This instance name gets added as a suffix to all related resources. 3 Specifies the secret name containing AWS credentials that the controller uses. 20.4.4. Additional resources Configuring the Cloud Credential Operator utility 20.5. Creating an instance of the AWS Load Balancer Controller After installing the AWS Load Balancer Operator, you can create the AWS Load Balancer Controller. 20.5.1. Creating the AWS Load Balancer Controller You can install only a single instance of the AWSLoadBalancerController object in a cluster. You can create the AWS Load Balancer Controller by using CLI. The AWS Load Balancer Operator reconciles only the cluster named resource. Prerequisites You have created the echoserver namespace. You have access to the OpenShift CLI ( oc ). Procedure Create a YAML file that defines the AWSLoadBalancerController object: Example sample-aws-lb.yaml file apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController 1 metadata: name: cluster 2 spec: subnetTagging: Auto 3 additionalResourceTags: 4 - key: example.org/security-scope value: staging ingressClass: alb 5 config: replicas: 2 6 enabledAddons: 7 - AWSWAFv2 8 1 Defines the AWSLoadBalancerController object. 2 Defines the AWS Load Balancer Controller name. This instance name gets added as a suffix to all related resources. 3 Configures the subnet tagging method for the AWS Load Balancer Controller. The following values are valid: Auto : The AWS Load Balancer Operator determines the subnets that belong to the cluster and tags them appropriately. The Operator cannot determine the role correctly if the internal subnet tags are not present on internal subnet. Manual : You manually tag the subnets that belong to the cluster with the appropriate role tags. Use this option if you installed your cluster on user-provided infrastructure. 
4 Defines the tags used by the AWS Load Balancer Controller when it provisions AWS resources. 5 Defines the ingress class name. The default value is alb . 6 Specifies the number of replicas of the AWS Load Balancer Controller. 7 Specifies annotations as an add-on for the AWS Load Balancer Controller. 8 Enables the alb.ingress.kubernetes.io/wafv2-acl-arn annotation. Create the AWSLoadBalancerController object by running the following command: USD oc create -f sample-aws-lb.yaml Create a YAML file that defines the Deployment resource: Example sample-aws-lb.yaml file apiVersion: apps/v1 kind: Deployment 1 metadata: name: <echoserver> 2 namespace: echoserver spec: selector: matchLabels: app: echoserver replicas: 3 3 template: metadata: labels: app: echoserver spec: containers: - image: openshift/origin-node command: - "/bin/socat" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \"printf \\\"HTTP/1.0 200 OK\r\n\r\n\\\"; sed -e \\\"/^\r/q\\\"\"' imagePullPolicy: Always name: echoserver ports: - containerPort: 8080 1 Defines the deployment resource. 2 Specifies the deployment name. 3 Specifies the number of replicas of the deployment. Create a YAML file that defines the Service resource: Example service-albo.yaml file: apiVersion: v1 kind: Service 1 metadata: name: <echoserver> 2 namespace: echoserver spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: app: echoserver 1 Defines the service resource. 2 Specifies the service name. Create a YAML file that defines the Ingress resource: Example ingress-albo.yaml file: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: <name> 1 namespace: echoserver annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: instance spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Exact backend: service: name: <echoserver> 2 port: number: 80 1 Specify a name for the Ingress resource. 2 Specifies the service name. Verification Save the status of the Ingress resource in the HOST variable by running the following command: USD HOST=USD(oc get ingress -n echoserver echoserver --template='{{(index .status.loadBalancer.ingress 0).hostname}}') Verify the status of the Ingress resource by running the following command: USD curl USDHOST 20.6. Serving multiple ingress resources through a single AWS Load Balancer You can route the traffic to different services that are part of a single domain through a single AWS Load Balancer. Each Ingress resource provides different endpoints of the domain. 20.6.1. Creating multiple ingress resources through a single AWS Load Balancer You can route the traffic to multiple ingress resources through a single AWS Load Balancer by using the CLI. Prerequisites You have an access to the OpenShift CLI ( oc ). Procedure Create an IngressClassParams resource YAML file, for example, sample-single-lb-params.yaml , as follows: apiVersion: elbv2.k8s.aws/v1beta1 1 kind: IngressClassParams metadata: name: single-lb-params 2 spec: group: name: single-lb 3 1 Defines the API group and version of the IngressClassParams resource. 2 Specifies the IngressClassParams resource name. 3 Specifies the IngressGroup resource name. All of the Ingress resources of this class belong to this IngressGroup . 
Create the IngressClassParams resource by running the following command: USD oc create -f sample-single-lb-params.yaml Create the IngressClass resource YAML file, for example, sample-single-lb-class.yaml , as follows: apiVersion: networking.k8s.io/v1 1 kind: IngressClass metadata: name: single-lb 2 spec: controller: ingress.k8s.aws/alb 3 parameters: apiGroup: elbv2.k8s.aws 4 kind: IngressClassParams 5 name: single-lb-params 6 1 Defines the API group and version of the IngressClass resource. 2 Specifies the ingress class name. 3 Defines the controller name. The ingress.k8s.aws/alb value denotes that all ingress resources of this class should be managed by the AWS Load Balancer Controller. 4 Defines the API group of the IngressClassParams resource. 5 Defines the resource type of the IngressClassParams resource. 6 Defines the IngressClassParams resource name. Create the IngressClass resource by running the following command: USD oc create -f sample-single-lb-class.yaml Create the AWSLoadBalancerController resource YAML file, for example, sample-single-lb.yaml , as follows: apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController metadata: name: cluster spec: subnetTagging: Auto ingressClass: single-lb 1 1 Defines the name of the IngressClass resource. Create the AWSLoadBalancerController resource by running the following command: USD oc create -f sample-single-lb.yaml Create the Ingress resource YAML file, for example, sample-multiple-ingress.yaml , as follows: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-1 1 annotations: alb.ingress.kubernetes.io/scheme: internet-facing 2 alb.ingress.kubernetes.io/group.order: "1" 3 alb.ingress.kubernetes.io/target-type: instance 4 spec: ingressClassName: single-lb 5 rules: - host: example.com 6 http: paths: - path: /blog 7 pathType: Prefix backend: service: name: example-1 8 port: number: 80 9 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-2 annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/group.order: "2" alb.ingress.kubernetes.io/target-type: instance spec: ingressClassName: single-lb rules: - host: example.com http: paths: - path: /store pathType: Prefix backend: service: name: example-2 port: number: 80 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-3 annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/group.order: "3" alb.ingress.kubernetes.io/target-type: instance spec: ingressClassName: single-lb rules: - host: example.com http: paths: - path: / pathType: Prefix backend: service: name: example-3 port: number: 80 1 Specifies the ingress name. 2 Indicates the load balancer to provision in the public subnet to access the internet. 3 Specifies the order in which the rules from the multiple ingress resources are matched when the request is received at the load balancer. 4 Indicates that the load balancer will target OpenShift Container Platform nodes to reach the service. 5 Specifies the ingress class that belongs to this ingress. 6 Defines a domain name used for request routing. 7 Defines the path that must route to the service. 8 Defines the service name that serves the endpoint configured in the Ingress resource. 9 Defines the port on the service that serves the endpoint. Create the Ingress resource by running the following command: USD oc create -f sample-multiple-ingress.yaml 20.7. Adding TLS termination You can add TLS termination on the AWS Load Balancer. 20.7.1. 
Adding TLS termination on the AWS Load Balancer You can route the traffic for the domain to pods of a service and add TLS termination on the AWS Load Balancer. Prerequisites You have an access to the OpenShift CLI ( oc ). Procedure Create a YAML file that defines the AWSLoadBalancerController resource: Example add-tls-termination-albc.yaml file apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController metadata: name: cluster spec: subnetTagging: Auto ingressClass: tls-termination 1 1 Defines the ingress class name. If the ingress class is not present in your cluster the AWS Load Balancer Controller creates one. The AWS Load Balancer Controller reconciles the additional ingress class values if spec.controller is set to ingress.k8s.aws/alb . Create a YAML file that defines the Ingress resource: Example add-tls-termination-ingress.yaml file apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: <example> 1 annotations: alb.ingress.kubernetes.io/scheme: internet-facing 2 alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx 3 spec: ingressClassName: tls-termination 4 rules: - host: <example.com> 5 http: paths: - path: / pathType: Exact backend: service: name: <example-service> 6 port: number: 80 1 Specifies the ingress name. 2 The controller provisions the load balancer for ingress in a public subnet to access the load balancer over the internet. 3 The Amazon Resource Name (ARN) of the certificate that you attach to the load balancer. 4 Defines the ingress class name. 5 Defines the domain for traffic routing. 6 Defines the service for traffic routing. 20.8. Configuring cluster-wide proxy You can configure the cluster-wide proxy in the AWS Load Balancer Operator. After configuring the cluster-wide proxy, Operator Lifecycle Manager (OLM) automatically updates all the deployments of the Operators with the environment variables such as HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . These variables are populated to the managed controller by the AWS Load Balancer Operator. 20.8.1. Trusting the certificate authority of the cluster-wide proxy Create the config map to contain the certificate authority (CA) bundle in the aws-load-balancer-operator namespace by running the following command: USD oc -n aws-load-balancer-operator create configmap trusted-ca To inject the trusted CA bundle into the config map, add the config.openshift.io/inject-trusted-cabundle=true label to the config map by running the following command: USD oc -n aws-load-balancer-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true Update the AWS Load Balancer Operator subscription to access the config map in the AWS Load Balancer Operator deployment by running the following command: USD oc -n aws-load-balancer-operator patch subscription aws-load-balancer-operator --type='merge' -p '{"spec":{"config":{"env":[{"name":"TRUSTED_CA_CONFIGMAP_NAME","value":"trusted-ca"}],"volumes":[{"name":"trusted-ca","configMap":{"name":"trusted-ca"}}],"volumeMounts":[{"name":"trusted-ca","mountPath":"/etc/pki/tls/certs/albo-tls-ca-bundle.crt","subPath":"ca-bundle.crt"}]}}}' After the AWS Load Balancer Operator is deployed, verify that the CA bundle is added to the aws-load-balancer-operator-controller-manager deployment by running the following command: USD oc -n aws-load-balancer-operator exec deploy/aws-load-balancer-operator-controller-manager -c manager -- bash -c "ls -l /etc/pki/tls/certs/albo-tls-ca-bundle.crt; printenv TRUSTED_CA_CONFIGMAP_NAME" Example output -rw-r--r--. 
1 root 1000690000 5875 Jan 11 12:25 /etc/pki/tls/certs/albo-tls-ca-bundle.crt trusted-ca Optional: Restart deployment of the AWS Load Balancer Operator every time the config map changes by running the following command: USD oc -n aws-load-balancer-operator rollout restart deployment/aws-load-balancer-operator-controller-manager 20.8.2. Additional resources Certificate injection using Operators | [
"oc -n aws-load-balancer-operator get sub aws-load-balancer-operator --template='{{.status.installplan.name}}{{\"\\n\"}}'",
"install-zlfbt",
"oc -n aws-load-balancer-operator get ip <install_plan_name> --template='{{.status.phase}}{{\"\\n\"}}'",
"Complete",
"oc get -n aws-load-balancer-operator deployment/aws-load-balancer-operator-controller-manager",
"NAME READY UP-TO-DATE AVAILABLE AGE aws-load-balancer-operator-controller-manager 1/1 1 1 23h",
"oc logs -n aws-load-balancer-operator deployment/aws-load-balancer-operator-controller-manager -c manager",
"apiVersion: v1 kind: Namespace metadata: name: aws-load-balancer-operator",
"oc apply -f namespace.yaml",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-load-balancer-operator namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - ec2:DescribeSubnets effect: Allow resource: \"*\" - action: - ec2:CreateTags - ec2:DeleteTags effect: Allow resource: arn:aws:ec2:*:*:subnet/* - action: - ec2:DescribeVpcs effect: Allow resource: \"*\" secretRef: name: aws-load-balancer-operator namespace: aws-load-balancer-operator serviceAccountNames: - aws-load-balancer-operator-controller-manager",
"oc apply -f credentialsrequest.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: aws-lb-operatorgroup namespace: aws-load-balancer-operator spec: upgradeStrategy: Default",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: channel: stable-v1 installPlanApproval: Automatic name: aws-load-balancer-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc apply -f subscription.yaml",
"oc -n aws-load-balancer-operator get subscription aws-load-balancer-operator --template='{{.status.installplan.name}}{{\"\\n\"}}'",
"oc -n aws-load-balancer-operator get ip <install_plan_name> --template='{{.status.phase}}{{\"\\n\"}}'",
"oc create namespace aws-load-balancer-operator",
"curl --create-dirs -o <path-to-credrequests-dir>/cr.yaml https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/main/hack/operator-credentials-request.yaml",
"ccoctl aws create-iam-roles --name <name> --region=<aws_region> --credentials-requests-dir=<path-to-credrequests-dir> --identity-provider-arn <oidc-arn>",
"ls manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"oc -n aws-load-balancer-operator get secret aws-load-balancer-operator --template='{{index .data \"credentials\"}}' | base64 -d",
"[default] sts_regional_endpoints = regional role_arn = arn:aws:iam::999999999999:role/aws-load-balancer-operator-aws-load-balancer-operator web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token",
"oc get credentialsrequest -n openshift-cloud-credential-operator aws-load-balancer-controller-<cr-name> -o yaml > <path-to-credrequests-dir>/cr.yaml 1",
"ccoctl aws create-iam-roles --name <name> --region=<aws_region> --credentials-requests-dir=<path-to-credrequests-dir> --identity-provider-arn <oidc-arn>",
"ls manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"oc -n aws-load-balancer-operator get pods NAME READY STATUS RESTARTS AGE aws-load-balancer-controller-cluster-9b766d6-gg82c 1/1 Running 0 137m aws-load-balancer-operator-controller-manager-b55ff68cc-85jzg 2/2 Running 0 3h26m",
"curl --create-dirs -o <path-to-credrequests-dir>/cr.yaml https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/main/hack/controller/controller-credentials-request.yaml",
"ccoctl aws create-iam-roles --name <name> --region=<aws_region> --credentials-requests-dir=<path-to-credrequests-dir> --identity-provider-arn <oidc-arn>",
"ls manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"oc -n aws-load-balancer-operator get secret aws-load-balancer-controller-manual-cluster --template='{{index .data \"credentials\"}}' | base64 -d",
"[default] sts_regional_endpoints = regional role_arn = arn:aws:iam::999999999999:role/aws-load-balancer-operator-aws-load-balancer-controller web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token",
"apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController 1 metadata: name: cluster 2 spec: credentials: name: <secret-name> 3",
"apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController 1 metadata: name: cluster 2 spec: subnetTagging: Auto 3 additionalResourceTags: 4 - key: example.org/security-scope value: staging ingressClass: alb 5 config: replicas: 2 6 enabledAddons: 7 - AWSWAFv2 8",
"oc create -f sample-aws-lb.yaml",
"apiVersion: apps/v1 kind: Deployment 1 metadata: name: <echoserver> 2 namespace: echoserver spec: selector: matchLabels: app: echoserver replicas: 3 3 template: metadata: labels: app: echoserver spec: containers: - image: openshift/origin-node command: - \"/bin/socat\" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \\\"printf \\\\\\\"HTTP/1.0 200 OK\\r\\n\\r\\n\\\\\\\"; sed -e \\\\\\\"/^\\r/q\\\\\\\"\\\"' imagePullPolicy: Always name: echoserver ports: - containerPort: 8080",
"apiVersion: v1 kind: Service 1 metadata: name: <echoserver> 2 namespace: echoserver spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: app: echoserver",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: <name> 1 namespace: echoserver annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: instance spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Exact backend: service: name: <echoserver> 2 port: number: 80",
"HOST=USD(oc get ingress -n echoserver echoserver --template='{{(index .status.loadBalancer.ingress 0).hostname}}')",
"curl USDHOST",
"apiVersion: elbv2.k8s.aws/v1beta1 1 kind: IngressClassParams metadata: name: single-lb-params 2 spec: group: name: single-lb 3",
"oc create -f sample-single-lb-params.yaml",
"apiVersion: networking.k8s.io/v1 1 kind: IngressClass metadata: name: single-lb 2 spec: controller: ingress.k8s.aws/alb 3 parameters: apiGroup: elbv2.k8s.aws 4 kind: IngressClassParams 5 name: single-lb-params 6",
"oc create -f sample-single-lb-class.yaml",
"apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController metadata: name: cluster spec: subnetTagging: Auto ingressClass: single-lb 1",
"oc create -f sample-single-lb.yaml",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-1 1 annotations: alb.ingress.kubernetes.io/scheme: internet-facing 2 alb.ingress.kubernetes.io/group.order: \"1\" 3 alb.ingress.kubernetes.io/target-type: instance 4 spec: ingressClassName: single-lb 5 rules: - host: example.com 6 http: paths: - path: /blog 7 pathType: Prefix backend: service: name: example-1 8 port: number: 80 9 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-2 annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/group.order: \"2\" alb.ingress.kubernetes.io/target-type: instance spec: ingressClassName: single-lb rules: - host: example.com http: paths: - path: /store pathType: Prefix backend: service: name: example-2 port: number: 80 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-3 annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/group.order: \"3\" alb.ingress.kubernetes.io/target-type: instance spec: ingressClassName: single-lb rules: - host: example.com http: paths: - path: / pathType: Prefix backend: service: name: example-3 port: number: 80",
"oc create -f sample-multiple-ingress.yaml",
"apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController metadata: name: cluster spec: subnetTagging: Auto ingressClass: tls-termination 1",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: <example> 1 annotations: alb.ingress.kubernetes.io/scheme: internet-facing 2 alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx 3 spec: ingressClassName: tls-termination 4 rules: - host: <example.com> 5 http: paths: - path: / pathType: Exact backend: service: name: <example-service> 6 port: number: 80",
"oc -n aws-load-balancer-operator create configmap trusted-ca",
"oc -n aws-load-balancer-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true",
"oc -n aws-load-balancer-operator patch subscription aws-load-balancer-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}],\"volumes\":[{\"name\":\"trusted-ca\",\"configMap\":{\"name\":\"trusted-ca\"}}],\"volumeMounts\":[{\"name\":\"trusted-ca\",\"mountPath\":\"/etc/pki/tls/certs/albo-tls-ca-bundle.crt\",\"subPath\":\"ca-bundle.crt\"}]}}}'",
"oc -n aws-load-balancer-operator exec deploy/aws-load-balancer-operator-controller-manager -c manager -- bash -c \"ls -l /etc/pki/tls/certs/albo-tls-ca-bundle.crt; printenv TRUSTED_CA_CONFIGMAP_NAME\"",
"-rw-r--r--. 1 root 1000690000 5875 Jan 11 12:25 /etc/pki/tls/certs/albo-tls-ca-bundle.crt trusted-ca",
"oc -n aws-load-balancer-operator rollout restart deployment/aws-load-balancer-operator-controller-manager"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/aws-load-balancer-operator-1 |
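The following is a minimal, hedged shell sketch that ties the TLS-termination steps of the preceding section together: it applies the two example resources, waits for the AWS Load Balancer Controller to publish an ALB hostname on the Ingress, and probes the HTTPS endpoint. The Ingress name, the example.com domain, and the ACM certificate ARN referenced by the files are placeholders taken from the examples above and must be replaced with your own values.

#!/usr/bin/env bash
# Hedged sketch: apply the TLS-termination resources from the section above and verify the result.
# INGRESS_NAME and the Host header value are placeholders taken from the examples above.
set -euo pipefail
INGRESS_NAME=example            # placeholder for the Ingress named <example> above

oc apply -f add-tls-termination-albc.yaml
oc apply -f add-tls-termination-ingress.yaml

# Wait until the AWS Load Balancer Controller publishes an ALB hostname on the Ingress status.
HOST=""
for _ in $(seq 1 30); do
  HOST=$(oc get ingress "${INGRESS_NAME}" \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' 2>/dev/null || true)
  [ -n "${HOST}" ] && break
  sleep 10
done
echo "ALB hostname: ${HOST:-not provisioned yet}"

# TLS terminates at the load balancer, so an HTTPS request to the ALB should succeed
# while the backend service still listens on plain HTTP port 80.
curl -kI "https://${HOST}/" -H "Host: example.com"

Because the certificate on the load balancer is issued for your own domain rather than for the ALB hostname, the probe above uses -k and a Host header; in production you would instead point a DNS record for the domain at the ALB hostname.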
4.7. Using OpenSSL | 4.7. Using OpenSSL OpenSSL is a library that provides cryptographic protocols to applications. The openssl command line utility enables using the cryptographic functions from the shell. It includes an interactive mode. The openssl command line utility has a number of pseudo-commands to provide information on the commands that the version of openssl installed on the system supports. The pseudo-commands list-standard-commands , list-message-digest-commands , and list-cipher-commands output a list of all standard commands, message digest commands, or cipher commands, respectively, that are available in the present openssl utility. The pseudo-commands list-cipher-algorithms and list-message-digest-algorithms list all cipher and message digest names. The pseudo-command list-public-key-algorithms lists all supported public key algorithms. For example, to list the supported public key algorithms, issue the following command: The pseudo-command no-command-name tests whether a command of the specified name is available. Intended for use in shell scripts. See man openssl (1) for more information. 4.7.1. Creating and Managing Encryption Keys With OpenSSL , public keys are derived from the corresponding private key. Therefore, the first step, once you have decided on the algorithm, is to generate the private key. In these examples the private key is referred to as privkey.pem . For example, to create an RSA private key using default parameters, issue the following command: The RSA algorithm supports the following options: rsa_keygen_bits:numbits - The number of bits in the generated key. If not specified, 1024 is used. rsa_keygen_pubexp:value - The RSA public exponent value. This can be a large decimal value, or a hexadecimal value if preceded by 0x . The default value is 65537 . For example, to create a 2048 bit RSA private key using 3 as the public exponent, issue the following command: To encrypt the private key as it is output using 128 bit AES and the passphrase " hello " , issue the following command: See man genpkey (1) for more information on generating private keys. 4.7.2. Generating Certificates To generate a certificate using OpenSSL , it is necessary to have a private key available. In these examples the private key is referred to as privkey.pem . If you have not yet generated a private key, see Section 4.7.1, "Creating and Managing Encryption Keys" . To have a certificate signed by a certificate authority ( CA ), it is necessary to generate a certificate request and then send it to a CA for signing. This request is referred to as a certificate signing request (CSR). See Section 4.7.2.1, "Creating a Certificate Signing Request" for more information. The alternative is to create a self-signed certificate. See Section 4.7.2.2, "Creating a Self-signed Certificate" for more information. 4.7.2.1. Creating a Certificate Signing Request To create a certificate signing request for submission to a CA, issue a command in the following format: This will create an X.509 certificate signing request called cert.csr encoded in the default privacy-enhanced electronic mail ( PEM ) format. The name PEM is derived from " Privacy Enhancement for Internet Electronic Mail " described in RFC 1424 . To generate the file in the alternative DER format, use the -outform DER command option. After issuing the above command, you will be prompted for information about you and the organization in order to create a distinguished name ( DN ) for the certificate.
You will need the following information: The two letter country code for your country The full name of your state or province City or Town The name of your organization The name of the unit within your organization Your name or the host name of the system Your email address The req (1) man page describes the PKCS# 10 certificate request and generating utility. Default settings used in the certificate creating process are contained within the /etc/pki/tls/openssl.cnf file. See man openssl.cnf(5) for more information. 4.7.2.2. Creating a Self-signed Certificate To generate a self-signed certificate, valid for 366 days, issue a command in the following format: 4.7.2.3. Creating a Certificate Using a Makefile The /etc/pki/tls/certs/ directory contains a Makefile which can be used to create certificates using the make command. To view the usage instructions, issue a command as follows: Alternatively, change to the directory and issue the make command as follows: See the make (1) man page for more information. 4.7.3. Verifying Certificates A certificate signed by a CA is referred to as a trusted certificate. A self-signed certificate is therefore an untrusted certificate. The verify utility uses the same SSL and S/MIME functions to verify a certificate as is used by OpenSSL in normal operation. If an error is found it is reported and then an attempt is made to continue testing in order to report any other errors. To verify multiple individual X.509 certificates in PEM format, issue a command in the following format: To verify a certificate chain the leaf certificate must be in cert.pem and the intermediate certificates which you do not trust must be directly concatenated in untrusted.pem . The trusted root CA certificate must be either among the default CA listed in /etc/pki/tls/certs/ca-bundle.crt or in a cacert.pem file. Then, to verify the chain, issue a command in the following format: See man verify (1) for more information. Important Verification of signatures using the MD5 hash algorithm is disabled in Red Hat Enterprise Linux 7 due to insufficient strength of this algorithm. Always use strong algorithms such as SHA256. 4.7.4. Encrypting and Decrypting a File For encrypting (and decrypting) files with OpenSSL , either the pkeyutl or enc built-in commands can be used. With pkeyutl , RSA keys are used to perform the encrypting and decrypting, whereas with enc , symmetric algorithms are used. Using RSA Keys To encrypt a file called plaintext , issue a command as follows: The default format for keys and certificates is PEM. If required, use the -keyform DER option to specify the DER key format. To specify a cryptographic engine, use the -engine option as follows: Where id is the ID of the cryptographic engine. To check the availability of an engine, issue the following command: To sign a data file called plaintext , issue a command as follows: To verify a signed data file and to extract the data, issue a command as follows: To verify the signature, for example using a DSA key, issue a command as follows: The pkeyutl (1) manual page describes the public key algorithm utility. Using Symmetric Algorithms To list available symmetric encryption algorithms, execute the enc command with an unsupported option, such as -l : To specify an algorithm, use its name as an option. 
For example, to use the aes-128-cbc algorithm, use the following syntax: openssl enc -aes-128-cbc To encrypt a file called plaintext using the aes-128-cbc algorithm, enter the following command: To decrypt the file obtained in the example, use the -d option as in the following example: Important The enc command does not properly support AEAD ciphers, and the ecb mode is not considered secure. For best results, do not use modes other than cbc , cfb , ofb , or ctr . 4.7.5. Generating Message Digests The dgst command produces the message digest of a supplied file or files in hexadecimal form. The command can also be used for digital signing and verification. The message digest command takes the following form: openssl dgst -algorithm -out filename -sign private-key Where algorithm is one of md5|md4|md2|sha1|sha|mdc2|ripemd160|dss1 . At time of writing, the SHA1 algorithm is preferred. If you need to sign or verify using DSA, then the dss1 option must be used together with a file containing random data specified by the -rand option. To produce a message digest in the default Hex format using the sha1 algorithm, issue the following command: To digitally sign the digest, using a private key privkey.pem , issue the following command: See man dgst (1) for more information. 4.7.6. Generating Password Hashes The passwd command computes the hash of a password. To compute the hash of a password on the command line, issue a command as follows: The -crypt algorithm is used by default. To compute the hash of a password from standard input, using the MD5-based BSD algorithm 1 , issue a command as follows: The -apr1 option specifies the Apache variant of the BSD algorithm. Note Use the openssl passwd -1 password command only with FIPS mode disabled. Otherwise, the command does not work. To compute the hash of a password stored in a file, and using a salt xx , issue a command as follows: The password is sent to standard output and there is no -out option to specify an output file. The -table option will generate a table of password hashes with their corresponding clear text password. See man sslpasswd (1) for more information and examples. 4.7.7. Generating Random Data To generate a file containing random data, using a seed file, issue the following command: Multiple files for seeding the random data process can be specified using the colon, : , as a list separator. See man rand (1) for more information. 4.7.8. Benchmarking Your System To test the computational speed of a system for a given algorithm, issue a command in the following format: where algorithm is one of the supported algorithms you intend to use. To list the available algorithms, type openssl speed and then press tab. 4.7.9. Configuring OpenSSL OpenSSL has a configuration file /etc/pki/tls/openssl.cnf , referred to as the master configuration file, which is read by the OpenSSL library. It is also possible to have individual configuration files for each application. The configuration file contains a number of sections with section names as follows: [ section_name ] . Note the first part of the file, up until the first [ section_name ] , is referred to as the default section. When OpenSSL is searching for names in the configuration file, the named sections are searched first. All OpenSSL commands use the master OpenSSL configuration file unless an option is used in the command to specify an alternative configuration file. The configuration file is explained in detail in the config(5) man page. Two RFCs explain the contents of a certificate file.
They are: Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile , and Updates to the Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile . | [
"~]USD openssl list-public-key-algorithms",
"~]USD openssl genpkey -algorithm RSA -out privkey.pem",
"~]USD openssl genpkey -algorithm RSA -out privkey.pem -pkeyopt rsa_keygen_bits:2048 \\ -pkeyopt rsa_keygen_pubexp:3",
"~]USD openssl genpkey -algorithm RSA -out privkey.pem -aes-128-cbc -pass pass:hello",
"~]USD openssl req -new -key privkey.pem -out cert.csr",
"~]USD openssl req -new -x509 -key privkey.pem -out selfcert.pem -days 366",
"~]USD make -f /etc/pki/tls/certs/Makefile",
"~]USD cd /etc/pki/tls/certs/ ~]USD make",
"~]USD openssl verify cert1.pem cert2.pem",
"~]USD openssl verify -untrusted untrusted.pem -CAfile cacert.pem cert.pem",
"~]USD openssl pkeyutl -in plaintext -out cyphertext -inkey privkey.pem",
"~]USD openssl pkeyutl -in plaintext -out cyphertext -inkey privkey.pem -engine id",
"~]USD openssl engine -t",
"~]USD openssl pkeyutl -sign -in plaintext -out sigtext -inkey privkey.pem",
"~]USD openssl pkeyutl -verifyrecover -in sig -inkey key.pem",
"~]USD openssl pkeyutl -verify -in file -sigfile sig -inkey key.pem",
"~]USD openssl enc -l",
"~]USD openssl enc -aes-128-cbc -in plaintext -out plaintext.aes-128-cbc",
"~]USD openssl enc -aes-128-cbc -d -in plaintext.aes-128-cbc -out plaintext",
"~]USD openssl dgst sha1 -out digest-file",
"~]USD openssl dgst sha1 -out digest-file -sign privkey.pem",
"~]USD openssl passwd password",
"~]USD openssl passwd - 1 password",
"~]USD openssl passwd -salt xx -in password-file",
"~]USD openssl rand -out rand-file -rand seed-file",
"~]USD openssl speed algorithm"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Using_OpenSSL |
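As a consolidated illustration of the OpenSSL commands in the preceding section, the following is a minimal, hedged sketch that generates a key, creates a self-signed certificate, performs a symmetric encrypt/decrypt round trip, and produces a digest. The file names and the -subj subject string are arbitrary examples introduced here only to make the run non-interactive; they are not part of the original section.

#!/usr/bin/env bash
# Minimal sketch combining the openssl commands documented in the section above.
set -euo pipefail

# 2048-bit RSA private key, then a self-signed certificate valid for 366 days.
openssl genpkey -algorithm RSA -out privkey.pem -pkeyopt rsa_keygen_bits:2048
openssl req -new -x509 -key privkey.pem -out selfcert.pem -days 366 \
  -subj "/C=US/ST=North Carolina/L=Raleigh/O=Example Org/CN=host.example.com"

# Verify the self-signed certificate against itself acting as its own CA.
openssl verify -CAfile selfcert.pem selfcert.pem

# Symmetric encrypt/decrypt round trip with aes-128-cbc and the passphrase "hello".
echo "sample payload" > plaintext
openssl enc -aes-128-cbc -pass pass:hello -in plaintext -out plaintext.aes-128-cbc
openssl enc -aes-128-cbc -pass pass:hello -d -in plaintext.aes-128-cbc -out decrypted
diff plaintext decrypted && echo "round trip OK"

# Message digest of the certificate, using SHA-256 rather than the older SHA-1.
openssl dgst -sha256 -out digest-file selfcert.pem
cat digest-file

The symmetric example relies on the default password-based key derivation of the installed OpenSSL version; for new deployments, prefer the strongest key-derivation options your OpenSSL release supports.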
Chapter 334. Stub Component | Chapter 334. Stub Component Available as of Camel version 2.10 The stub: component provides a simple way to stub out any physical endpoints while in development or testing, allowing you for example to run a route without needing to actually connect to a specific SMTP or HTTP endpoint. Just add stub: in front of any endpoint URI to stub out the endpoint. Internally the Stub component creates VM endpoints. The main difference between Stub and VM is that VM will validate the URI and parameters you give it, so putting vm: in front of a typical URI with query arguments will usually fail. Stub won't though, as it basically ignores all query parameters to let you quickly stub out one or more endpoints in your route temporarily. 334.1. URI format Where someUri can be any URI with any query parameters. 334.2. Options The Stub component supports 6 options, which are listed below. Name Description Default Type queueSize (advanced) Sets the default maximum capacity of the SEDA queue (i.e., the number of messages it can hold). 1000 int concurrentConsumers (consumer) Sets the default number of concurrent threads processing exchanges. 1 int defaultQueueFactory (advanced) Sets the default queue factory. BlockingQueueFactory defaultBlockWhenFull (producer) Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. false boolean defaultOfferTimeout (producer) Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, a configured timeout can be added to the block case, utilizing the .offer(timeout) method of the underlying Java queue. long resolvePropertyPlaceholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Stub endpoint is configured using URI syntax: with the following path and query parameters: 334.2.1. Path Parameters (1 parameter): Name Description Default Type name Required Name of queue String 334.2.2. Query Parameters (17 parameters): Name Description Default Type size (common) The maximum capacity of the SEDA queue (i.e., the number of messages it can hold). Will by default use the defaultSize set on the SEDA component. 1000 int bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means that any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN/ERROR level and ignored. false boolean concurrentConsumers (consumer) Number of concurrent threads processing exchanges. 1 int exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice that if the option bridgeErrorHandler is enabled, then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN/ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the default exchange pattern when creating an exchange.
ExchangePattern limitConcurrentConsumers (consumer) Whether to limit the number of concurrentConsumers to the maximum of 500. By default, an exception will be thrown if an endpoint is configured with a greater number. You can disable that check by turning this option off. true boolean multipleConsumers (consumer) Specifies whether multiple consumers are allowed. If enabled, you can use SEDA for Publish-Subscribe messaging. That is, you can send a message to the SEDA queue and have each consumer receive a copy of the message. When enabled, this option should be specified on every consumer endpoint. false boolean pollTimeout (consumer) The timeout used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. 1000 int purgeWhenStopping (consumer) Whether to purge the task queue when stopping the consumer/route. This allows to stop faster, as any pending messages on the queue is discarded. false boolean blockWhenFull (producer) Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. false boolean discardIfNoConsumers (producer) Whether the producer should discard the message (do not add the message to the queue), when sending to a queue with no active consumers. Only one of the options discardIfNoConsumers and failIfNoConsumers can be enabled at the same time. false boolean failIfNoConsumers (producer) Whether the producer should fail by throwing an exception, when sending to a queue with no active consumers. Only one of the options discardIfNoConsumers and failIfNoConsumers can be enabled at the same time. false boolean offerTimeout (producer) offerTimeout (in milliseconds) can be added to the block case when queue is full. You can disable timeout by using 0 or a negative value. long timeout (producer) Timeout (in milliseconds) before a SEDA producer will stop waiting for an asynchronous task to complete. You can disable timeout by using 0 or a negative value. 30000 long waitForTaskToComplete (producer) Option to specify whether the caller should wait for the async task to complete or not before continuing. The following three options are supported: Always, Never or IfReplyExpected. The first two values are self-explanatory. The last value, IfReplyExpected, will only wait if the message is Request Reply based. The default option is IfReplyExpected. IfReplyExpected WaitForTaskToComplete queue (advanced) Define the queue instance which will be used by the endpoint. This option is only for rare use-cases where you want to use a custom queue instance. BlockingQueue synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 334.3. Examples Here are a few samples of stubbing endpoint uris | [
"stub:someUri",
"stub:name",
"stub:smtp://somehost.foo.com?user=whatnot&something=else stub:http://somehost.bar.com/something"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/stub-component |
22.9. Other Management Tools and Operations | 22.9. Other Management Tools and Operations Managing Red Hat JBoss Data Grid instances requires exposing significant amounts of relevant statistical information. This information allows administrators to get a clear view of each JBoss Data Grid node's state. A single installation can comprise tens or hundreds of JBoss Data Grid nodes and it is important to provide this information in a clear and concise manner. JBoss Operations Network is one example of a tool that provides runtime visibility. Other tools, such as JConsole, can be used where JMX is enabled. 22.9.1. Accessing Data via URLs Caches that have been configured with a REST interface have access to Red Hat JBoss Data Grid using RESTful HTTP access. The RESTful service only requires an HTTP client library, eliminating the need for tightly coupled client libraries and bindings. For more information about how to retrieve data using the REST interface, see Section 11.6, "Using the REST Interface" . HTTP put() and post() methods place data in the cache, and the URL used determines the cache name and key(s) used. The data is the value placed into the cache, and is placed in the body of the request. A Content-Type header must be set for these methods. GET and HEAD methods are used for data retrieval while other headers control cache settings and behavior. Note It is not possible to have conflicting server modules interact with the data grid. Caches must be configured with a compatible interface in order to have access to JBoss Data Grid. 22.9.2. Limitations of Map Methods Specific Map methods, such as size() , values() , keySet() and entrySet() , can be used with certain limitations with Red Hat JBoss Data Grid as they are unreliable. These methods do not acquire locks (global or local) and concurrent modification, additions and removals are excluded from consideration in these calls. The listed methods have a significant impact on performance. As a result, it is recommended that these methods are used for informational and debugging purposes only. Performance Concerns From Red Hat JBoss Data Grid 6.3 onwards, the map methods size() , values() , keySet() , and entrySet() include entries in the cache loader by default whereas previously these methods only included the local data container. The underlying cache loader directly affects the performance of these commands. As an example, when using a database, these methods run a complete scan of the table where data is stored which can result in slower processing. Use Cache.getAdvancedCache().withFlags(Flag.SKIP_CACHE_LOAD).values() to maintain the old behavior of not loading from the cache loader, which avoids the slower processing. Changes to the size() Method (Embedded Caches) In JBoss Data Grid 6.3, the Cache#size() method returned only the number of entries on the local node, ignoring other nodes for clustered caches and including any expired entries. While the default behavior was not changed in JBoss Data Grid 6.4 or later, accurate results can be enabled for bulk operations, including size() , by setting the infinispan.accurate.bulk.ops system property to true. In this mode of operation, the result returned by the size() method is affected by the flags org.infinispan.context.Flag#CACHE_MODE_LOCAL , to force it to return the number of entries present on the local node, and org.infinispan.context.Flag#SKIP_CACHE_LOAD , to ignore any passivated entries.
Changes to the size() Method (Remote Caches) In JBoss Data Grid 6.3, the Hot Rod size() method obtained the size of a cache by invoking the STATS operation and using the returned numberOfEntries statistic. This statistic is not an accurate measurement of the number of entries in a cache because it does not take into account expired and passivated entries and it is only local to the node that responded to the operation. As an additional result, when security was enabled, the client would need the ADMIN permission instead of the more appropriate BULK_READ . In JBoss Data Grid 6.4 and later the Hot Rod protocol has been enhanced with a dedicated SIZE operation, and the clients have been updated to use this operation for the size() method. The JBoss Data Grid server will need to be started with the infinispan.accurate.bulk.ops system property set to true so that size can be computed accurately. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-other_management_tools_and_operations
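The RESTful access pattern described in the Red Hat JBoss Data Grid section above can be exercised with any HTTP client. The following is a hedged curl sketch; the host, port, REST context path, cache name, and key are placeholders and depend on how your REST endpoint is deployed and configured.

# Hedged sketch of the REST data access described in the section above.
# Host, port, cache name ("default"), and key ("hello") are placeholders.
DATAGRID=http://datagrid.example.com:8080

# put(): store a value; the URL encodes the cache name and key, the request body
# is the value, and a Content-Type header is required.
curl -X PUT -H "Content-Type: text/plain" --data "world" "$DATAGRID/rest/default/hello"

# GET retrieves the value, HEAD retrieves only the headers.
curl "$DATAGRID/rest/default/hello"
curl -I "$DATAGRID/rest/default/hello"

# DELETE removes the entry.
curl -X DELETE "$DATAGRID/rest/default/hello"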
Chapter 5. Configuring Satellite for performance | Chapter 5. Configuring Satellite for performance Satellite comes with a number of components that communicate with each other. You can tune these components independently of each other to achieve the maximum possible performance for your scenario. 5.1. Applying configurations In the following sections, we suggest various tunables and how to apply them. Always test these changes in a non-production environment first, with a valid backup and a proper outage window, as in most cases a Satellite restart is required. It is also a good practice to set up monitoring before applying any change, as it will allow you to evaluate the effect of the change. Our testing environment might differ from what you will see, although we try hard to mimic a real-world environment. Changing systemd service files If you have changed some systemd service file, you need to notify the systemd daemon to reload the configuration: Restart Satellite services: Changing configuration files If you have changed a configuration file such as /etc/foreman-installer/custom-hiera.yaml , rerun the installer to apply your changes: Running the installer with additional options If you need to rerun the installer with some new options added: Checking basic sanity of the setup Optional: After any change, run this quick Satellite health-check: 5.2. Puma tunings Puma is a Ruby application server which is used for serving the Foreman-related requests to the clients. For any Satellite configuration that is supposed to handle a large number of clients or frequent operations, it is important for Puma to be tuned appropriately. 5.2.1. Puma threads The number of Puma threads (per Puma worker) is configured by using two values: threads_min and threads_max . The value of threads_min determines how many threads each worker spawns at a worker start. Then, as concurrent requests come in and more threads are needed, the worker spawns additional threads up to the threads_max limit. We recommend setting threads_min to the same value as threads_max , as having fewer Puma threads leads to higher memory usage on your Satellite Server. For example, we have compared these two setups by using a concurrent registrations test: Satellite VM with 8 CPUs, 40 GiB RAM Satellite VM with 8 CPUs, 40 GiB RAM --foreman-foreman-service-puma-threads-min=0 --foreman-foreman-service-puma-threads-min=16 --foreman-foreman-service-puma-threads-max=16 --foreman-foreman-service-puma-threads-max=16 --foreman-foreman-service-puma-workers=2 --foreman-foreman-service-puma-workers=2 Setting the minimum Puma threads to 16 results in about 12% less memory usage as compared to threads_min=0 . 5.2.2. Puma workers and threads auto-tuning If you do not provide any Puma workers and thread values with satellite-installer or they are not present in your Satellite configuration, the satellite-installer configures a balanced number of workers. It follows this formula: This should be fine for most cases, but with some usage patterns, tuning is needed to limit the amount of resources dedicated to Puma (so other Satellite components can use them) or for other reasons. Each Puma worker consumes around 1 GiB of RAM. View your current Satellite Server settings View the currently active Puma workers 5.2.3. Manually tuning Puma workers and threads count If you decide not to rely on Section 5.2.2, "Puma workers and threads auto-tuning" , you can apply custom numbers for these tunables.
In the example below, we are using 2 workers with 5 minimum and 5 maximum threads: Apply your changes to Satellite Server. For more information, see Section 5.1, "Applying configurations" . 5.2.4. Puma workers and threads recommendations In order to recommend thread and worker configurations for the different tuning profiles, we conducted Puma tuning testing on Satellite with different tuning profiles. The main test used in this testing was concurrent registration with the following combinations along with different numbers of workers and threads. Our recommendation is based purely on concurrent registration performance, so it might not reflect your exact use case. For example, if your setup is very content-oriented with lots of publishes and promotes, you might want to limit resources consumed by Puma in favor of Pulp and PostgreSQL. Name Number of hosts RAM Cores Recommended Puma Threads for both min & max Recommended Puma Workers default 0 - 5000 20 GiB 4 16 4 - 6 medium 5000 - 10000 32 GiB 8 16 8 - 12 large 10000 - 20000 64 GiB 16 16 12 - 18 extra-large 20000 - 60000 128 GiB 32 16 16 - 24 extra-extra-large 60000+ 256 GiB+ 48+ 16 20 - 26 Tuning the number of workers is the more important aspect here, and in some cases we have seen up to a 52% performance increase. Although the installer uses 5 min/max threads by default, we recommend 16 threads with all the tuning profiles in the table above. That is because we have seen up to a 23% performance increase with 16 threads (14% for 8 and 10% for 32) when compared to a setup with 4 threads. To figure out these recommendations, we used the concurrent registrations test case, which is a very specific use case. It can be different on your Satellite, which might have a more balanced use case (not only registrations). Keeping the default 5 min/max threads is a good choice as well. These are some of our measurements that led us to these recommendations: 4 workers, 4 threads 4 workers, 8 threads 4 workers, 16 threads 4 workers, 32 threads Improvement 0% 14% 23% 10% Use 4 - 6 workers on a default setup (4 CPUs) - we have seen about 25% higher performance with 5 workers when compared to 2 workers, but 8% lower performance with 8 workers when compared to 2 workers - see table below: 2 workers, 16 threads 4 workers, 16 threads 6 workers, 16 threads 8 workers, 16 threads Improvement 0% 26% 22% -8% Use 8 - 12 workers on a medium setup (8 CPUs) - see table below: 2 workers, 16 threads 4 workers, 16 threads 8 workers, 16 threads 12 workers, 16 threads 16 workers, 16 threads Improvement 0% 51% 52% 52% 42% Use 16 - 24 workers on a 32 CPU setup (this was tested on a 90 GiB RAM machine and memory turned out to be a factor here as the system started swapping - a proper extra-large should have 128 GiB); a higher number of workers was problematic for the higher registration concurrency levels we tested, so we cannot recommend it. 4 workers, 16 threads 8 workers, 16 threads 16 workers, 16 threads 24 workers, 16 threads 32 workers, 16 threads 48 workers, 16 threads Improvement 0% 37% 44% 52% too many failures too many failures 5.2.5. Configuring Puma workers If you have enough CPUs, adding more workers adds more performance. For example, we have compared Satellite setups with 8 and 16 CPUs: Table 5.1.
satellite-installer options used to test the effect of worker count Satellite VM with 8 CPUs, 40 GiB RAM Satellite VM with 16 CPUs, 40 GiB RAM --foreman-foreman-service-puma-threads-min=16 --foreman-foreman-service-puma-threads-min=16 --foreman-foreman-service-puma-threads-max=16 --foreman-foreman-service-puma-threads-max=16 --foreman-foreman-service-puma-workers={2|4|8|16} --foreman-foreman-service-puma-workers={2|4|8|16} In the 8 CPU setup, changing the number of workers from 2 to 16 improved concurrent registration time by 36%. In the 16 CPU setup, the same change caused a 55% improvement. Adding more workers can also help with the total registration concurrency Satellite can handle. In our measurements, a setup with 2 workers was able to handle up to 480 concurrent registrations, but adding more workers improved the situation. 5.2.6. Configuring Puma threads More threads allow for a lower time to register hosts in parallel. For example, we have compared these two setups: Satellite VM with 8 CPUs, 40 GiB RAM Satellite VM with 8 CPUs, 40 GiB RAM --foreman-foreman-service-puma-threads-min=16 --foreman-foreman-service-puma-threads-min=8 --foreman-foreman-service-puma-threads-max=16 --foreman-foreman-service-puma-threads-max=8 --foreman-foreman-service-puma-workers=2 --foreman-foreman-service-puma-workers=4 Using more workers and the same total number of threads results in about an 11% speedup in a highly concurrent registration scenario. Moreover, adding more workers did not consume more CPU and RAM but yields more performance. 5.2.7. Configuring Puma DB pool The effective value of USDdb_pool is automatically set to equal USDforeman::foreman_service_puma_threads_max . It is the maximum of USDforeman::db_pool and USDforeman::foreman_service_puma_threads_max but both have a default value of 5, so any increase to the max threads above 5 automatically increases the database connection pool by the same amount. If you encounter ActiveRecord::ConnectionTimeoutError: could not obtain a connection from the pool within 5.000 seconds (waited 5.006 seconds); all pooled connections were in use error in /var/log/foreman/production.log , you might want to increase this value. View the current db_pool setting 5.2.8. Manually tuning db_pool If you decide not to rely on the automatically configured value, you can apply a custom number like this: Apply your changes to Satellite Server. For more information, see Section 5.1, "Applying configurations" . 5.3. Apache HTTPD performance tuning Apache httpd forms a core part of Satellite and acts as a web server for handling the requests that are being made through the Satellite web UI or exposed APIs. To increase the concurrency of the operations, httpd forms the first point where tuning can help to boost the performance of your Satellite. 5.3.1. Configuring the open files limit for Apache HTTPD With the tuning in place, Apache httpd can easily open a lot of file descriptors on the server, which may exceed the default limit on most Linux systems. To avoid issues that may arise as a result of exceeding the maximum open files limit on the system, create the following file and directory and set the contents of the file as specified in the example below: Procedure Set the maximum open files limit in /etc/systemd/system/httpd.service.d/limits.conf : Apply your changes to Satellite Server. For more information, see Section 5.1, "Applying configurations" . 5.3.2. Tuning Apache httpd child processes By default, httpd uses the event request handling mechanism.
When the number of requests to httpd exceeds the maximum number of child processes that can be launched to handle the incoming connections, httpd raises an HTTP 503 Service Unavailable error. When httpd runs out of processes to handle the incoming connections, this can also result in multiple component failures on the Satellite services side, because some components depend on the availability of httpd processes. You can adapt the configuration of the httpd event module to handle more concurrent requests based on your expected peak load. Warning Configuring these numbers in custom-hiera.yaml locks them. If you change these numbers using satellite-installer --tuning= My_Tuning_Option , your custom-hiera.yaml will overwrite this setting. Set your numbers only if you have a specific need for it. Procedure Modify the number of concurrent requests in /etc/foreman-installer/custom-hiera.yaml by changing or adding the following lines: The example is identical to running satellite-installer --tuning=medium or higher on Satellite Server. Apply your changes to Satellite Server. For more information, see Section 5.1, "Applying configurations" . 5.4. Dynflow tuning Dynflow is the workflow management system and task orchestrator. It is a Satellite plugin and is used to execute the different tasks of Satellite in an out-of-order execution manner. When there are a lot of clients checking in on Satellite and running a number of tasks, Dynflow can benefit from additional tuning that specifies how many executors it can launch. For more information about the tunings related to Dynflow, see https://satellite.example.com/foreman_tasks/sidekiq . Increase the number of Sidekiq workers Satellite contains a Dynflow service called dynflow-sidekiq that performs tasks scheduled by Dynflow. Sidekiq workers can be grouped into various queues to ensure that lots of tasks of one type will not block execution of tasks of another type. Red Hat recommends increasing the number of Sidekiq workers to scale the Foreman tasking system for bulk concurrent tasks, for example for multiple content view publications and promotions, content synchronizations, and synchronizations to Capsule Servers. There are two options available: You can increase the number of threads used by a worker (worker's concurrency). This has limited impact for values larger than five due to the Ruby implementation of thread concurrency. You can increase the number of workers, which is recommended. Procedure Increase the number of workers from one worker to three while keeping five threads/concurrency for each: Optional: Check if there are three worker services: For more information, see How to add sidekiq workers in Satellite6? . 5.5. Pull-based REX transport tuning Satellite has a pull-based transport mode for remote execution. This transport mode uses MQTT as its messaging protocol and includes an MQTT client running on each host. For more information, see Transport Modes for Remote Execution in Managing hosts . 5.5.1. Increasing host limit for pull-based REX transport You can tune the mosquitto MQTT server and increase the number of hosts connected to it. Procedure Enable pull-based remote execution on your Satellite Server or Capsule Server: Note that your Satellite Server or Capsule Server can only use one transport mode, either SSH or MQTT. Create a config file to increase the default number of hosts accepted by the MQTT service: This example sets the limit to allow the mosquitto service to handle 5000 hosts.
Run the following commands to apply your changes: 5.5.2. Decreasing performance impact of the pull-based REX transport When Satellite Server is configured with the pull-based transport mode for remote execution jobs using the Script provider, Capsule Server sends notifications about new jobs to clients through MQTT. This notification does not include the actual workload that the client is supposed to execute. After a client receives a notification about a new remote execution job, it queries Capsule Server for its actual workload. During the job, the client periodically sends outputs of the job to Capsule Server, further increasing the number of requests to Capsule Server. These requests to Capsule Server, together with the high concurrency allowed by the MQTT protocol, can cause exhaustion of available connections on Capsule Server. Some requests might fail, making some child tasks of remote execution jobs unresponsive. This also depends on the actual job workload, as some jobs cause additional load on Satellite Server, making it compete for resources if clients are registered to Satellite Server. To avoid this, configure your Satellite Server and Capsule Server with the following parameters: MQTT Time To Live - Time interval in seconds given to the host to pick up the job before considering the job undelivered MQTT Resend Interval - Time interval in seconds at which the notification should be re-sent to the host until the job is picked up or cancelled MQTT Rate Limit - Number of jobs that are allowed to run at the same time. You can limit the concurrency of remote execution by tuning the rate limit, which means you are going to put more load on Satellite. Procedure Tune the MQTT parameters on your Satellite Server: Capsule Server logs are in /var/log/foreman-proxy/proxy.log . Capsule Server uses the Webrick HTTP server (no httpd or Puma involved), so there is no simple way to increase its capacity. Note Depending on the workload, number of hosts, available resources, and applied tuning, you might hit the Bug 2244811 , which causes Capsule to consume too much memory and eventually be killed, making the rest of the job fail. At the moment there is no universally applicable workaround. 5.6. PostgreSQL tuning PostgreSQL is the primary SQL-based database that is used by Satellite for the storage of persistent context across a wide variety of tasks that Satellite does. The database sees extensive usage and is constantly working to provide Satellite with the data which it needs for its smooth functioning. This makes PostgreSQL a heavily used process which, if tuned, can have a number of benefits for the overall operational response of Satellite. The PostgreSQL authors recommend disabling Transparent Hugepage on servers running PostgreSQL. For more information, see Section 4.3, "Disable Transparent Hugepage" . You can apply a set of tunings to PostgreSQL to improve its response times, which will modify the postgresql.conf file. Procedure Append /etc/foreman-installer/custom-hiera.yaml to tune PostgreSQL: postgresql::server::config_entries: max_connections: 1000 shared_buffers: 2GB work_mem: 8MB autovacuum_vacuum_cost_limit: 2000 You can use this to effectively tune down your Satellite instance irrespective of a tuning profile. Apply your changes to Satellite Server. For more information, see Section 5.1, "Applying configurations" .
In the above tuning configuration, there is a certain set of keys that we have altered: max_connections : The key defines the maximum number of connections that can be accepted by the PostgreSQL processes that are running. shared_buffers : The shared buffers define the memory used by all the active connections inside PostgreSQL to store the data for the different database operations. An optimal value for this will vary between 2 GiB and a maximum of 25% of your total system memory depending upon the frequency of the operations being conducted on Satellite. work_mem : The work_mem is the memory that is allocated on a per-process basis for PostgreSQL and is used to store the intermediate results of the operations that are being performed by the process. Setting this value to 8 MB should be more than enough for most of the intensive operations on Satellite. autovacuum_vacuum_cost_limit : The key defines the cost limit value for the vacuuming operation inside the autovacuum process to clean up the dead tuples inside the database relations. The cost limit defines the number of tuples that can be processed in a single run by the process. Red Hat recommends setting the value to 2000 as it is for the medium , large , extra-large , and extra-extra-large profiles, based on the general load that Satellite pushes on the PostgreSQL server process. For more information, see BZ1867311: Upgrade fails when checkpoint_segments postgres parameter configured . 5.6.1. Benchmarking raw DB performance To get a list of the top table sizes in disk space for Candlepin, Foreman, and Pulp, check the postgres-size-report script in the satellite-support git repository. The pgbench utility might be used to measure PostgreSQL performance on your system (note that you may need to resize the PostgreSQL data directory /var/lib/pgsql to 100 GiB, or whatever the benchmark needs to run). Use dnf install postgresql-contrib to install it. For more information, see github.com/RedHatSatellite/satellite-support . The choice of filesystem for the PostgreSQL data directory might matter as well. Warning Never do any testing on a production system or without a valid backup. Before you start testing, see how big the database files are. Testing with a really small database would not produce any meaningful results. For example, if the DB is only 20 GiB and the buffer pool is 32 GiB, it won't show problems with a large number of connections because the data will be completely buffered. 5.7. Redis tuning Redis is an in-memory data store. It is used by multiple services in Satellite. The Dynflow and Pulp tasking systems use it to track their tasks. Given the way Satellite uses Redis, its memory consumption should be stable. The Redis authors recommend disabling Transparent Hugepage on servers running Redis. For more information, see Section 4.3, "Disable Transparent Hugepage" . 5.8. Capsule configuration tuning Capsules are meant to offload part of the Satellite load and provide access to different networks related to distributing content to clients, but they can also be used to execute remote execution jobs. What they cannot help with is anything which extensively uses the Satellite API, such as host registration or package profile updates. 5.8.1. Capsule performance tests We have measured multiple test cases on multiple Capsule configurations: Capsule HW configuration CPUs RAM minimal 4 12 GiB large 8 24 GiB extra large 16 46 GiB Content delivery use case In a download test where we concurrently downloaded a 40MB repo of 2000 packages on 100, 200, ..
1000 hosts, we saw roughly 50% improvement in average download duration every time we doubled Capsule Server resources. For more precise numbers, see the table below. Concurrent downloading hosts Minimal (4 CPU and 12 GiB RAM) Large (8 CPU and 24 GiB RAM) Large (8 CPU and 24 GiB RAM) Extra Large (16 CPU and 46 GiB RAM) Minimal (4 CPU and 12 GiB RAM) Extra Large (16 CPU and 46 GiB RAM) Average Improvement ~ 50% (e.g. for 700 concurrent downloads on average 9 seconds vs. 4.4 seconds per package) ~ 40% (e.g. for 700 concurrent downloads on average 4.4 seconds vs. 2.5 seconds per package) ~ 70% (e.g. for 700 concurrent downloads on average 9 seconds vs. 2.5 seconds per package) When we compared download performance from Satellite Server vs. from Capsule Server, we saw only about a 5% speedup, but that is expected as Capsule Server's main benefit is in getting content closer to geographically distributed clients (or clients in different networks) and in handling part of the load Satellite Server would have to handle itself. In some smaller hardware configurations (8 CPUs and 24 GiB), Satellite Server was not able to handle downloads from more than 500 concurrent clients, while a Capsule Server with the same hardware configuration was able to service more than 1000 and possibly even more. Concurrent registrations use case For concurrent registrations, a bottleneck is usually CPU speed, but all configurations were able to handle even high concurrency without swapping. Hardware resources used for Capsule have only minimal impact on registration performance. For example, a Capsule Server with 16 CPUs and 46 GiB RAM has at most a 9% registration speed improvement when compared to a Capsule Server with 4 CPUs and 12 GiB RAM. During periods of very high concurrency, you might experience timeouts in the Capsule Server to Satellite Server communication. You can alleviate this by increasing the default timeout by using the following tunable in /etc/foreman-installer/custom-hiera.yaml : Remote execution use case We have tested executing Remote Execution jobs via both the SSH and Ansible backends on 500, 2000 and 4000 hosts. All configurations were able to handle all of the tests without errors, except for the smallest configuration (4 CPUs and 12 GiB memory) which failed to finish on all 4000 hosts. Content sync use case In a sync test where we synced Red Hat Enterprise Linux 6, 7, 8 BaseOS and 8 AppStream we did not see significant differences among Capsule configurations. This will be different for syncing a higher number of content views in parallel. | [
"systemctl daemon-reload",
"satellite-maintain service restart",
"satellite-installer",
"satellite-installer new options",
"satellite-maintain health check",
"min(CPU_COUNT * 1.5, RAM_IN_GB - 1.5)",
"cat /etc/systemd/system/foreman.service.d/installer.conf",
"systemctl status foreman",
"satellite-installer --foreman-foreman-service-puma-workers=2 --foreman-foreman-service-puma-threads-min=5 --foreman-foreman-service-puma-threads-max=5",
"grep pool /etc/foreman/database.yml pool: 5",
"satellite-installer --foreman-db-pool 10",
"[Service] LimitNOFILE=640000",
"apache::mod::event::serverlimit: 64 apache::mod::event::maxrequestworkers: 1024 apache::mod::event::maxrequestsperchild: 4000",
"satellite-installer --foreman-dynflow-worker-instances 3 # optionally, add --foreman-dynflow-worker-concurrency 5",
"systemctl -a | grep dynflow-sidekiq@worker-[0-9] [email protected] loaded active running Foreman jobs daemon - worker-1 on sidekiq [email protected] loaded active running Foreman jobs daemon - worker-2 on sidekiq [email protected] loaded active running Foreman jobs daemon - worker-3 on sidekiq",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mode pull-mqtt",
"cat >/etc/systemd/system/mosquitto.service.d/limits.conf << EOF [Service] LimitNOFILE=5000 EOF",
"systemctl daemon-reload systemctl restart mosquitto.service",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit My_MQTT_Rate_Limit --foreman-proxy-plugin-remote-execution-script-mqtt-resend-interval My_MQTT_Resend_Interval --foreman-proxy-plugin-remote-execution-script-mqtt-ttl My_MQTT_Time_To_Live",
"postgresql::server::config_entries: max_connections: 1000 shared_buffers: 2GB work_mem: 8MB autovacuum_vacuum_cost_limit: 2000",
"apache::mod::proxy::proxy_timeout: 600"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/tuning_performance_of_red_hat_satellite/Configuring_Project_for_Performance_performance-tuning |
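As a worked illustration of the Puma tuning and verification flow described in the Satellite chapter above, the following is a hedged sketch for a medium-sized (8 CPU, 32 GiB RAM) Satellite Server. The worker and thread counts are taken from the recommendation table in that chapter and should be sized for your own hardware and workload; they are illustrative, not prescriptive.

#!/usr/bin/env bash
# Hedged sketch: apply a "medium" profile Puma tuning, then run the checks
# recommended in "Applying configurations". Counts are illustrative only.
set -euo pipefail

satellite-installer \
  --foreman-foreman-service-puma-workers=8 \
  --foreman-foreman-service-puma-threads-min=16 \
  --foreman-foreman-service-puma-threads-max=16

# Confirm the rendered Puma settings and that the Foreman service is running.
cat /etc/systemd/system/foreman.service.d/installer.conf
systemctl status foreman --no-pager

# Quick sanity check after any change.
satellite-maintain health check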
Red Hat Discovery Release Notes | Red Hat Discovery Release Notes Subscription Central 1-latest Red Hat Discovery Release Notes Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/red_hat_discovery_release_notes/index |
6.5. Managing Server Roles | 6.5. Managing Server Roles Based on the services installed on an IdM server, it can perform various server roles . For example: CA server, DNS server, or key recovery authority (KRA) server. 6.5.1. Viewing Server Roles Web UI: Viewing Server Roles For a complete list of the supported server roles, see IPA Server Topology Server Roles . Role status absent means that no server in the topology is performing the role. Role status enabled means that one or more servers in the topology are performing the role. Figure 6.14. Server Roles in the Web UI Command Line: Viewing Server Roles The ipa config-show command displays all CA servers, NTP servers, and the current CA renewal master: The ipa server-show command displays a list of roles enabled on a particular server. For example, for a list of roles enabled on server.example.com : The ipa server-find --servrole searches for all servers with a particular server role enabled. For example, to search for all CA servers: 6.5.2. Promoting a Replica to a Master CA Server Note This section describes changing the CA renewal master at domain level 1 (see Chapter 7, Displaying and Raising the Domain Level ). For documentation on changing the CA renewal master at domain level 0, see Section D.4, "Promoting a Replica to a Master CA Server" . If your IdM deployment uses an embedded certificate authority (CA), one of the IdM CA servers acts as the master CA: it manages the renewal of CA subsystem certificates and generates certificate revocation lists (CRLs). By default, the master CA is the first server on which the system administrator installed the CA role using the ipa-server-install or ipa-ca-install command. If you plan to take the master CA server offline or decommission it, promote another CA server to take its place as the new CA renewal master: Configure the replica to handle CA subsystem certificate renewal. See Section 6.5.2.1, "Changing the Current CA Renewal Master" for domain level 1. See Section D.4.1, "Changing Which Server Handles Certificate Renewal" for domain level 0. Configure the replica to generate CRLs. See Section 6.5.2.2, "Changing Which Server Generates CRLs" . Before decommissioning the master CA server, make sure the new master works properly. See Section 6.5.2.3, "Verifying That the New Master CA Server Is Configured Correctly" . 6.5.2.1. Changing the Current CA Renewal Master Web UI: Changing the Current CA Renewal Master Select IPA Server Configuration . In the IPA CA renewal master field, select the new CA renewal master. Command Line: Changing the Current CA Renewal Master Use the ipa config-mod --ca-renewal-master-server command: The output confirms that the update was successful. 6.5.2.2. Changing Which Server Generates CRLs To change which server generates certificate revocation lists (CRLs): If you do not know the current CRL generation master, use the ipa-crlgen-manage status command on each IdM certificate authority (CA) to determine whether CRL generation is enabled: On the current CRL generation master, disable the feature: On the other CA host that you want to configure as the new CRL generation master, enable the feature: 6.5.2.3. Verifying That the New Master CA Server Is Configured Correctly Make sure the /var/lib/ipa/pki-ca/publish/MasterCRL.bin file exists on the new master CA server. The file is generated based on the time interval defined in the /etc/pki/pki-tomcat/ca/CS.cfg file using the ca.crl.MasterCRL.autoUpdateInterval parameter. The default value is 240 minutes (4 hours).
Note If you update the ca.crl.MasterCRL.autoUpdateInterval parameter, the change will become effective after the next scheduled CRL update. If the file exists, the new master CA server is configured correctly, and you can safely dismiss the previous CA master system. 6.5.3. Uninstalling the IdM CA service from an IdM server Red Hat recommends that you have a maximum of four Identity Management (IdM) replicas with the CA role in your topology. Therefore, if you have more than four such replicas and performance issues occur due to redundant certificate replication, remove redundant CA service instances from IdM replicas. To do this, you must first decommission the affected IdM replicas completely before re-installing IdM on them, this time without the CA service. Important While you can add the CA role to an IdM replica, IdM does not provide a method to remove only the CA role from an IdM replica: the ipa-ca-install command does not have an --uninstall option. Identify the redundant CA service and follow the procedure in Section 2.4, "Uninstalling an IdM Server" on the IdM replica that hosts this service. On the same host, follow the procedure in Section 2.3.5, "Installing a Server with an External CA as the Root CA" or Section 2.3.6, "Installing Without a CA" , depending on your use case. 6.5.4. Demotion and Promotion of Hidden Replicas After a replica has been installed, you can change whether the replica is hidden or visible: To demote a visible replica to a hidden replica: If the replica is a CA renewal master, move the service to another replica. For details, see Section 6.5.2.1, "Changing the Current CA Renewal Master" . Change the state of the replica to hidden : To promote a hidden replica to a visible replica, enter: Note The hidden replica feature is available in Red Hat Enterprise Linux 7.7 and later as a Technology Preview and, therefore, not supported. | [
"ipa config-show IPA masters: server1.example.com, server2.example.com, server3.example.com IPA CA servers: server1.example.com, server2.example.com IPA NTP servers: server1.example.com, server2.example.com, server3.example.com IPA CA renewal master: server1.example.com",
"ipa server-show Server name: server.example.com Enabled server roles: CA server, DNS server, NTP server, KRA server",
"ipa server-find --servrole \"CA server\" --------------------- 2 IPA servers matched --------------------- Server name: server1.example.com Server name: server2.example.com ---------------------------- Number of entries returned 2 ----------------------------",
"ipa config-mod --ca-renewal-master-server new_ca_renewal_master.example.com IPA masters: old_ca_renewal_master.example.com, new_ca_renewal_master.example.com IPA CA servers: old_ca_renewal_master.example.com, new_ca_renewal_master.example.com IPA NTP servers: old_ca_renewal_master.example.com, new_ca_renewal_master.example.com IPA CA renewal master: new_ca_renewal_master.example.com",
"ipa-crlgen-manage status CRL generation: enabled",
"ipa-crlgen-manage disable",
"ipa-crlgen-manage enable",
"ipa server-state replica.idm.example.com --state=hidden",
"ipa server-state replica.idm.example.com --state=enabled"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/server-roles |
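Putting the steps of Section 6.5.2.2 together, moving CRL generation from one CA server to another might look like the following sketch. The host names are placeholders, and running the commands over ssh is an assumption for illustration; you can equally log in to each server and run them locally:

ssh root@old-ca.example.com ipa-crlgen-manage disable
ssh root@new-ca.example.com ipa-crlgen-manage enable
ssh root@new-ca.example.com ipa-crlgen-manage status

The final status command on the new host should report that CRL generation is enabled.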
Evaluating your automation controller job runs using the job explorer | Evaluating your automation controller job runs using the job explorer Red Hat Ansible Automation Platform 2.4 Review jobs and templates in greater detail by applying filters and sorting by attributes Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/evaluating_your_automation_controller_job_runs_using_the_job_explorer/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/logging_monitoring_and_troubleshooting_guide/making-open-source-more-inclusive |
Chapter 5. Identity Brokering APIs | Chapter 5. Identity Brokering APIs Red Hat build of Keycloak can delegate authentication to a parent IDP for login. A typical example of this is the case where you want users to be able to log in through a social provider such as Facebook or Google. You can also link existing accounts to a brokered IDP. This section describes some APIs that your applications can use as it pertains to identity brokering. 5.1. Retrieving external IDP tokens Red Hat build of Keycloak allows you to store tokens and responses from the authentication process with the external IDP. For that, you can use the Store Token configuration option on the IDP's settings page. Application code can retrieve these tokens and responses to pull in extra user information, or to securely invoke requests on the external IDP. For example, an application might want to use the Google token to invoke on other Google services and REST APIs. To retrieve a token for a particular identity provider you need to send a request as follows: An application must have authenticated with Red Hat build of Keycloak and have received an access token. This access token will need to have the broker client-level role read-token set. This means that the user must have a role mapping for this role and the client application must have that role within its scope. In this case, given that you are accessing a protected service in Red Hat build of Keycloak, you need to send the access token issued by Red Hat build of Keycloak during the user authentication. In the broker configuration page you can automatically assign this role to newly imported users by turning on the Stored Tokens Readable switch. These external tokens can be re-established by either logging in again through the provider, or using the client initiated account linking API. 5.2. Client initiated account linking Some applications want to integrate with social providers like Facebook, but do not want to provide an option to login via these social providers. Red Hat build of Keycloak offers a browser-based API that applications can use to link an existing user account to a specific external IDP. This is called client-initiated account linking. Account linking can only be initiated by OIDC applications. The way it works is that the application forwards the user's browser to a URL on the Red Hat build of Keycloak server requesting that it wants to link the user's account to a specific external provider (i.e. Facebook). The server initiates a login with the external provider. The browser logs in at the external provider and is redirected back to the server. The server establishes the link and redirects back to the application with a confirmation. There are some preconditions that must be met by the client application before it can initiate this protocol: The desired identity provider must be configured and enabled for the user's realm in the admin console. The user account must already be logged in as an existing user via the OIDC protocol The user must have an account.manage-account or account.manage-account-links role mapping. The application must be granted the scope for those roles within its access token The application must have access to its access token as it needs information within it to generate the redirect URL. To initiate the login, the application must fabricate a URL and redirect the user's browser to this URL. 
The URL looks like this: Here's a description of each path and query param: provider This is the provider alias of the external IDP that you defined in the Identity Provider section of the admin console. client_id This is the OIDC client id of your application. When you registered the application as a client in the admin console, you had to specify this client id. redirect_uri This is the application callback URL you want to redirect to after the account link is established. It must be a valid client redirect URI pattern. In other words, it must match one of the valid URL patterns you defined when you registered the client in the admin console. nonce This is a random string that your application must generate. hash This is a Base64 URL encoded hash. This hash is generated by Base64 URL encoding a SHA-256 hash of nonce + token.getSessionState() + token.getIssuedFor() + provider . The token values are obtained from the OIDC access token. Basically you are hashing the random nonce, the user session id, the client id, and the identity provider alias you want to access. Here's an example of Java Servlet code that generates the URL to establish the account link. KeycloakSecurityContext session = (KeycloakSecurityContext) httpServletRequest.getAttribute(KeycloakSecurityContext.class.getName()); AccessToken token = session.getToken(); String clientId = token.getIssuedFor(); String nonce = UUID.randomUUID().toString(); MessageDigest md = null; try { md = MessageDigest.getInstance("SHA-256"); } catch (NoSuchAlgorithmException e) { throw new RuntimeException(e); } String input = nonce + token.getSessionState() + clientId + provider; byte[] check = md.digest(input.getBytes(StandardCharsets.UTF_8)); String hash = Base64Url.encode(check); request.getSession().setAttribute("hash", hash); String redirectUri = ...; String accountLinkUrl = KeycloakUriBuilder.fromUri(authServerRootUrl) .path("/realms/{realm}/broker/{provider}/link") .queryParam("nonce", nonce) .queryParam("hash", hash) .queryParam("client_id", clientId) .queryParam("redirect_uri", redirectUri).build(realm, provider).toString(); Why is this hash included? We do this so that the auth server is guaranteed to know that the client application initiated the request and no other rogue app just randomly asked for a user account to be linked to a specific provider. The auth server will first check to see if the user is logged in by checking the SSO cookie set at login. It will then try to regenerate the hash based on the current login and match it up to the hash sent by the application. After the account has been linked, the auth server will redirect back to the redirect_uri . If there is a problem servicing the link request, the auth server may or may not redirect back to the redirect_uri . The browser may just end up at an error page instead of being redirected back to the application. If there is an error condition and the auth server deems it safe enough to redirect back to the client app, an additional error query parameter will be appended to the redirect_uri . Warning While this API guarantees that the application initiated the request, it does not completely prevent CSRF attacks for this operation. The application is still responsible for guarding against CSRF attacks targeted at itself. 5.2.1. Refreshing external tokens If you are using the external token generated by logging into the provider (i.e. a Facebook or GitHub token), you can refresh this token by re-initiating the account linking API. | [
"GET /realms/{realm}/broker/{provider_alias}/token HTTP/1.1 Host: localhost:8080 Authorization: Bearer <KEYCLOAK ACCESS TOKEN>",
"/{auth-server-root}/realms/{realm}/broker/{provider}/link?client_id={id}&redirect_uri={uri}&nonce={nonce}&hash={hash}",
"KeycloakSecurityContext session = (KeycloakSecurityContext) httpServletRequest.getAttribute(KeycloakSecurityContext.class.getName()); AccessToken token = session.getToken(); String clientId = token.getIssuedFor(); String nonce = UUID.randomUUID().toString(); MessageDigest md = null; try { md = MessageDigest.getInstance(\"SHA-256\"); } catch (NoSuchAlgorithmException e) { throw new RuntimeException(e); } String input = nonce + token.getSessionState() + clientId + provider; byte[] check = md.digest(input.getBytes(StandardCharsets.UTF_8)); String hash = Base64Url.encode(check); request.getSession().setAttribute(\"hash\", hash); String redirectUri = ...; String accountLinkUrl = KeycloakUriBuilder.fromUri(authServerRootUrl) .path(\"/realms/{realm}/broker/{provider}/link\") .queryParam(\"nonce\", nonce) .queryParam(\"hash\", hash) .queryParam(\"client_id\", clientId) .queryParam(\"redirect_uri\", redirectUri).build(realm, provider).toString();"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_developer_guide/identity_brokering_apis |
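The token retrieval request described in Section 5.1 can be exercised with any HTTP client. A minimal curl sketch, where the server base URL, realm name, provider alias, and access token are placeholders for your own values, and the access token must carry the broker read-token role as described above:

curl -H "Authorization: Bearer $KEYCLOAK_ACCESS_TOKEN" \
  "https://keycloak.example.com/realms/myrealm/broker/facebook/token"

The response contains the stored token and response data for that identity provider, which your application can then use to call the provider's own APIs.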
Chapter 20. Configuring Network Encryption in Red Hat Gluster Storage | Chapter 20. Configuring Network Encryption in Red Hat Gluster Storage Network encryption is the process of converting data into a cryptic format or code so that it can be securely transmitted on a network. Encryption prevents unauthorized use of the data. Red Hat Gluster Storage supports network encryption using TLS/SSL. When network encryption is enabled, Red Hat Gluster Storage uses TLS/SSL for authentication and authorization, in place of the authentication framework that is used for non-encrypted connections. The following types of encryption are supported: I/O encryption Encryption of the I/O connections between the Red Hat Gluster Storage clients and servers. Management encryption Encryption of management ( glusterd ) connections within a trusted storage pool, and between glusterd and NFS Ganesha or SMB clients. Network encryption is configured in the following files: /etc/ssl/glusterfs.pem Certificate file containing the system's uniquely signed TLS certificate. This file is unique for each system and must not be shared with others. /etc/ssl/glusterfs.key This file contains the system's unique private key. This file must not be shared with others. /etc/ssl/glusterfs.ca This file contains the certificates of the Certificate Authorities (CA) who have signed the certificates. The glusterfs.ca file must be identical on all servers in the trusted pool, and must contain the certificates of the signing CA for all servers and all clients. All clients should also have a .ca file that contains the certificates of the signing CA for all the servers. Red Hat Gluster Storage does not use the global CA certificates that come with the system, so you need to either create your own self-signed certificates, or create certificates and have them signed by a Certificate Authority. If you are using self-signed certificates, the CA file for the servers is a concatenation of the relevant .pem files of every server and every client. The client CA file is a concatenation of the certificate files of every server. /var/lib/glusterd/secure-access This file is required for management encryption. It enables encryption on the management ( glusterd ) connections between glusterd of all servers and the connection between clients, and contains any configuration required by the Certificate Authority. The glusterd service of all servers uses this file to fetch volfiles and notify the clients with the volfile changes. This file must be present on all servers and all clients for management encryption to work correctly. It can be empty, but most configurations require at least one line to set the certificate depth ( transport.socket.ssl-cert-depth ) required by the Certificate Authority. 20.1. Preparing Certificates To configure network encryption, each server and client needs a signed certificate and a private key. There are two options for certificates. Self-signed certificate Generating and signing the certificate yourself. Certificate Authority (CA) signed certificate Generating the certificate and then requesting that a Certificate Authority sign it. Both of these options ensure that data transmitted over the network cannot be accessed by a third party, but certificates signed by a Certificate Authority imply an added level of trust and verification to a customer using your storage. Procedure 20.1. Preparing a self-signed certificate Generate and sign certificates for each server and client Perform the following steps on each server and client. 
Generate a private key for this machine Generate a self-signed certificate for this machine The following command generates a signed certificate that expires in 365 days, instead of the default 30 days. Provide a short name for this machine in place of COMMONNAME . This is generally a hostname, FQDN, or IP address. Generate client-side certificate authority lists From the first server, concatenate the /etc/ssl/glusterfs.pem files from all servers into a single file called glusterfs.ca , and place this file in the /etc/ssl directory on all clients. For example, running the following commands from server1 creates a certificate authority list ( .ca file) that contains the certificates ( .pem files) of two servers, and copies the certificate authority list ( .ca file) to three clients. Generate server-side glusterfs.ca files From the first server, append the certificates ( /etc/ssl/glusterfs.pem files) from all clients to the end of the certificate authority list ( /etc/ssl/glusterfs.ca file) generated in the step. For example, running the following commands from server1 appends the certificates ( .pem files) of three clients to the certificate authority list ( .ca file) on server1 , and then copies that certificate authority list ( .ca file) to one other server. Verify server certificates Run the following command in the /etc/ssl directory on the servers to verify the certificate on that machine against the Certificate Authority list. Your certificate is correct if the output of this command is glusterfs.pem: OK . Note This process does not work for self-signed client certificates. Procedure 20.2. Preparing a Common Certificate Authority certificate Perform the following steps on each server and client you wish to authorize. Generate a private key Generate a certificate signing request The following command generates a certificate signing request for a certificate that expires in 365 days, instead of the default 30 days. Provide a short name for this machine in place of COMMONNAME . This is generally a hostname, FQDN, or IP address. Send the generated glusterfs.csr file to your Certificate Authority Your Certificate Authority provides a signed certificate for this machine in the form of a .pem file, and the certificates of the Certificate Authority in the form of a .ca file. Place the .pem file provided by the Certificate Authority Ensure that the .pem file is called glusterfs.pem . Place this file in the /etc/ssl directory of this server only. Place the .ca file provided by the Certificate Authority Ensure that the .ca file is called glusterfs.ca . Place the .ca file in the /etc/ssl directory of all servers. Verify your certificates Run the following command in the /etc/ssl directory on all clients and servers to verify the certificate on that machine against the Certificate Authority list. Your certificate is correct if the output of this command is glusterfs.pem: OK . | [
"openssl genrsa -out /etc/ssl/glusterfs.key 2048",
"openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj \"/CN= COMMONNAME \" -days 365 -out /etc/ssl/glusterfs.pem",
"cat /etc/ssl/glusterfs.pem > /etc/ssl/glusterfs.ca ssh user@server2 cat /etc/ssl/glusterfs.pem >> /etc/ssl/glusterfs.ca scp /etc/ssl/glusterfs.ca client1:/etc/ssl/glusterfs.ca scp /etc/ssl/glusterfs.ca client2:/etc/ssl/glusterfs.ca scp /etc/ssl/glusterfs.ca client3:/etc/ssl/glusterfs.ca",
"ssh user@client1 cat /etc/ssl/glusterfs.pem >> /etc/ssl/glusterfs.ca ssh user@client2 cat /etc/ssl/glusterfs.pem >> /etc/ssl/glusterfs.ca ssh user@client3 cat /etc/ssl/glusterfs.pem >> /etc/ssl/glusterfs.ca scp /etc/ssl/glusterfs.ca server2:/etc/ssl/glusterfs.ca",
"openssl verify -verbose -CAfile glusterfs.ca glusterfs.pem",
"openssl genrsa -out /etc/ssl/glusterfs.key 2048",
"openssl req -new -sha256 -key /etc/ssl/glusterfs.key -subj '/CN=<COMMONNAME>' -days 365 -out glusterfs.csr",
"openssl verify -verbose -CAfile glusterfs.ca glusterfs.pem"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-Network_Encryption |
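Because the verification step has to succeed on every machine, it can help to run it across the pool in one pass. A small sketch, assuming placeholder host names and passwordless ssh access; otherwise run the openssl verify command from the procedures above locally on each machine. For CA-signed certificates you can include clients in the list as well, but as noted above the check does not work for self-signed client certificates:

for host in server1.example.com server2.example.com server3.example.com; do
  echo "== $host =="
  ssh root@"$host" 'cd /etc/ssl && openssl verify -verbose -CAfile glusterfs.ca glusterfs.pem'
done

Every host should print glusterfs.pem: OK; any other output points to a mismatch between that machine's certificate and the concatenated glusterfs.ca file.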
Chapter 2. Preparing to update a cluster | Chapter 2. Preparing to update a cluster 2.1. Preparing to update to OpenShift Container Platform 4.14 Learn more about administrative tasks that cluster admins must perform to successfully initialize an update, as well as optional guidelines for ensuring a successful update. 2.1.1. RHEL 9.2 micro-architecture requirement change OpenShift Container Platform is now based on the RHEL 9.2 host operating system. The micro-architecture requirements are now increased to x86_64-v2, Power9, and Z14. See the RHEL micro-architecture requirements documentation . You can verify compatibility before updating by following the procedures outlined in this KCS article . Important Without the correct micro-architecture requirements, the update process will fail. Make sure you purchase the appropriate subscription for each architecture. For more information, see Get Started with Red Hat Enterprise Linux - additional architectures 2.1.2. Kubernetes API deprecations and removals OpenShift Container Platform 4.14 uses Kubernetes 1.27, which removed several deprecated APIs. A cluster administrator must provide a manual acknowledgment before the cluster can be updated from OpenShift Container Platform 4.13 to 4.14. This is to help prevent issues after upgrading to OpenShift Container Platform 4.14, where APIs that have been removed are still in use by workloads, tools, or other components running on or interacting with the cluster. Administrators must evaluate their cluster for any APIs in use that will be removed and migrate the affected components to use the appropriate new API version. After this evaluation and migration is complete, the administrator can provide the acknowledgment. Before you can update your OpenShift Container Platform 4.13 cluster to 4.14, you must provide the administrator acknowledgment. 2.1.2.1. Removed Kubernetes APIs OpenShift Container Platform 4.14 uses Kubernetes 1.27, which removed the following deprecated APIs. You must migrate manifests and API clients to use the appropriate API version. For more information about migrating removed APIs, see the Kubernetes documentation . Table 2.1. APIs removed from Kubernetes 1.27 Resource Removed API Migrate to CSIStorageCapacity storage.k8s.io/v1beta1 storage.k8s.io/v1 2.1.2.2. Evaluating your cluster for removed APIs There are several methods to help administrators identify where APIs that will be removed are in use. However, OpenShift Container Platform cannot identify all instances, especially workloads that are idle or external tools that are used. It is the responsibility of the administrator to properly evaluate all workloads and other integrations for instances of removed APIs. 2.1.2.2.1. Reviewing alerts to identify uses of removed APIs Two alerts fire when an API is in use that will be removed in the next release: APIRemovedInNextReleaseInUse - for APIs that will be removed in the next OpenShift Container Platform release. APIRemovedInNextEUSReleaseInUse - for APIs that will be removed in the next OpenShift Container Platform Extended Update Support (EUS) release. If either of these alerts is firing in your cluster, review the alerts and take action to clear the alerts by migrating manifests and API clients to use the new API version. Use the APIRequestCount API to get more information about which APIs are in use and which workloads are using removed APIs, because the alerts do not provide this information. Additionally, some APIs might not trigger these alerts but are still captured by APIRequestCount .
The alerts are tuned to be less sensitive to avoid alerting fatigue in production systems. 2.1.2.2.2. Using APIRequestCount to identify uses of removed APIs You can use the APIRequestCount API to track API requests and review whether any of them are using one of the removed APIs. Prerequisites You must have access to the cluster as a user with the cluster-admin role. Procedure Run the following command and examine the REMOVEDINRELEASE column of the output to identify the removed APIs that are currently in use: USD oc get apirequestcounts Example output NAME REMOVEDINRELEASE REQUESTSINCURRENTHOUR REQUESTSINLAST24H ... csistoragecapacities.v1.storage.k8s.io 14 380 csistoragecapacities.v1beta1.storage.k8s.io 1.27 0 16 custompolicydefinitions.v1beta1.capabilities.3scale.net 8 158 customresourcedefinitions.v1.apiextensions.k8s.io 1407 30148 ... Important You can safely ignore the following entries that appear in the results: The system:serviceaccount:kube-system:generic-garbage-collector and the system:serviceaccount:kube-system:namespace-controller users might appear in the results because these services invoke all registered APIs when searching for resources to remove. The system:kube-controller-manager and system:cluster-policy-controller users might appear in the results because they walk through all resources while enforcing various policies. You can also use -o jsonpath to filter the results: USD oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}' Example output 1.27 csistoragecapacities.v1beta1.storage.k8s.io 1.29 flowschemas.v1beta2.flowcontrol.apiserver.k8s.io 1.29 prioritylevelconfigurations.v1beta2.flowcontrol.apiserver.k8s.io 2.1.2.2.3. Using APIRequestCount to identify which workloads are using the removed APIs You can examine the APIRequestCount resource for a given API version to help identify which workloads are using the API. Prerequisites You must have access to the cluster as a user with the cluster-admin role. Procedure Run the following command and examine the username and userAgent fields to help identify the workloads that are using the API: USD oc get apirequestcounts <resource>.<version>.<group> -o yaml For example: USD oc get apirequestcounts csistoragecapacities.v1beta1.storage.k8s.io -o yaml You can also use -o jsonpath to extract the username and userAgent values from an APIRequestCount resource: USD oc get apirequestcounts csistoragecapacities.v1beta1.storage.k8s.io \ -o jsonpath='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{","}{.username}{","}{.userAgent}{"\n"}{end}' \ | sort -k 2 -t, -u | column -t -s, -NVERBS,USERNAME,USERAGENT Example output VERBS USERNAME USERAGENT list watch system:kube-controller-manager cluster-policy-controller/v0.0.0 list watch system:kube-controller-manager kube-controller-manager/v1.26.5+0abcdef list watch system:kube-scheduler kube-scheduler/v1.26.5+0abcdef 2.1.2.3. Migrating instances of removed APIs For information about how to migrate removed Kubernetes APIs, see the Deprecated API Migration Guide in the Kubernetes documentation. 2.1.2.4. Providing the administrator acknowledgment After you have evaluated your cluster for any removed APIs and have migrated any removed APIs, you can acknowledge that your cluster is ready to upgrade from OpenShift Container Platform 4.13 to 4.14. 
Warning Be aware that all responsibility falls on the administrator to ensure that all uses of removed APIs have been resolved and migrated as necessary before providing this administrator acknowledgment. OpenShift Container Platform can assist with the evaluation, but cannot identify all possible uses of removed APIs, especially idle workloads or external tools. Prerequisites You must have access to the cluster as a user with the cluster-admin role. Procedure Run the following command to acknowledge that you have completed the evaluation and your cluster is ready for the Kubernetes API removals in OpenShift Container Platform 4.14: USD oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.13-kube-1.27-api-removals-in-4.14":"true"}}' --type=merge 2.1.3. Assessing the risk of conditional updates A conditional update is an update target that is available but not recommended due to a known risk that applies to your cluster. The Cluster Version Operator (CVO) periodically queries the OpenShift Update Service (OSUS) for the most recent data about update recommendations, and some potential update targets might have risks associated with them. The CVO evaluates the conditional risks, and if the risks are not applicable to the cluster, then the target version is available as a recommended update path for the cluster. If the risk is determined to be applicable, or if for some reason CVO cannot evaluate the risk, then the update target is available to the cluster as a conditional update. When you encounter a conditional update while you are trying to update to a target version, you must assess the risk of updating your cluster to that version. Generally, if you do not have a specific need to update to that target version, it is best to wait for a recommended update path from Red Hat. However, if you have a strong reason to update to that version, for example, if you need to fix an important CVE, then the benefit of fixing the CVE might outweigh the risk of the update being problematic for your cluster. You can complete the following tasks to determine whether you agree with the Red Hat assessment of the update risk: Complete extensive testing in a non-production environment to the extent that you are comfortable completing the update in your production environment. Follow the links provided in the conditional update description, investigate the bug, and determine if it is likely to cause issues for your cluster. If you need help understanding the risk, contact Red Hat Support. Additional resources Evaluation of update availability 2.1.4. etcd backups before cluster updates etcd backups record the state of your cluster and all of its resource objects. You can use backups to attempt restoring the state of a cluster in disaster scenarios where you cannot recover a cluster in its currently dysfunctional state. In the context of updates, you can attempt an etcd restoration of the cluster if an update introduced catastrophic conditions that cannot be fixed without reverting to the previous cluster version. etcd restorations might be destructive and destabilizing to a running cluster; use them only as a last resort. Warning Due to their high consequences, etcd restorations are not intended to be used as a rollback solution. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat support. There are several factors that affect the viability of an etcd restoration. For more information, see "Backing up etcd data" and "Restoring to a cluster state".
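The "Backing up etcd" procedure referenced above comes down to running a backup helper on one healthy control plane node shortly before you start the update. A hedged sketch only: the node name is a placeholder, and the script path and backup directory shown here are the ones used by that referenced procedure, so follow it for the exact, version-appropriate steps:

oc debug node/<control_plane_node> -- chroot /host /usr/local/bin/cluster-backup.sh /home/core/assets/backup

Taking a fresh backup immediately before updating gives a potential restore the most recent possible cluster state to work from.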
Additional resources Backing up etcd Restoring to a cluster state 2.1.5. Best practices for cluster updates OpenShift Container Platform provides a robust update experience that minimizes workload disruptions during an update. Updates will not begin unless the cluster is in an upgradeable state at the time of the update request. This design enforces some key conditions before initiating an update, but there are a number of actions you can take to increase your chances of a successful cluster update. 2.1.5.1. Choose versions recommended by the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations based on cluster characteristics such as the cluster's subscribed channel. The Cluster Version Operator saves these recommendations as either recommended or conditional updates. While it is possible to attempt an update to a version that is not recommended by OSUS, following a recommended update path protects users from encountering known issues or unintended consequences on the cluster. Choose only update targets that are recommended by OSUS to ensure a successful update. 2.1.5.2. Address all critical alerts on the cluster Critical alerts must always be addressed as soon as possible, but it is especially important to address these alerts and resolve any problems before initiating a cluster update. Failing to address critical alerts before beginning an update can cause problematic conditions for the cluster. In the Administrator perspective of the web console, navigate to Observe Alerting to find critical alerts. 2.1.5.2.1. Ensure that duplicated encoding headers are removed Before updating, you will receive a DuplicateTransferEncodingHeadersDetected alert if any route records a duplicate Transfer-Encoding header issue. This is due to the upgrade from HAProxy 2.2 in OpenShift Container Platform releases to HAProxy 2.6 in OpenShift Container Platform 4.14. Failing to address this alert will result in applications that send multiple Transfer-Encoding headers becoming unreachable through routes. To mitigate this issue, update any problematic applications to no longer send multiple Transfer-Encoding headers. For example, this could require removing duplicated headers in your application configuration file. For more information, see this Red Hat Knowledgebase article . 2.1.5.3. Ensure that the cluster is in an Upgradable state When one or more Operators have not reported their Upgradeable condition as True for more than an hour, the ClusterNotUpgradeable warning alert is triggered in the cluster. In most cases this alert does not block patch updates, but you cannot perform a minor version update until you resolve this alert and all Operators report Upgradeable as True . For more information about the Upgradeable condition, see "Understanding cluster Operator condition types" in the additional resources section. 2.1.5.4. Ensure that enough spare nodes are available A cluster should not be running with little to no spare node capacity, especially when initiating a cluster update. Nodes that are not running and available may limit a cluster's ability to perform an update with minimal disruption to cluster workloads. Depending on the configured value of the cluster's maxUnavailable spec, the cluster might not be able to apply machine configuration changes to nodes if there is an unavailable node. 
Additionally, if compute nodes do not have enough spare capacity, workloads might not be able to temporarily shift to another node while the first node is taken offline for an update. Make sure that you have enough available nodes in each worker pool, as well as enough spare capacity on your compute nodes, to increase the chance of successful node updates. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. 2.1.5.5. Ensure that the cluster's PodDisruptionBudget is properly configured You can use the PodDisruptionBudget object to define the minimum number or percentage of pod replicas that must be available at any given time. This configuration protects workloads from disruptions during maintenance tasks such as cluster updates. However, it is possible to configure the PodDisruptionBudget for a given topology in a way that prevents nodes from being drained and updated during a cluster update. When planning a cluster update, check the configuration of the PodDisruptionBudget object for the following factors: For highly available workloads, make sure there are replicas that can be temporarily taken offline without being prohibited by the PodDisruptionBudget . For workloads that aren't highly available, make sure they are either not protected by a PodDisruptionBudget or have some alternative mechanism for draining these workloads eventually, such as periodic restart or guaranteed eventual termination. Additional resources Understanding cluster Operator condition types 2.2. Preparing to update a cluster with manually maintained credentials The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. For minor releases, for example, from 4.12 to 4.13, this status prevents you from updating until you have addressed any updated permissions and annotated the CloudCredential resource to indicate that the permissions are updated as needed for the version. This annotation changes the Upgradable status to True . For z-stream releases, for example, from 4.13.0 to 4.13.1, no permissions are added or changed, so the update is not blocked. Before updating a cluster with manually maintained credentials, you must accommodate any new or changed credentials in the release image for the version of OpenShift Container Platform you are updating to. 2.2.1. Update requirements for clusters with manually maintained credentials Before you update a cluster that uses manually maintained credentials with the Cloud Credential Operator (CCO), you must update the cloud provider resources for the new release. If the cloud credential management for your cluster was configured using the CCO utility ( ccoctl ), use the ccoctl utility to update the resources. Clusters that were configured to use manual mode without the ccoctl utility require manual updates for the resources. After updating the cloud provider resources, you must update the upgradeable-to annotation for the cluster to indicate that it is ready to update. Note The process to update the cloud provider resources and the upgradeable-to annotation can only be completed by using command line tools. 2.2.1.1. Cloud credential configuration options and update requirements by platform type Some platforms only support using the CCO in one mode. 
For clusters that are installed on those platforms, the platform type determines the credentials update requirements. For platforms that support using the CCO in multiple modes, you must determine which mode the cluster is configured to use and take the required actions for that configuration. Figure 2.1. Credentials update requirements by platform type Red Hat OpenStack Platform (RHOSP) and VMware vSphere These platforms do not support using the CCO in manual mode. Clusters on these platforms handle changes in cloud provider resources automatically and do not require an update to the upgradeable-to annotation. Administrators of clusters on these platforms should skip the manually maintained credentials section of the update process. IBM Cloud and Nutanix Clusters installed on these platforms are configured using the ccoctl utility. Administrators of clusters on these platforms must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Indicate that the cluster is ready to update with the upgradeable-to annotation. Microsoft Azure Stack Hub These clusters use manual mode with long-term credentials and do not use the ccoctl utility. Administrators of clusters on these platforms must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Manually update the cloud provider resources for the new release. Indicate that the cluster is ready to update with the upgradeable-to annotation. Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) Clusters installed on these platforms support multiple CCO modes. The required update process depends on the mode that the cluster is configured to use. If you are not sure what mode the CCO is configured to use on your cluster, you can use the web console or the CLI to determine this information. Additional resources Determining the Cloud Credential Operator mode by using the web console Determining the Cloud Credential Operator mode by using the CLI Extracting and preparing credentials request resources About the Cloud Credential Operator 2.2.1.2. Determining the Cloud Credential Operator mode by using the web console You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the web console. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select CloudCredential . On the CloudCredential details page, select the YAML tab. In the YAML block, check the value of spec.credentialsMode . The following values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. 
Important To determine the specific configuration of an AWS, GCP, or global Microsoft Azure cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating. An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials If you see one of these values, your cluster is using mint or passthrough mode with the root secret present. If you do not see these values, your cluster is using the CCO in mint mode with the root secret removed. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, you must check the cluster Authentication object YAML values. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select Authentication . On the Authentication details page, select the YAML tab. In the YAML block, check the value of the .spec.serviceAccountIssuer parameter. A value that contains a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. An empty value ( '' ) indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. steps If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the next part of the update process. If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the next part of the update process. If your cluster was configured using the CCO utility ( ccoctl ), you must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Update the upgradeable-to annotation to indicate that the cluster is ready to update. If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility, you must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Manually update the cloud provider resources for the new release. Update the upgradeable-to annotation to indicate that the cluster is ready to update. Additional resources Extracting and preparing credentials request resources 2.2.1.3.
Determining the Cloud Credential Operator mode by using the CLI You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the CLI. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. To determine the mode that the CCO is configured to use, enter the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS, GCP, or global Microsoft Azure cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating. An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, run the following command: USD oc get secret <secret_name> \ -n=kube-system where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. If the root secret is present, the output of this command returns information about the secret. An error indicates that the root secret is not present on the cluster. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, run the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. An empty output indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. steps If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the next part of the update process. If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the next part of the update process.
If your cluster was configured using the CCO utility ( ccoctl ), you must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Update the upgradeable-to annotation to indicate that the cluster is ready to update. If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility, you must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Manually update the cloud provider resources for the new release. Update the upgradeable-to annotation to indicate that the cluster is ready to update. Additional resources Extracting and preparing credentials request resources 2.2.2. Extracting and preparing credentials request resources Before updating a cluster that uses the Cloud Credential Operator (CCO) in manual mode, you must extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Prerequisites Install the OpenShift CLI ( oc ) that matches the version for your updated version. Log in to the cluster as user with cluster-admin privileges. Procedure Obtain the pull spec for the update that you want to apply by running the following command: USD oc adm upgrade The output of this command includes pull specs for the available updates similar to the following: Partial example output ... Recommended updates: VERSION IMAGE 4.14.0 quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032 ... Set a USDRELEASE_IMAGE variable with the release image that you want to use by running the following command: USD RELEASE_IMAGE=<update_pull_spec> where <update_pull_spec> is the pull spec for the release image that you want to use. For example: quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032 Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --to=<path_to_directory_for_credentials_requests> 2 1 The --included parameter includes only the manifests that your specific cluster configuration requires for the target release. 2 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. For each CredentialsRequest CR in the release image, ensure that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. This field is where the generated secrets that hold the credentials configuration are stored. Sample AWS CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1 1 This field indicates the namespace which must exist to hold the generated secret. 
The CredentialsRequest CRs for other platforms have a similar format with different platform-specific values. For any CredentialsRequest CR for which the cluster does not already have a namespace with the name specified in spec.secretRef.namespace , create the namespace by running the following command: USD oc create namespace <component_namespace> steps If the cloud credential management for your cluster was configured using the CCO utility ( ccoctl ), configure the ccoctl utility for a cluster update and use it to update your cloud provider resources. If your cluster was not configured with the ccoctl utility, manually update your cloud provider resources. Additional resources Configuring the Cloud Credential Operator utility for a cluster update Manually updating cloud provider resources 2.2.3. Configuring the Cloud Credential Operator utility for a cluster update To upgrade a cluster that uses the Cloud Credential Operator (CCO) in manual mode to create and manage cloud credentials from outside of the cluster, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Your cluster was configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image}) Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 2.2.4. Updating cloud provider resources with the Cloud Credential Operator utility The process for upgrading an OpenShift Container Platform cluster that was configured using the CCO utility ( ccoctl ) is similar to creating the cloud provider resources during installation. 
Note On AWS clusters, some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. You have extracted and configured the ccoctl binary from the release image. Procedure Use the ccoctl tool to process all CredentialsRequest objects by running the command for your cloud provider. The following commands process CredentialsRequest objects: Example 2.1. Amazon Web Services (AWS) USD ccoctl aws create-all \ 1 --name=<name> \ 2 --region=<aws_region> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 4 --output-dir=<path_to_ccoctl_output_dir> \ 5 --create-private-s3-bucket 6 1 To create the AWS resources individually, use the "Creating AWS resources individually" procedure in the "Installing a cluster on AWS with customizations" content. This option might be useful if you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization. 2 Specify the name used to tag any cloud resources that are created for tracking. 3 Specify the AWS region in which cloud resources will be created. 4 Specify the directory containing the files for the component CredentialsRequest objects. 5 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 6 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Example 2.2. Google Cloud Platform (GCP) USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 4 --output-dir=<path_to_ccoctl_output_dir> 5 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. 5 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. Example 2.3. IBM Cloud USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 
3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Example 2.4. Microsoft Azure USD ccoctl azure create-managed-identities \ --name <azure_infra_name> \ 1 --output-dir ./output_dir \ --region <azure_region> \ 2 --subscription-id <azure_subscription_id> \ 3 --credentials-requests-dir <path_to_directory_for_credentials_requests> \ --issuer-url "USD{OIDC_ISSUER_URL}" \ 4 --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \ 5 --installation-resource-group-name "USD{AZURE_INSTALL_RG}" 6 1 The value of the name parameter is used to create an Azure resource group. To use an existing Azure resource group instead of creating a new one, specify the --oidc-resource-group-name argument with the existing group name as its value. 2 Specify the region of the existing cluster. 3 Specify the subscription ID of the existing cluster. 4 Specify the OIDC issuer URL from the existing cluster. You can obtain this value by running the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' 5 Specify the name of the resource group that contains the DNS zone. 6 Specify the Azure resource group name. You can obtain this value by running the following command: USD oc get infrastructure cluster \ -o jsonpath \ --template '{ .status.platformStatus.azure.resourceGroupName }' Example 2.5. Nutanix USD ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . For each CredentialsRequest object, ccoctl creates the required provider resources and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Apply the secrets to your cluster by running the following command: USD ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {} Verification You can verify that the required provider resources and permissions policies are created by querying the cloud provider. For more information, refer to your cloud provider documentation on listing roles or service accounts. steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. Additional resources Indicating that the cluster is ready to upgrade 2.2.5. Manually updating cloud provider resources Before upgrading a cluster with manually maintained credentials, you must create secrets for any new credentials for the release image that you are upgrading to. You must also review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components. 
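For the Secret objects shown in the samples that follow, the data values must be base64 encoded. Rather than encoding each value by hand, one possible approach is to let oc generate the manifest. This is a sketch, not the documented procedure; the secret name, namespace, and key values are placeholders taken from the corresponding CredentialsRequest spec.secretRef:
# Sketch: generate an AWS-style Secret manifest with values already base64 encoded.
# The namespace must exist before applying; create it first if needed:
#   oc create namespace <component_namespace>
oc create secret generic <component_secret> \
  --namespace <component_namespace> \
  --from-literal=aws_access_key_id=<aws_access_key_id> \
  --from-literal=aws_secret_access_key=<aws_secret_access_key> \
  --dry-run=client -o yaml > <component_secret>.yaml

oc apply -f <component_secret>.yaml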
Prerequisites You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. Procedure Create YAML files with secrets for any CredentialsRequest custom resources that the new release image adds. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Example 2.6. Sample AWS YAML files Sample AWS CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample AWS Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Example 2.7. Sample Azure YAML files Note Global Azure and Azure Stack Hub use the same CredentialsRequest object and secret formats. Sample Azure CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Azure Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Example 2.8. Sample GCP YAML files Sample GCP CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/iam.securityReviewer - roles/iam.roleViewer skipServiceCheck: true ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample GCP Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed permissions requirements, update the permissions as required. steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. Additional resources Manually creating long-term credentials for AWS Manually creating long-term credentials for Azure Manually creating long-term credentials for Azure Stack Hub Manually creating long-term credentials for GCP Indicating that the cluster is ready to upgrade 2.2.6. 
Indicating that the cluster is ready to upgrade The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. Prerequisites For the release image that you are upgrading to, you have processed any new credentials manually or by using the Cloud Credential Operator utility ( ccoctl ). You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata field by running the following command: USD oc edit cloudcredential cluster Text to add ... metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number> ... Where <version_number> is the version that you are upgrading to, in the format x.y.z . For example, use 4.12.2 for OpenShift Container Platform 4.12.2. It may take several minutes after adding the annotation for the upgradeable status to change. Verification In the Administrator perspective of the web console, navigate to Administration Cluster Settings . To view the CCO status details, click cloud-credential in the Cluster Operators list. If the Upgradeable status in the Conditions section is False , verify that the upgradeable-to annotation is free of typographical errors. When the Upgradeable status in the Conditions section is True , begin the OpenShift Container Platform upgrade. 2.3. Preflight validation for Kernel Module Management (KMM) Modules Before performing an upgrade on the cluster with applied KMM modules, you must verify that kernel modules installed using KMM are able to be installed on the nodes after the cluster upgrade and possible kernel upgrade. Preflight attempts to validate every Module loaded in the cluster, in parallel. Preflight does not wait for validation of one Module to complete before starting validation of another Module . 2.3.1. Validation kickoff Preflight validation is triggered by creating a PreflightValidationOCP resource in the cluster. This spec contains two fields: releaseImage Mandatory field that provides the name of the release image for the OpenShift Container Platform version the cluster is upgraded to. pushBuiltImage If true , then the images created during the Build and Sign validation are pushed to their repositories. This field is false by default. 2.3.2. Validation lifecycle Preflight validation attempts to validate every module loaded in the cluster. Preflight stops running validation on a Module resource after the validation is successful. If module validation fails, you can change the module definitions and Preflight tries to validate the module again in the loop. If you want to run Preflight validation for an additional kernel, then you should create another PreflightValidationOCP resource for that kernel. After all the modules have been validated, it is recommended to delete the PreflightValidationOCP resource. 2.3.3. Validation status A PreflightValidationOCP resource reports the status and progress of each module in the cluster that it attempts or has attempted to validate in its .status.modules list. Elements of that list contain the following fields: lastTransitionTime The last time the Module resource status transitioned from one status to another. This should be when the underlying status has changed. If that is not known, then using the time when the API field changed is acceptable. name The name of the Module resource. namespace The namespace of the Module resource. 
statusReason Verbal explanation regarding the status. verificationStage Describes the validation stage being executed: image : Image existence verification build : Build process verification sign : Sign process verification verificationStatus The status of the Module verification: true : Verified false : Verification failed error : Error during the verification process unknown : Verification has not started 2.3.4. Preflight validation stages per Module Preflight runs the following validations on every KMM Module present in the cluster: Image validation stage Build validation stage Sign validation stage 2.3.4.1. Image validation stage Image validation is always the first stage of the preflight validation to be executed. If image validation is successful, no other validations are run on that specific module. Image validation consists of two stages: Image existence and accessibility. The code tries to access the image defined for the upgraded kernel in the module and get its manifests. Verify the presence of the kernel module defined in the Module in the correct path for future modprobe execution. If this validation is successful, it probably means that the kernel module was compiled with the correct Linux headers. The correct path is <dirname>/lib/modules/<upgraded_kernel>/ . 2.3.4.2. Build validation stage Build validation is executed only when image validation has failed and there is a build section in the Module that is relevant for the upgraded kernel. Build validation attempts to run the build job and validate that it finishes successfully. Note You must specify the kernel version when running depmod , as shown here: USD RUN depmod -b /opt USD{KERNEL_VERSION} If the PushBuiltImage flag is defined in the PreflightValidationOCP custom resource (CR), it also tries to push the resulting image into its repository. The resulting image name is taken from the definition of the containerImage field of the Module CR. Note If the sign section is defined for the upgraded kernel, then the resulting image will not be the containerImage field of the Module CR, but a temporary image name, because the resulting image should be the product of Sign flow. 2.3.4.3. Sign validation stage Sign validation is executed only when image validation has failed, there is a sign section in the Module resource that is relevant for the upgraded kernel, and, in case the Module contained a build section relevant for the upgraded kernel, build validation finished successfully. Sign validation attempts to run the sign job and validate that it finishes successfully. If the PushBuiltImage flag is defined in the PreflightValidationOCP CR, sign validation also tries to push the resulting image to its registry. The resulting image is always the image defined in the ContainerImage field of the Module . The input image is either the output of the Build stage, or an image defined in the UnsignedImage field. Note If a build section exists, the sign section input image is the build section's output image. Therefore, in order for the input image to be available for the sign section, the PushBuiltImage flag must be defined in the PreflightValidationOCP CR. 2.3.5. Example PreflightValidationOCP resource This section shows an example of the PreflightValidationOCP resource in the YAML format.
The example verifies all of the currently present modules against the upcoming kernel version included in the OpenShift Container Platform release 4.11.18, which the following release image points to: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863 Because .spec.pushBuiltImage is set to true , KMM pushes the resulting images of Build/Sign into the defined repositories. A short sketch of creating this resource and checking its per-module status follows the command listing for this section. apiVersion: kmm.sigs.x-k8s.io/v1beta2 kind: PreflightValidationOCP metadata: name: preflight spec: releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863 pushBuiltImage: true | [
"oc get apirequestcounts",
"NAME REMOVEDINRELEASE REQUESTSINCURRENTHOUR REQUESTSINLAST24H csistoragecapacities.v1.storage.k8s.io 14 380 csistoragecapacities.v1beta1.storage.k8s.io 1.27 0 16 custompolicydefinitions.v1beta1.capabilities.3scale.net 8 158 customresourcedefinitions.v1.apiextensions.k8s.io 1407 30148",
"oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!=\"\")]}{.status.removedInRelease}{\"\\t\"}{.metadata.name}{\"\\n\"}{end}'",
"1.27 csistoragecapacities.v1beta1.storage.k8s.io 1.29 flowschemas.v1beta2.flowcontrol.apiserver.k8s.io 1.29 prioritylevelconfigurations.v1beta2.flowcontrol.apiserver.k8s.io",
"oc get apirequestcounts <resource>.<version>.<group> -o yaml",
"oc get apirequestcounts csistoragecapacities.v1beta1.storage.k8s.io -o yaml",
"oc get apirequestcounts csistoragecapacities.v1beta1.storage.k8s.io -o jsonpath='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{\",\"}{.username}{\",\"}{.userAgent}{\"\\n\"}{end}' | sort -k 2 -t, -u | column -t -s, -NVERBS,USERNAME,USERAGENT",
"VERBS USERNAME USERAGENT list watch system:kube-controller-manager cluster-policy-controller/v0.0.0 list watch system:kube-controller-manager kube-controller-manager/v1.26.5+0abcdef list watch system:kube-scheduler kube-scheduler/v1.26.5+0abcdef",
"oc -n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-4.13-kube-1.27-api-removals-in-4.14\":\"true\"}}' --type=merge",
"oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}",
"oc get secret <secret_name> -n=kube-system",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"oc adm upgrade",
"Recommended updates: VERSION IMAGE 4.14.0 quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032",
"RELEASE_IMAGE=<update_pull_spec>",
"quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --to=<path_to_directory_for_credentials_requests> 2",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\" secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1",
"oc create namespace <component_namespace>",
"RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image})",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"ccoctl aws create-all \\ 1 --name=<name> \\ 2 --region=<aws_region> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 4 --output-dir=<path_to_ccoctl_output_dir> \\ 5 --create-private-s3-bucket 6",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 4 --output-dir=<path_to_ccoctl_output_dir> 5",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"ccoctl azure create-managed-identities --name <azure_infra_name> \\ 1 --output-dir ./output_dir --region <azure_region> \\ 2 --subscription-id <azure_subscription_id> \\ 3 --credentials-requests-dir <path_to_directory_for_credentials_requests> --issuer-url \"USD{OIDC_ISSUER_URL}\" \\ 4 --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \\ 5 --installation-resource-group-name \"USD{AZURE_INSTALL_RG}\" 6",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/iam.securityReviewer - roles/iam.roleViewer skipServiceCheck: true secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"oc edit cloudcredential cluster",
"metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>",
"RUN depmod -b /opt USD{KERNEL_VERSION}",
"quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863",
"apiVersion: kmm.sigs.x-k8s.io/v1beta2 kind: PreflightValidationOCP metadata: name: preflight spec: releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863 pushBuiltImage: true"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/updating_clusters/preparing-to-update-a-cluster |
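Returning to the KMM preflight validation described in section 2.3 above, the example PreflightValidationOCP resource can be created and inspected with standard oc commands. The following is a minimal sketch; the file name is arbitrary, and <release_image_digest> stands in for the release image digest shown in the example above:
# Sketch: create the example PreflightValidationOCP resource and watch per-module status.
cat > preflight.yaml <<'EOF'
apiVersion: kmm.sigs.x-k8s.io/v1beta2
kind: PreflightValidationOCP
metadata:
  name: preflight
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:<release_image_digest>
  pushBuiltImage: true
EOF
oc apply -f preflight.yaml

# .status.modules lists name, verificationStage, and verificationStatus for each Module.
oc get -f preflight.yaml -o jsonpath='{range .status.modules[*]}{.name}{"\t"}{.verificationStage}{"\t"}{.verificationStatus}{"\n"}{end}'

# Delete the resource after all modules have been validated, as recommended above.
oc delete -f preflight.yaml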
7.3. Configuring HugeTLB Huge Pages | 7.3. Configuring HugeTLB Huge Pages Starting with Red Hat Enterprise Linux 7.1, there are two ways of reserving huge pages: at boot time and at run time . Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. However, on NUMA machines, the number of pages is automatically split among NUMA nodes. The run-time method allows you to reserve huge pages per NUMA node. If the run-time reservation is done as early as possible in the boot process, the probability of memory fragmentation is lower. 7.3.1. Configuring Huge Pages at Boot Time To configure huge pages at boot time, add the following parameters to the kernel boot command line: hugepages Defines the number of persistent huge pages configured in the kernel at boot time. The default value is 0. It is only possible to allocate huge pages if there are sufficient physically contiguous free pages in the system. Pages reserved by this parameter cannot be used for other purposes. This value can be adjusted after boot by changing the value of the /proc/sys/vm/nr_hugepages file. In a NUMA system, huge pages assigned with this parameter are divided equally between nodes. You can assign huge pages to specific nodes at runtime by changing the value of the node's /sys/devices/system/node/ node_id /hugepages/hugepages-1048576kB/nr_hugepages file. For more information, read the relevant kernel documentation, which is installed in /usr/share/doc/kernel-doc-kernel_version/Documentation/vm/hugetlbpage.txt by default. hugepagesz Defines the size of persistent huge pages configured in the kernel at boot time. Valid values are 2 MB and 1 GB. The default value is 2 MB. default_hugepagesz Defines the default size of persistent huge pages configured in the kernel at boot time. Valid values are 2 MB and 1 GB. The default value is 2 MB. For details of how to add parameters to the kernel boot command line, see Chapter 3. Listing of kernel parameters and values in the Red Hat Enterprise Linux 7 Kernel Administration Guide. Procedure 7.1. Reserving 1 GB Pages During Early Boot The page size the HugeTLB subsystem supports depends on the architecture. On the AMD64 and Intel 64 architecture, 2 MB huge pages and 1 GB gigantic pages are supported. Create a HugeTLB pool for 1 GB pages by appending the following line to the kernel command-line options in the /etc/default/grub file as root: Regenerate the GRUB2 configuration using the edited default file. If your system uses BIOS firmware, execute the following command: On a system with UEFI firmware, execute the following command: Create a file named /usr/lib/systemd/system/hugetlb-gigantic-pages.service with the following content: Create a file named /usr/lib/systemd/hugetlb-reserve-pages.sh with the following content: On the last line, replace number_of_pages with the number of 1GB pages to reserve and node with the name of the node on which to reserve these pages. Example 7.1. Reserving Pages on node0 and node1 For example, to reserve two 1GB pages on node0 and one 1GB page on node1 , replace the last line with the following code: You can modify it to your needs or add more lines to reserve memory in other nodes. Make the script executable: Enable early boot reservation: Note You can try reserving more 1GB pages at runtime by writing to nr_hugepages at any time. However, to prevent failures due to memory fragmentation, reserve 1GB pages early during the boot process. 7.3.2. 
Configuring Huge Pages at Run Time Use the following parameters to influence huge page behavior at run time: /sys/devices/system/node/ node_id /hugepages/hugepages- size /nr_hugepages Defines the number of huge pages of the specified size assigned to the specified NUMA node. This is supported as of Red Hat Enterprise Linux 7.1. The following example adds twenty 2048 kB huge pages to node2 . /proc/sys/vm/nr_overcommit_hugepages Defines the maximum number of additional huge pages that can be created and used by the system through overcommitting memory. Writing any non-zero value into this file indicates that the system obtains that number of huge pages from the kernel's normal page pool if the persistent huge page pool is exhausted. As these surplus huge pages become unused, they are then freed and returned to the kernel's normal page pool. A scripted sketch of this run-time reservation follows the command listing for this section. | [
"default_hugepagesz=1G hugepagesz=1G",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg",
"[Unit] Description=HugeTLB Gigantic Pages Reservation DefaultDependencies=no Before=dev-hugepages.mount ConditionPathExists=/sys/devices/system/node ConditionKernelCommandLine=hugepagesz=1G [Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/lib/systemd/hugetlb-reserve-pages.sh [Install] WantedBy=sysinit.target",
"#!/bin/sh nodes_path=/sys/devices/system/node/ if [ ! -d USDnodes_path ]; then echo \"ERROR: USDnodes_path does not exist\" exit 1 fi reserve_pages() { echo USD1 > USDnodes_path/USD2/hugepages/hugepages-1048576kB/nr_hugepages } reserve_pages number_of_pages node",
"reserve_pages 2 node0 reserve_pages 1 node1",
"chmod +x /usr/lib/systemd/hugetlb-reserve-pages.sh",
"systemctl enable hugetlb-gigantic-pages",
"numastat -cm | egrep 'Node|Huge' Node 0 Node 1 Node 2 Node 3 Total add AnonHugePages 0 2 0 8 10 HugePages_Total 0 0 0 0 0 HugePages_Free 0 0 0 0 0 HugePages_Surp 0 0 0 0 0 echo 20 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages numastat -cm | egrep 'Node|Huge' Node 0 Node 1 Node 2 Node 3 Total AnonHugePages 0 2 0 8 10 HugePages_Total 0 0 40 0 40 HugePages_Free 0 0 40 0 40 HugePages_Surp 0 0 0 0 0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-memory-configuring-huge-pages |
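A scripted version of the run-time reservation described in section 7.3.2 above, as a rough sketch; run it as root, and note that the node number and page count are illustrative and match the numastat example:
#!/usr/bin/env bash
# Sketch: reserve twenty 2 MB huge pages on NUMA node 2 at run time and confirm.
set -e

echo 20 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages

# numastat -cm reports per-node memory in MB; node 2 should now show
# HugePages_Total 40 (twenty pages x 2 MB), as in the example output above.
numastat -cm | egrep 'Node|Huge'

# Current overcommit limit for surplus huge pages (0 unless changed).
cat /proc/sys/vm/nr_overcommit_hugepages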
Release notes for Eclipse Temurin 11.0.18 | Release notes for Eclipse Temurin 11.0.18 Red Hat build of OpenJDK 11 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.18/index |
Chapter 28. Integrating RHEL systems directly with AD using RHEL System Roles | Chapter 28. Integrating RHEL systems directly with AD using RHEL System Roles With the ad_integration System Role, you can automate a direct integration of a RHEL system with Active Directory (AD) using Red Hat Ansible Automation Platform. This chapter covers the following topics: The ad_integration System Role Variables for the ad_integration RHEL System Role Connecting a RHEL system directly to AD using the ad_integration System Role 28.1. The ad_integration System Role Using the ad_integration System Role, you can directly connect a RHEL system to Active Directory (AD). The role uses the following components: SSSD to interact with the central identity and authentication source realmd to detect available AD domains and configure the underlying RHEL system services, in this case SSSD, to connect to the selected AD domain Note The ad_integration role is for deployments using direct AD integration without an Identity Management (IdM) environment. For IdM environments, use the ansible-freeipa roles. Additional resources Connecting RHEL systems directly to AD using SSSD . 28.2. Variables for the ad_integration RHEL System Role The ad_integration RHEL System Role uses the following parameters: Role Variable Description ad_integration_realm Active Directory realm, or domain name to join. ad_integration_password The password of the user used to authenticate with when joining the machine to the realm. Do not use plain text. Instead, use Ansible Vault to encrypt the value. ad_integration_manage_crypto_policies If true , the ad_integration role will use fedora.linux_system_roles.crypto_policies as needed. Default: false ad_integration_allow_rc4_crypto If true , the ad_integration role will set the crypto policy to allow RC4 encryption. Providing this variable automatically sets ad_integration_manage_crypto_policies to true . Default: false ad_integration_timesync_source Hostname or IP address of time source to synchronize the system clock with. Providing this variable automatically sets ad_integration_manage_timesync to true . Additional resources The /usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file. 28.3. Connecting a RHEL system directly to AD using the ad_integration System Role You can use the ad_integration System Role to configure a direct integration between a RHEL system and an AD domain by running an Ansible playbook. Note Starting with RHEL8, RHEL no longer supports RC4 encryption by default. If it is not possible to enable AES in the AD domain, you must enable the AD-SUPPORT crypto policy and allow RC4 encryption in the playbook. Important Time between the RHEL server and AD must be synchronized. You can ensure this by using the timesync System Role in the playbook. In this example, the RHEL system joins the domain.example.com AD domain, using the AD Administrator user and the password for this user stored in the Ansible vault. The playbook also sets the AD-SUPPORT crypto policy and allows RC4 encryption. To ensure time synchronization between the RHEL system and AD, the playbook sets the adserver.domain.example.com server as the timesync source. Prerequisites Access and permissions to one or more managed nodes . Access and permissions to a control node . On the control node: Red Hat Ansible Engine is installed. The rhel-system-roles package is installed. An inventory file which lists the managed nodes. 
The following ports on the AD domain controllers are open and accessible from the RHEL server: Table 28.1. Ports Required for Direct Integration of Linux Systems into AD Using the ad_integration System Role Source Port Destination Port Protocol Service 1024:65535 53 UDP and TCP DNS 1024:65535 389 UDP and TCP LDAP 1024:65535 636 TCP LDAPS 1024:65535 88 UDP and TCP Kerberos 1024:65535 464 UDP and TCP Kerberos change/set password ( kadmin ) 1024:65535 3268 TCP LDAP Global Catalog 1024:65535 3269 TCP LDAP Global Catalog SSL/TLS 1024:65535 123 UDP NTP/Chrony (Optional) 1024:65535 323 UDP NTP/Chrony (Optional) Procedure Create a new ad_integration.yml file with the following content: --- - hosts: all vars: ad_integration_realm: "domain.example.com" ad_integration_password: !vault | vault encrypted password ad_integration_manage_crypto_policies: true ad_integration_allow_rc4_crypto: true ad_integration_timesync_source: "adserver.domain.example.com" roles: - linux-system-roles.ad_integration --- Optional: Verify playbook syntax. Run the playbook on your inventory file: Verification Display an AD user details, such as the administrator user: 28.4. Additional resources The /usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file. man ansible-playbook(1) | [
"--- - hosts: all vars: ad_integration_realm: \"domain.example.com\" ad_integration_password: !vault | vault encrypted password ad_integration_manage_crypto_policies: true ad_integration_allow_rc4_crypto: true ad_integration_timesync_source: \"adserver.domain.example.com\" roles: - linux-system-roles.ad_integration ---",
"ansible-playbook --syntax-check ad_integration.yml -i inventory_file",
"ansible-playbook -i inventory_file /path/to/file/ad_integration.yml",
"getent passwd [email protected] [email protected]:*:1450400500:1450400513:Administrator:/home/[email protected]:/bin/bash"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/integrating-rhel-systems-directly-with-ad-using-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles |
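One way to produce the vault-encrypted value for ad_integration_password referenced in the playbook above, and to run the playbook with the vault password supplied interactively, is shown in the following sketch. It uses standard Ansible tooling rather than a step from the documented procedure; the literal password and paths are placeholders:
# Sketch: encrypt the AD join password for use as ad_integration_password.
ansible-vault encrypt_string 'MyADJoinPassword' --name 'ad_integration_password'

# Paste the resulting "!vault |" block into the playbook vars, then run:
ansible-playbook --ask-vault-pass -i inventory_file /path/to/file/ad_integration.yml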
Chapter 5. Console [operator.openshift.io/v1] | Chapter 5. Console [operator.openshift.io/v1] Description Console provides a means to configure an operator to manage the console. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleSpec is the specification of the desired behavior of the Console. status object ConsoleStatus defines the observed status of the Console. 5.1.1. .spec Description ConsoleSpec is the specification of the desired behavior of the Console. Type object Property Type Description customization object customization is used to optionally provide a small set of customization options to the web console. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". plugins array (string) plugins defines a list of enabled console plugin names. providers object providers contains configuration for using specific service providers. route object route contains hostname and secret reference that contains the serving certificate. If a custom route is specified, a new route will be created with the provided hostname, under which console will be available. In case of custom hostname uses the default routing suffix of the cluster, the Secret specification for a serving certificate will not be needed. In case of custom hostname points to an arbitrary domain, manual DNS configurations steps are necessary. The default console route will be maintained to reserve the default hostname for console if the custom route is removed. If not specified, default route will be used. DEPRECATED unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. 
observedConfig 3. unsupportedConfigOverrides 5.1.2. .spec.customization Description customization is used to optionally provide a small set of customization options to the web console. Type object Property Type Description addPage object addPage allows customizing actions on the Add page in developer perspective. brand string brand is the default branding of the web console which can be overridden by providing the brand field. There is a limited set of specific brand options. This field controls elements of the console such as the logo. Invalid value will prevent a console rollout. customLogoFile object customLogoFile replaces the default OpenShift logo in the masthead and about dialog. It is a reference to a ConfigMap in the openshift-config namespace. This can be created with a command like 'oc create configmap custom-logo --from-file=/path/to/file -n openshift-config'. Image size must be less than 1 MB due to constraints on the ConfigMap size. The ConfigMap key should include a file extension so that the console serves the file with the correct MIME type. Recommended logo specifications: Dimensions: Max height of 68px and max width of 200px SVG format preferred customProductName string customProductName is the name that will be displayed in page titles, logo alt text, and the about dialog instead of the normal OpenShift product name. developerCatalog object developerCatalog allows to configure the shown developer catalog categories (filters) and types (sub-catalogs). documentationBaseURL string documentationBaseURL links to external documentation are shown in various sections of the web console. Providing documentationBaseURL will override the default documentation URL. Invalid value will prevent a console rollout. perspectives array perspectives allows enabling/disabling of perspective(s) that user can see in the Perspective switcher dropdown. perspectives[] object Perspective defines a perspective that cluster admins want to show/hide in the perspective switcher dropdown projectAccess object projectAccess allows customizing the available list of ClusterRoles in the Developer perspective Project access page which can be used by a project admin to specify roles to other users and restrict access within the project. If set, the list will replace the default ClusterRole options. quickStarts object quickStarts allows customization of available ConsoleQuickStart resources in console. 5.1.3. .spec.customization.addPage Description addPage allows customizing actions on the Add page in developer perspective. Type object Property Type Description disabledActions array (string) disabledActions is a list of actions that are not shown to users. Each action in the list is represented by its ID. 5.1.4. .spec.customization.customLogoFile Description customLogoFile replaces the default OpenShift logo in the masthead and about dialog. It is a reference to a ConfigMap in the openshift-config namespace. This can be created with a command like 'oc create configmap custom-logo --from-file=/path/to/file -n openshift-config'. Image size must be less than 1 MB due to constraints on the ConfigMap size. The ConfigMap key should include a file extension so that the console serves the file with the correct MIME type. Recommended logo specifications: Dimensions: Max height of 68px and max width of 200px SVG format preferred Type object Property Type Description key string Key allows pointing to a specific key/value inside of the configmap. This is useful for logical file references. name string 5.1.5. 
.spec.customization.developerCatalog Description developerCatalog allows to configure the shown developer catalog categories (filters) and types (sub-catalogs). Type object Property Type Description categories array categories which are shown in the developer catalog. categories[] object DeveloperConsoleCatalogCategory for the developer console catalog. types object types allows enabling or disabling of sub-catalog types that user can see in the Developer catalog. When omitted, all the sub-catalog types will be shown. 5.1.6. .spec.customization.developerCatalog.categories Description categories which are shown in the developer catalog. Type array 5.1.7. .spec.customization.developerCatalog.categories[] Description DeveloperConsoleCatalogCategory for the developer console catalog. Type object Required id label Property Type Description id string ID is an identifier used in the URL to enable deep linking in console. ID is required and must have 1-32 URL safe (A-Z, a-z, 0-9, - and _) characters. label string label defines a category display label. It is required and must have 1-64 characters. subcategories array subcategories defines a list of child categories. subcategories[] object DeveloperConsoleCatalogCategoryMeta are the key identifiers of a developer catalog category. tags array (string) tags is a list of strings that will match the category. A selected category show all items which has at least one overlapping tag between category and item. 5.1.8. .spec.customization.developerCatalog.categories[].subcategories Description subcategories defines a list of child categories. Type array 5.1.9. .spec.customization.developerCatalog.categories[].subcategories[] Description DeveloperConsoleCatalogCategoryMeta are the key identifiers of a developer catalog category. Type object Required id label Property Type Description id string ID is an identifier used in the URL to enable deep linking in console. ID is required and must have 1-32 URL safe (A-Z, a-z, 0-9, - and _) characters. label string label defines a category display label. It is required and must have 1-64 characters. tags array (string) tags is a list of strings that will match the category. A selected category show all items which has at least one overlapping tag between category and item. 5.1.10. .spec.customization.developerCatalog.types Description types allows enabling or disabling of sub-catalog types that user can see in the Developer catalog. When omitted, all the sub-catalog types will be shown. Type object Required state Property Type Description disabled array (string) disabled is a list of developer catalog types (sub-catalogs IDs) that are not shown to users. Types (sub-catalogs) are added via console plugins, the available types (sub-catalog IDs) are available in the console on the cluster configuration page, or when editing the YAML in the console. Example: "Devfile", "HelmChart", "BuilderImage" If the list is empty or all the available sub-catalog types are added, then the complete developer catalog should be hidden. enabled array (string) enabled is a list of developer catalog types (sub-catalogs IDs) that will be shown to users. Types (sub-catalogs) are added via console plugins, the available types (sub-catalog IDs) are available in the console on the cluster configuration page, or when editing the YAML in the console. Example: "Devfile", "HelmChart", "BuilderImage" If the list is non-empty, a new type will not be shown to the user until it is added to list. 
If the list is empty the complete developer catalog will be shown. state string state defines if a list of catalog types should be enabled or disabled. 5.1.11. .spec.customization.perspectives Description perspectives allows enabling/disabling of perspective(s) that user can see in the Perspective switcher dropdown. Type array 5.1.12. .spec.customization.perspectives[] Description Perspective defines a perspective that cluster admins want to show/hide in the perspective switcher dropdown Type object Required id visibility Property Type Description id string id defines the id of the perspective. Example: "dev", "admin". The available perspective ids can be found in the code snippet section to the yaml editor. Incorrect or unknown ids will be ignored. pinnedResources array pinnedResources defines the list of default pinned resources that users will see on the perspective navigation if they have not customized these pinned resources themselves. The list of available Kubernetes resources could be read via kubectl api-resources . The console will also provide a configuration UI and a YAML snippet that will list the available resources that can be pinned to the navigation. Incorrect or unknown resources will be ignored. pinnedResources[] object PinnedResourceReference includes the group, version and type of resource visibility object visibility defines the state of perspective along with access review checks if needed for that perspective. 5.1.13. .spec.customization.perspectives[].pinnedResources Description pinnedResources defines the list of default pinned resources that users will see on the perspective navigation if they have not customized these pinned resources themselves. The list of available Kubernetes resources could be read via kubectl api-resources . The console will also provide a configuration UI and a YAML snippet that will list the available resources that can be pinned to the navigation. Incorrect or unknown resources will be ignored. Type array 5.1.14. .spec.customization.perspectives[].pinnedResources[] Description PinnedResourceReference includes the group, version and type of resource Type object Required group resource version Property Type Description group string group is the API Group of the Resource. Enter empty string for the core group. This value should consist of only lowercase alphanumeric characters, hyphens and periods. Example: "", "apps", "build.openshift.io", etc. resource string resource is the type that is being referenced. It is normally the plural form of the resource kind in lowercase. This value should consist of only lowercase alphanumeric characters and hyphens. Example: "deployments", "deploymentconfigs", "pods", etc. version string version is the API Version of the Resource. This value should consist of only lowercase alphanumeric characters. Example: "v1", "v1beta1", etc. 5.1.15. .spec.customization.perspectives[].visibility Description visibility defines the state of perspective along with access review checks if needed for that perspective. Type object Required state Property Type Description accessReview object accessReview defines required and missing access review checks. state string state defines the perspective is enabled or disabled or access review check is required. 5.1.16. .spec.customization.perspectives[].visibility.accessReview Description accessReview defines required and missing access review checks. Type object Property Type Description missing array missing defines a list of permission checks. 
The perspective will only be shown when at least one check fails. When omitted, the access review is skipped and the perspective will not be shown unless it is required to do so based on the configuration of the required access review list. missing[] object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface required array required defines a list of permission checks. The perspective will only be shown when all checks are successful. When omitted, the access review is skipped and the perspective will not be shown unless it is required to do so based on the configuration of the missing access review list. required[] object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface 5.1.17. .spec.customization.perspectives[].visibility.accessReview.missing Description missing defines a list of permission checks. The perspective will only be shown when at least one check fails. When omitted, the access review is skipped and the perspective will not be shown unless it is required to do so based on the configuration of the required access review list. Type array 5.1.18. .spec.customization.perspectives[].visibility.accessReview.missing[] Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 5.1.19. .spec.customization.perspectives[].visibility.accessReview.required Description required defines a list of permission checks. The perspective will only be shown when all checks are successful. When omitted, the access review is skipped and the perspective will not be shown unless it is required to do so based on the configuration of the missing access review list. Type array 5.1.20. .spec.customization.perspectives[].visibility.accessReview.required[] Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. 
Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 5.1.21. .spec.customization.projectAccess Description projectAccess allows customizing the available list of ClusterRoles in the Developer perspective Project access page which can be used by a project admin to specify roles to other users and restrict access within the project. If set, the list will replace the default ClusterRole options. Type object Property Type Description availableClusterRoles array (string) availableClusterRoles is the list of ClusterRole names that are assignable to users through the project access tab. 5.1.22. .spec.customization.quickStarts Description quickStarts allows customization of available ConsoleQuickStart resources in console. Type object Property Type Description disabled array (string) disabled is a list of ConsoleQuickStart resource names that are not shown to users. 5.1.23. .spec.providers Description providers contains configuration for using specific service providers. Type object Property Type Description statuspage object statuspage contains ID for statuspage.io page that provides status info about. 5.1.24. .spec.providers.statuspage Description statuspage contains ID for statuspage.io page that provides status info about. Type object Property Type Description pageID string pageID is the unique ID assigned by Statuspage for your page. This must be a public page. 5.1.25. .spec.route Description route contains hostname and secret reference that contains the serving certificate. If a custom route is specified, a new route will be created with the provided hostname, under which console will be available. In case of custom hostname uses the default routing suffix of the cluster, the Secret specification for a serving certificate will not be needed. In case of custom hostname points to an arbitrary domain, manual DNS configurations steps are necessary. The default console route will be maintained to reserve the default hostname for console if the custom route is removed. If not specified, default route will be used. DEPRECATED Type object Property Type Description hostname string hostname is the desired custom domain under which console will be available. secret object secret points to secret in the openshift-config namespace that contains custom certificate and key and needs to be created manually by the cluster admin. Referenced Secret is required to contain following key value pairs: - "tls.crt" - to specifies custom certificate - "tls.key" - to specifies private key of the custom certificate If the custom hostname uses the default routing suffix of the cluster, the Secret specification for a serving certificate will not be needed. 5.1.26. .spec.route.secret Description secret points to secret in the openshift-config namespace that contains custom certificate and key and needs to be created manually by the cluster admin. 
Referenced Secret is required to contain following key value pairs: - "tls.crt" - to specifies custom certificate - "tls.key" - to specifies private key of the custom certificate If the custom hostname uses the default routing suffix of the cluster, the Secret specification for a serving certificate will not be needed. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 5.1.27. .status Description ConsoleStatus defines the observed status of the Console. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 5.1.28. .status.conditions Description conditions is a list of conditions and their status Type array 5.1.29. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 5.1.30. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 5.1.31. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 5.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/consoles DELETE : delete collection of Console GET : list objects of kind Console POST : create a Console /apis/operator.openshift.io/v1/consoles/{name} DELETE : delete a Console GET : read the specified Console PATCH : partially update the specified Console PUT : replace the specified Console /apis/operator.openshift.io/v1/consoles/{name}/status GET : read status of the specified Console PATCH : partially update status of the specified Console PUT : replace status of the specified Console 5.2.1. /apis/operator.openshift.io/v1/consoles Table 5.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Console Table 5.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Console Table 5.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleList schema 401 - Unauthorized Empty HTTP method POST Description create a Console Table 5.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.7. Body parameters Parameter Type Description body Console schema Table 5.8. HTTP responses HTTP code Reponse body 200 - OK Console schema 201 - Created Console schema 202 - Accepted Console schema 401 - Unauthorized Empty 5.2.2. /apis/operator.openshift.io/v1/consoles/{name} Table 5.9. Global path parameters Parameter Type Description name string name of the Console Table 5.10. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Console Table 5.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.12. Body parameters Parameter Type Description body DeleteOptions schema Table 5.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Console Table 5.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.15. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Console Table 5.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.17. Body parameters Parameter Type Description body Patch schema Table 5.18. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Console Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body Console schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK Console schema 201 - Created Console schema 401 - Unauthorized Empty 5.2.3. /apis/operator.openshift.io/v1/consoles/{name}/status Table 5.22. Global path parameters Parameter Type Description name string name of the Console Table 5.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Console Table 5.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 5.25. 
HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Console Table 5.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.27. Body parameters Parameter Type Description body Patch schema Table 5.28. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Console Table 5.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.30. Body parameters Parameter Type Description body Console schema Table 5.31. HTTP responses HTTP code Response body 200 - OK Console schema 201 - Created Console schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/console-operator-openshift-io-v1 |
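The schema fields and endpoints in the reference above can be exercised with the `oc` client. The following sketch is illustrative only: the field names (`customization.projectAccess.availableClusterRoles`, `customization.quickStarts.disabled`, `providers.statuspage.pageID`, `route.hostname`, `route.secret.name`) come from the schema sections above, but the object name `cluster` (the conventional name of the cluster-scoped instance, not stated in the reference), the ClusterRole names, the quick start name, the hostname, the page ID, and the secret name are assumptions or placeholders to replace with your own values.

```bash
# Apply customization, provider, and (deprecated) route settings for the
# Console operator configuration. All concrete values below are placeholders.
oc apply -f - <<'EOF'
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster                  # assumed name of the cluster-scoped instance
spec:
  customization:
    projectAccess:
      availableClusterRoles:     # replaces the default ClusterRole options
        - admin
        - edit
        - view
    quickStarts:
      disabled:                  # ConsoleQuickStart resources hidden from users
        - example-quick-start
  providers:
    statuspage:
      pageID: "0123456789ab"     # placeholder Statuspage page ID (public page)
  route:                         # deprecated; shown only to illustrate the secret reference
    hostname: console.apps.example.com
    secret:
      name: console-custom-tls   # Secret in openshift-config containing tls.crt / tls.key
EOF

# Read back spec and status (GET /apis/operator.openshift.io/v1/consoles/{name}):
oc get console.operator.openshift.io cluster -o yaml

# Partially update the spec (PATCH /apis/operator.openshift.io/v1/consoles/{name}):
oc patch console.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"customization":{"quickStarts":{"disabled":[]}}}}'
```

The same operations can be performed with raw HTTP against the `/apis/operator.openshift.io/v1/consoles/{name}` and `/apis/operator.openshift.io/v1/consoles/{name}/status` endpoints listed in section 5.2, passing a bearer token and the query parameters (dryRun, fieldManager, fieldValidation) described in the tables above.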
Chapter 2. Differences from upstream OpenJDK 17 | Chapter 2. Differences from upstream OpenJDK 17 Red Hat build of OpenJDK in Red Hat Enterprise Linux contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow Red Hat Enterprise Linux updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 17 changes: FIPS support. Red Hat build of OpenJDK 17 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 17 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 17 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all of the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources See, Improve system FIPS detection (RHEL Planning Jira) See, Using system-wide cryptographic policies (RHEL documentation) | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.14/rn-openjdk-diff-from-upstream |
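A brief, hedged illustration of the FIPS and cryptographic-policy behaviour described above, on a RHEL host with Red Hat build of OpenJDK 17 installed. The `fips-mode-setup` and `update-crypto-policies` commands are standard RHEL tools; the `com.redhat.fips` system property is commonly used to override the build's automatic FIPS alignment, but treat the exact property name and its effect as an assumption to verify against the release notes for your installed version.

```bash
# Check whether the host is in FIPS mode; Red Hat build of OpenJDK 17
# detects this automatically and configures itself to operate in that mode.
fips-mode-setup --check

# Show the system-wide cryptographic policy that the JDK picks up
# (enabled TLS algorithms, key-size constraints, certificate path validation).
update-crypto-policies --show

# Run an application normally; it inherits the system policy and FIPS state.
java -jar app.jar                          # app.jar is a placeholder

# Assumption: opt a single JVM out of automatic FIPS alignment with the
# com.redhat.fips property (verify the property against your JDK version).
java -Dcom.redhat.fips=false -jar app.jar
```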