Chapter 7. Configuring maximum memory usage for addresses
AMQ Broker transparently supports huge queues containing millions of messages, even if the machine that is hosting the broker is running with limited memory. In these situations, it might not be possible to store all of the queues in memory at any one time. To protect against excess memory consumption, you can configure the maximum memory usage that is allowed for each address on the broker. In addition, you can specify what action the broker takes when this limit is reached for a given address. In particular, when memory usage for an address reaches the configured limit, you can configure the broker to take one of the following actions: Page messages Silently drop messages Drop messages and notify the sending clients Block clients from sending messages The sections that follow show how to configure maximum memory usage for addresses and the corresponding actions that the broker can take when the limit for an address is reached. Important When you use transactions, the broker might allocate extra memory to ensure transactional consistency. In this case, the memory usage reported by the broker might not reflect the total number of bytes being used in memory. Therefore, if you configure the broker to page, drop, or block messages based on a specified maximum memory usage, you should not also use transactions. 7.1. Configuring message paging For any address that has a maximum memory usage limit specified, you can also specify what action the broker takes when that usage limit is reached. One of the options that you can configure is paging. If you configure the paging option, when the maximum size of an address is reached, the broker starts to store messages for that address on disk, in files known as page files. Each page file has a maximum size that you can configure. Each address that you configure in this way has a dedicated folder in your file system to store paged messages. Both queue browsers and consumers can navigate through page files when inspecting messages in a queue. However, a consumer that is using a very specific filter might not be able to consume a message that is stored in a page file until existing messages in the queue have been consumed first. For example, suppose that a consumer filter includes a string expression such as "color='red'". If a message that meets this condition follows one million messages with the property "color='blue'", the consumer cannot consume the message until those with "color='blue'" have been consumed first. The broker transfers (that is, depages) messages from disk into memory when clients are ready to consume them. The broker removes a page file from disk when all messages in that file have been acknowledged. The procedures that follow show how to configure message paging. 7.1.1. Specifying a paging directory The following procedure shows how to specify the location of the paging directory. Procedure Open the <broker_instance_dir>/etc/broker.xml configuration file. Within the core element, add the paging-directory element. Specify a location for the paging directory in your file system. <configuration ...> <core ...> ... <paging-directory>/path/to/paging-directory</paging-directory> ... </core> </configuration> For each address that you subsequently configure for paging, the broker adds a dedicated directory within the paging directory that you have specified. 7.1.2. Configuring an address for paging The following procedure shows how to configure an address for paging.
Prerequisites You should be familiar with how to configure addresses and address settings. For more information, see Chapter 4, Configuring addresses and queues. Procedure Open the <broker_instance_dir>/etc/broker.xml configuration file. For an address-setting element that you have configured for a matching address or set of addresses, add configuration elements to specify maximum memory usage and define paging behavior. For example: <address-settings> <address-setting match="my.paged.address"> ... <max-size-bytes>104857600</max-size-bytes> <page-size-bytes>10485760</page-size-bytes> <address-full-policy>PAGE</address-full-policy> ... </address-setting> </address-settings> max-size-bytes Maximum size, in bytes, of the memory allowed for the address before the broker executes the policy specified for address-full-policy. The default value is -1, which means that there is no limit. The value that you specify also supports byte notation such as "K", "MB", and "GB". page-size-bytes Size, in bytes, of each page file used on the paging system. The default value is 10485760 (that is, 10 MiB). The value that you specify also supports byte notation such as "K", "MB", and "GB". address-full-policy Action that the broker takes when the maximum size for an address has been reached. The default value is PAGE. Valid values are: PAGE The broker pages any further messages to disk. DROP The broker silently drops any further messages. FAIL The broker drops any further messages and issues exceptions to client message producers. BLOCK Client message producers block when they try to send further messages. Additional paging configuration elements that are not shown in the preceding example are described below. page-max-cache-size Number of page files that the broker keeps in memory to optimize IO during paging navigation. The default value is 5. page-sync-timeout Time, in nanoseconds, between periodic page synchronizations. If you are using an asynchronous IO journal (that is, journal-type is set to ASYNCIO in the broker.xml configuration file), the default value is 3333333. If you are using a standard Java NIO journal (that is, journal-type is set to NIO), the default value is the configured value of the journal-buffer-timeout parameter. In the preceding example, when messages sent to the address my.paged.address exceed 104857600 bytes in memory, the broker begins paging. Note If you specify max-size-bytes in an address-setting element, the value applies to each matching address. Specifying this value does not mean that the total size of all matching addresses is limited to the value of max-size-bytes.
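For reference, the following sketch (not taken from the product documentation; the address name and values are illustrative only) shows an address-setting that also sets the optional page-max-cache-size and page-sync-timeout elements described above:

<address-settings>
   <address-setting match="my.paged.address">
      <!-- start paging once roughly 100 MB of messages for this address are held in memory -->
      <max-size-bytes>100MB</max-size-bytes>
      <!-- cap each page file on disk at 10 MB -->
      <page-size-bytes>10MB</page-size-bytes>
      <!-- keep up to 5 page files in memory while navigating paged messages -->
      <page-max-cache-size>5</page-max-cache-size>
      <!-- periodic page synchronization interval, in nanoseconds (ASYNCIO journal default shown) -->
      <page-sync-timeout>3333333</page-sync-timeout>
      <address-full-policy>PAGE</address-full-policy>
   </address-setting>
</address-settings>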
7.1.3. Configuring a global paging size Sometimes, configuring a memory limit per address is not practical, for example, when a broker manages many addresses that have different usage patterns. In these situations, you can specify a global memory limit. The global limit is the total amount of memory that the broker can use for all addresses. When this memory limit is reached, the broker executes the policy specified for address-full-policy for the address associated with a new incoming message. The following procedure shows how to configure a global paging size. Prerequisites You should be familiar with how to configure an address for paging. For more information, see Section 7.1.2, "Configuring an address for paging". Procedure Stop the broker. On Linux: <broker_instance_dir>/bin/artemis stop On Windows: <broker_instance_dir>\bin\artemis-service.exe stop Open the <broker_instance_dir>/etc/broker.xml configuration file. Within the core element, add the global-max-size element and specify a value. For example: <configuration> <core> ... <global-max-size>1GB</global-max-size> ... </core> </configuration> global-max-size Total amount of memory, in bytes, that the broker can use for all addresses. When this limit is reached, for the address associated with an incoming message, the broker executes the policy that is specified as a value for address-full-policy. The default value of global-max-size is half of the maximum memory available to the Java virtual machine (JVM) that is hosting the broker. The value for global-max-size is in bytes, but also supports byte notation (for example, "K", "MB", "GB"). In the preceding example, the broker is configured to use a maximum of one gigabyte of available memory when processing messages. Start the broker. On Linux: <broker_instance_dir>/bin/artemis run On Windows: <broker_instance_dir>\bin\artemis-service.exe start 7.1.4. Limiting disk usage during paging You can limit the amount of physical disk space that the broker can use before it blocks incoming messages rather than paging them. The following procedure shows how to set a limit for disk usage during paging. Procedure Stop the broker. On Linux: <broker_instance_dir>/bin/artemis stop On Windows: <broker_instance_dir>\bin\artemis-service.exe stop Open the <broker_instance_dir>/etc/broker.xml configuration file. Within the core element, add the max-disk-usage configuration element and specify a value. For example: <configuration> <core> ... <max-disk-usage>50</max-disk-usage> ... </core> </configuration> max-disk-usage Maximum percentage of the available disk space that the broker can use when paging messages. When this limit is reached, the broker blocks incoming messages rather than paging them. The default value is 90. In the preceding example, the broker is limited to using fifty percent of disk space when paging messages. Start the broker. On Linux: <broker_instance_dir>/bin/artemis run On Windows: <broker_instance_dir>\bin\artemis-service.exe start 7.2. Configuring message dropping Section 7.1.2, "Configuring an address for paging" shows how to configure an address for paging. As part of that procedure, you set the value of address-full-policy to PAGE. To drop messages (rather than paging them) when an address reaches its specified maximum size, set the value of the address-full-policy to one of the following: DROP When the maximum size of a given address has been reached, the broker silently drops any further messages. FAIL When the maximum size of a given address has been reached, the broker drops any further messages and issues exceptions to producers. 7.3. Configuring message blocking The following procedures show how to configure message blocking when a given address reaches the maximum size limit that you have specified. Note You can configure message blocking only for the Core, OpenWire, and AMQP protocols. 7.3.1. Blocking Core and OpenWire producers The following procedure shows how to configure message blocking for Core and OpenWire message producers when a given address reaches the maximum size limit that you have specified. Prerequisites You should be familiar with how to configure addresses and address settings. For more information, see Chapter 4, Configuring addresses and queues. Procedure Open the <broker_instance_dir>/etc/broker.xml configuration file. For an address-setting element that you have configured for a matching address or set of addresses, add configuration elements to define message blocking behavior. For example: <address-settings> <address-setting match="my.blocking.address"> ... <max-size-bytes>300000</max-size-bytes> <address-full-policy>BLOCK</address-full-policy> ...
</address-setting> </address-settings> max-size-bytes Maximum size, in bytes, of the memory allowed for the address before the broker executes the policy specified for address-full-policy. The value that you specify also supports byte notation such as "K", "MB", and "GB". Note If you specify max-size-bytes in an address-setting element, the value applies to each matching address. Specifying this value does not mean that the total size of all matching addresses is limited to the value of max-size-bytes. address-full-policy Action that the broker takes when the maximum size for an address has been reached. In the preceding example, when messages sent to the address my.blocking.address exceed 300000 bytes in memory, the broker begins blocking further messages from Core or OpenWire message producers. 7.3.2. Blocking AMQP producers Protocols such as Core and OpenWire use a window-size flow control system. In this system, credits represent bytes and are allocated to producers. If a producer wants to send a message, the producer must wait until it has sufficient credits for the size of the message. By contrast, AMQP flow control credits do not represent bytes. Instead, AMQP credits represent the number of messages a producer is permitted to send, regardless of message size. Therefore, it is possible, in some situations, for AMQP producers to significantly exceed the max-size-bytes value of an address. For this reason, to block AMQP producers, you must use a different configuration element, max-size-bytes-reject-threshold. For a matching address or set of addresses, this element specifies the maximum size, in bytes, of all AMQP messages in memory. When the total size of all messages in memory reaches the specified limit, the broker blocks AMQP producers from sending further messages. The following procedure shows how to configure message blocking for AMQP message producers. Prerequisites You should be familiar with how to configure addresses and address settings. For more information, see Chapter 4, Configuring addresses and queues. Procedure Open the <broker_instance_dir>/etc/broker.xml configuration file. For an address-setting element that you have configured for a matching address or set of addresses, specify the maximum size of all AMQP messages in memory. For example: <address-settings> <address-setting match="my.amqp.blocking.address"> ... <max-size-bytes-reject-threshold>300000</max-size-bytes-reject-threshold> ... </address-setting> </address-settings> max-size-bytes-reject-threshold Maximum size, in bytes, of the memory allowed for the address before the broker blocks further AMQP messages. The value that you specify also supports byte notation such as "K", "MB", and "GB". By default, max-size-bytes-reject-threshold is set to -1, which means that there is no maximum size. Note If you specify max-size-bytes-reject-threshold in an address-setting element, the value applies to each matching address. Specifying this value does not mean that the total size of all matching addresses is limited to the value of max-size-bytes-reject-threshold. In the preceding example, when messages sent to the address my.amqp.blocking.address exceed 300000 bytes in memory, the broker begins blocking further messages from AMQP producers. 7.4. Understanding memory usage on multicast addresses When a message is routed to an address that has multicast queues bound to it, there is only one copy of the message in memory. Each queue has only a reference to the message.
Because of this, the associated memory is released only after all queues referencing the message have delivered it. In this type of situation, if you have a slow consumer, the entire address might experience a negative performance impact. For example, consider this scenario: An address has ten queues that use the multicast routing type. Due to a slow consumer, one of the queues does not deliver its messages. The other nine queues continue to deliver messages and are empty. Messages continue to arrive at the address. The queue with the slow consumer continues to accumulate references to the messages, causing the broker to keep the messages in memory. When the maximum size of the address is reached, the broker starts to page messages. In this scenario, because of a single slow consumer, consumers on all queues are forced to consume messages from the page system, requiring additional IO. Additional resources To learn how to configure flow control to regulate the flow of data between the broker and producers and consumers, see Flow control in the AMQ Core Protocol JMS documentation.
[ "<configuration ...> <core ...> <paging-directory> /path/to/paging-directory </paging-directory> </core> </configuration>", "<address-settings> <address-setting match=\"my.paged.address\"> <max-size-bytes>104857600</max-size-bytes> <page-size-bytes>10485760</page-size-bytes> <address-full-policy>PAGE</address-full-policy> </address-setting> </address-settings>", "<broker_instance_dir> /bin/artemis stop", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "<configuration> <core> <global-max-size>1GB</global-max-size> </core> </configuration>", "<broker_instance_dir> /bin/artemis run", "<broker_instance_dir> \\bin\\artemis-service.exe start", "<broker_instance_dir> /bin/artemis stop", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "<configuration> <core> <max-disk-usage>50</max-disk-usage> </core> </configuration>", "<broker_instance_dir> /bin/artemis run", "<broker_instance_dir> \\bin\\artemis-service.exe start", "<address-settings> <address-setting match=\"my.blocking.address\"> <max-size-bytes>300000</max-size-bytes> <address-full-policy>BLOCK</address-full-policy> </address-setting> </address-settings>", "<address-settings> <address-setting match=\"my.amqp.blocking.address\"> <max-size-bytes-reject-threshold>300000</max-size-bytes-reject-threshold> </address-setting> </address-settings>" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/configuring_amq_broker/assembly-br-configuring-maximum-memory-usage-for-addresses_configuring
Managing hybrid and multicloud resources
Red Hat OpenShift Data Foundation 4.17 Instructions for how to manage storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa). Red Hat Storage Documentation Team Abstract This document explains how to manage storage resources across a hybrid cloud or multicloud environment. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message. Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation. Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug. Chapter 1. About the Multicloud Object Gateway The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage. Chapter 2. Accessing the Multicloud Object Gateway with your applications You can access the object service with any application that targets AWS S3 or with code that uses the AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information. For information on accessing the RADOS Object Gateway (RGW) S3 endpoint, see Accessing the RADOS Object Gateway S3 endpoint. Prerequisites A running OpenShift Data Foundation Platform. 2.1. Accessing the Multicloud Object Gateway from the terminal Procedure Run the describe command to view information about the Multicloud Object Gateway (MCG) endpoint, including its access key (AWS_ACCESS_KEY_ID value) and secret access key (AWS_SECRET_ACCESS_KEY value). The output lists the access key (AWS_ACCESS_KEY_ID value), the secret access key (AWS_SECRET_ACCESS_KEY value), and the MCG endpoint. Note The output from the oc describe noobaa command lists the internal and external DNS names that are available. When using the internal DNS, the traffic is free. The external DNS uses Load Balancing to process the traffic, and therefore has a cost per hour. 2.2. Accessing the Multicloud Object Gateway from the MCG command-line interface Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Run the status command to access the endpoint, access key, and secret access key. The output lists the endpoint, the access key, and the secret access key. You now have the endpoint, access key, and secret access key that you need to connect your applications. For example, if the AWS S3 CLI is the application, you can list the buckets in OpenShift Data Foundation as shown in the sketch that follows.
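The command listings for sections 2.1 and 2.2 are not reproduced in this extract. As a hedged sketch only, assuming that the MCG CLI binary is named noobaa and that OpenShift Data Foundation runs in the default openshift-storage namespace, the commands might look like the following; the angle-bracket placeholders are illustrative and should be replaced with the values reported for your deployment:

# Section 2.1: view the MCG endpoint, access key, and secret access key from the terminal
oc describe noobaa -n openshift-storage

# Section 2.2: retrieve the same information with the MCG CLI
noobaa status -n openshift-storage

# List buckets with the AWS S3 CLI, using the endpoint and credentials reported above
AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> \
aws --endpoint-url <MCG-endpoint> s3 ls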
2.3. Support of Multicloud Object Gateway data bucket APIs The following table lists the Multicloud Object Gateway (MCG) data bucket APIs and their support levels.
Data bucket API: Support level
List buckets: Supported
Delete bucket: Supported (replication configuration is part of MCG bucket class configuration)
Create bucket: Supported (a different set of canned ACLs)
Post bucket: Not supported
Put bucket: Partially supported (replication configuration is part of MCG bucket class configuration)
Bucket lifecycle: Partially supported (object expiration only)
Policy (Buckets, Objects): Partially supported (bucket policies are supported)
Bucket Website: Supported
Bucket ACLs (Get, Put): Supported (a different set of canned ACLs)
Bucket Location: Partially supported (returns a default value only)
Bucket Notification: Not supported
Bucket Object Versions: Supported
Get Bucket Info (HEAD): Supported
Bucket Request Payment: Partially supported (returns the bucket owner)
Put Object: Supported
Delete Object: Supported
Get Object: Supported
Object ACLs (Get, Put): Supported
Get Object Info (HEAD): Supported
POST Object: Supported
Copy Object: Supported
Multipart Uploads: Supported
Object Tagging: Supported
Storage Class: Not supported
Note There is no support for the cors, metrics, inventory, analytics, logging, notifications, accelerate, replication, request payment, or locks verbs. Chapter 3. Adding storage resources for hybrid or Multicloud 3.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Data Foundation. Prerequisites Administrator access to OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> Object Storage. Click the Backing Store tab. Click Create Backing Store. On the Create New Backing Store page, perform the following: Enter a Backing Store Name. Select a Provider. Select a Region. Optional: Enter an Endpoint. Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view, which lets you fill in the required secrets. For more information on creating an OCP secret, see the section Creating the secret in the OpenShift Container Platform documentation. Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see Section 3.3, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter the Target bucket. The target bucket is container storage that is hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system. Click Create Backing Store. Verification steps In the OpenShift Web Console, click Storage -> Object Storage. Click the Backing Store tab to view all the backing stores. 3.2. Overriding the default backing store You can use the manualDefaultBackingStore flag to override the default NooBaa backing store and remove it if you do not want to use the default backing store configuration. This provides flexibility to customize your backing store configuration and tailor it to your specific needs. By leveraging this feature, you can further optimize your system and enhance its performance. Prerequisites OpenShift Container Platform with OpenShift Data Foundation operator installed.
Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Check if noobaa-default-backing-store is present: Patch the NooBaa CR to enable manualDefaultBackingStore : Important Use the Multicloud Object Gateway CLI to create a new backing store and update accounts. Create a new default backing store to override the default backing store. For example: Replace NEW-DEFAULT-BACKING-STORE with the name you want for your new default backing store. Update the admin account to use the new default backing store as its default resource: Replace NEW-DEFAULT-BACKING-STORE with the name of the backing store from the step. Updating the default resource for admin accounts ensures that the new configuration is used throughout your system. Configure the default-bucketclass to use the new default backingstore: Optional: Delete the noobaa-default-backing-store. Delete all instances of and buckets associated with noobaa-default-backing-store and update the accounts using it as resource. Delete the noobaa-default-backing-store: You must enable the manualDefaultBackingStore flag before proceeding. Additionally, it is crucial to update all accounts that use the default resource and delete all instances of and buckets associated with the default backing store to ensure a smooth transition. 3.3. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across the cloud provider and clusters. Add a backing storage that can be used by the MCG. Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 3.3.1, "Creating an AWS-backed backingstore" For creating an AWS-STS-backed backingstore, see Section 3.3.2, "Creating an AWS-STS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 3.3.3, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 3.3.4, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 3.3.5, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 3.3.6, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 3.4, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 3.3.1. Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. 
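The MCG CLI command for this procedure is not reproduced in this extract. As a hedged sketch only, assuming the MCG CLI binary is named noobaa and the default openshift-storage namespace, creating an AWS-backed backingstore might look like the following:

noobaa backingstore create aws-s3 <backingstore_name> \
  --access-key=<AWS ACCESS KEY> \
  --secret-key=<AWS SECRET ACCESS KEY> \
  --target-bucket <bucket-name> \
  -n openshift-storage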
The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> Supply and encode your own AWS access key ID and secret access key using Base64, and use the results for <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . <backingstore-secret-name> The name of the backingstore secret created in the step. Apply the following YAML for a specific backing store: <bucket-name> The existing AWS bucket name. <backingstore-secret-name> The name of the backingstore secret created in the step. 3.3.2. Creating an AWS-STS-backed backingstore Amazon Web Services Security Token Service (AWS STS) is an AWS feature and it is a way to authenticate using short-lived credentials. Creating an AWS-STS-backed backingstore involves the following: Creating an AWS role using a script, which helps to get the temporary security credentials for the role session Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster Creating backingstore in AWS STS OpenShift cluster 3.3.2.1. Creating an AWS role using a script You need to create a role and pass the role Amazon resource name (ARN) while installing the OpenShift Data Foundation operator. Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Procedure Create an AWS role using a script that matches OpenID Connect (OIDC) configuration for Multicloud Object Gateway (MCG) on OpenShift Data Foundation. The following example shows the details that are required to create the role: where 123456789123 Is the AWS account ID mybucket Is the bucket name (using public bucket configuration) us-east-2 Is the AWS region openshift-storage Is the namespace name Sample script 3.3.2.2. Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script . Procedure Install OpenShift Data Foundation Operator from the Operator Hub. During the installation add the role ARN in the ARN Details field. Make sure that the Update approval field is set to Manual . 3.3.2.3. Creating a new AWS STS backingstore Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script . Install OpenShift Data Foundation Operator. For more information, see Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster . Procedure Install Multicloud Object Gateway (MCG). It is installed with the default backingstore by using the short-lived credentials. After the MCG system is ready, you can create more backingstores of the type aws-sts-s3 using the following MCG command line interface command: where backingstore-name Name of the backingstore aws-sts-role-arn The AWS STS role ARN which will assume role region The AWS bucket region target-bucket The target bucket name on the cloud 3.3.3. 
Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <IBM ACCESS KEY>, <IBM SECRET ACCESS KEY>, and <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket. <bucket-name> An existing IBM bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> Provide and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> The name of the backingstore secret. Apply the following YAML for a specific backing store: <bucket-name> An existing IBM COS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <endpoint> A regional endpoint that corresponds to the location of the existing IBM bucket name. This argument indicates to the MCG which endpoint to use for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 3.3.4. Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> An Azure account key and account name you created for this purpose. <blob container name> An existing Azure blob container name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> Supply and encode your own Azure account name and account key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> A unique name for the backingstore secret. Apply the following YAML for a specific backing store: <blob-container-name> An existing Azure blob container name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step.
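The MCG CLI commands for the IBM COS and Azure procedures above are not reproduced in this extract. As hedged sketches only (the binary name noobaa, the flag names, and the openshift-storage namespace are assumptions based on the CLI conventions used elsewhere in this guide), they might look like the following:

# IBM COS-backed backingstore
noobaa backingstore create ibm-cos <backingstore_name> \
  --access-key=<IBM ACCESS KEY> \
  --secret-key=<IBM SECRET ACCESS KEY> \
  --endpoint=<IBM COS ENDPOINT> \
  --target-bucket <bucket-name> \
  -n openshift-storage

# Azure-backed backingstore
noobaa backingstore create azure-blob <backingstore_name> \
  --account-key=<AZURE ACCOUNT KEY> \
  --account-name=<AZURE ACCOUNT NAME> \
  --target-blob-container <blob container name> \
  -n openshift-storage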
3.3.5. Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> Name of the backingstore. <PATH TO GCP PRIVATE KEY JSON FILE> A path to your GCP private key created for this purpose. <GCP bucket name> An existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <GCP PRIVATE KEY ENCODED IN BASE64> Provide and encode your own GCP service account private key using Base64, and use the results for this attribute. <backingstore-secret-name> A unique name for the backingstore secret. Apply the following YAML for a specific backing store: <target bucket> An existing Google storage bucket. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 3.3.6. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Adding storage resources using the MCG command-line interface From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. Adding storage resources using YAML Apply the following YAML for a specific backing store: <backingstore_name> The name of the backingstore. <NUMBER OF VOLUMES> The number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. <VOLUME SIZE> Required size, in GB, of each volume. <CPU REQUEST> Guaranteed amount of CPU requested, in CPU unit m. <MEMORY REQUEST> Guaranteed amount of memory requested. <CPU LIMIT> Maximum amount of CPU that can be consumed, in CPU unit m. <MEMORY LIMIT> Maximum amount of memory that can be consumed. <LOCAL STORAGE CLASS> The local storage class name; ocs-storagecluster-ceph-rbd is recommended. The output will be similar to the following: 3.4. Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage's RGW. Note that when the RGW is deployed, the OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically. Procedure From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. To get the <RGW ACCESS KEY> and <RGW SECRET KEY>, run the following command using your RGW user secret name: Decode the access key ID and the access key from Base64 and keep them.
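The command listings for this step are not included in this extract. As a hedged sketch only (the command and flag names are assumptions based on the oc and MCG CLI conventions used elsewhere in this guide), retrieving the RGW credentials and creating the s3-compatible backingstore might look like the following:

# Print the RGW user secret; the access key ID and secret access key are Base64-encoded in its data section
oc get secret <RGW user secret name> -o yaml -n openshift-storage

# Create the s3-compatible backingstore with the decoded keys
noobaa backingstore create s3-compatible <backingstore_name> \
  --access-key=<RGW ACCESS KEY> \
  --secret-key=<RGW SECRET KEY> \
  --target-bucket=<bucket-name> \
  --endpoint=<RGW endpoint> \
  -n openshift-storage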
Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user. This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 3.5. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Class. Use this procedure to create a bucket class in OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Standard : data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. Namespace : data is stored on the NamespaceStores without performing de-duplication, compression or encryption. By default, Standard is selected. Enter a Bucket Class Name . Click . In Placement Policy , select Tier 1 - Policy Type and click . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click . Alternatively, you can also create a new backing store . Note You need to select at least 2 backing stores when you select Policy Type as Mirror in step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab and search the new Bucket Class. 3.6. Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button on the Openshift web console. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class . You are redirected to the YAML file, make the required changes in this file and click Save . 3.7. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. 
A bucket class. Backing stores. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. To add a backing store to the bucket class, select the name of the backing store. To remove a backing store from the bucket class, uncheck the name of the backing store. Click Save . Chapter 4. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so that you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. Note A namespace bucket can only be used if its write target is available and functional. 4.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Ensure that the credentials provided for the Multicloud Object Gateway (MCG) enables you to perform the AWS S3 namespace bucket operations. You can use the AWS tool, aws-cli to verify that all the operations can be performed on the target bucket. Also, the list bucket which is using this MCG account shows the target bucket. Red Hat OpenShift Data Foundation supports the following namespace bucket operations: ListBuckets ListObjects ListMultipartUploads ListObjectVersions GetObject HeadObject CopyObject PutObject CreateMultipartUpload UploadPartCopy UploadPart ListParts AbortMultipartUpload PubObjectTagging DeleteObjectTagging GetObjectTagging GetObjectAcl PutObjectAcl DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 4.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 4.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. 
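The secret YAML itself is not reproduced in this extract. A hedged sketch, with the key names assumed from the AWS-style secrets used for MCG backingstores elsewhere in this guide, might look like the following:

apiVersion: v1
kind: Secret
metadata:
  name: <namespacestore-secret-name>
  namespace: <namespace-secret>   # the namespace where the secret can be found
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64>
  AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>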
You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. 
The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the NamespaceStores names that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. <read-resources>s A list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . 
Download the MCG command-line interface binary from the customer portal and make it executable. Note Choose either Linux(x86_64), Windows, or Mac OS. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites Ensure that Openshift Container Platform with OpenShift Data Foundation operator is already installed. Access to the Multicloud Object Gateway (MCG). Procedure On the OpenShift Web Console, navigate to Storage -> Object Storage -> Namespace Store tab. Click Create namespace store to create a namespacestore resources to be used in the namespace bucket. Enter a namespacestore name. Choose a provider and region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Enter a target bucket. Click Create . On the Namespace Store tab, verify that the newly created namespacestore is in the Ready state. Repeat steps 2 and 3 until you have created all the desired amount of resources. Navigate to Bucket Class tab and click Create Bucket Class . Choose Namespace BucketClass type radio button. Enter a BucketClass name and click . Choose a Namespace Policy Type for your namespace bucket, and then click . If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. 
If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Select one Read and Write NamespaceStore which defines the read and write targets of the namespace bucket and click . Review your new bucket class details, and then click Create Bucket Class . Navigate to Bucket Class tab and verify that your newly created resource is in the Ready phase. Navigate to Object Bucket Claims tab and click Create Object Bucket Claim . Enter ObjectBucketClaim Name for the namespace bucket. Select StorageClass as openshift-storage.noobaa.io . Select the BucketClass that you created earlier for your namespacestore from the list. By default, noobaa-default-bucket-class gets selected. Click Create . The namespace bucket is created along with Object Bucket Claim for your namespace. Navigate to Object Bucket Claims tab and verify that the Object Bucket Claim created is in Bound state. Navigate to Object Buckets tab and verify that the your namespace bucket is present in the list and is in Bound state. 4.4. Sharing legacy application data with cloud native application using S3 protocol Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using the S3 operations. To share data you need to do the following: Export the pre-existing file system datasets, that is, RWX volume such as Ceph FileSystem (CephFS) or create a new file system datasets using the S3 protocol. Access file system datasets from both file system and S3 protocol. Configure S3 accounts and map them to the existing or a new file system unique identifiers (UIDs) and group identifiers (GIDs). 4.4.1. Creating a NamespaceStore to use a file system Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage -> Object Storage . Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket. Click Create namespacestore . Enter a name for the NamespaceStore. Choose Filesystem as the provider. Choose the Persistent volume claim. Enter a folder name. If the folder name exists, then that folder is used to create the NamespaceStore or else a folder with that name is created. Click Create . Verify the NamespaceStore is in the Ready state. 4.4.2. Creating accounts with NamespaceStore filesystem configuration You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML. Note You cannot remove a NamespaceStore filesystem configuration from an account. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface. For example: allow_bucket_create Indicates whether the account is allowed to create new buckets. Supported values are true or false . Default value is true . allowed_buckets A comma separated list of bucket names to which the user is allowed to have access and management rights. 
default_resource The NamespaceStore resource on which the new buckets will be created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC). full_permission Indicates whether the account should be allowed full permission or not. Supported values are true or false . Default value is false . new_buckets_path The filesystem path where directories corresponding to new buckets will be created. The path is inside the filesystem of NamespaceStore filesystem PVCs where new directories are created to act as the filesystem mapping of newly created object bucket classes. nsfs_account_config A mandatory field that indicates if the account is used for NamespaceStore filesystem. nsfs_only Indicates whether the account is used only for NamespaceStore filesystem or not. Supported values are true or false . Default value is false . If it is set to 'true', it limits you from accessing other types of buckets. uid The user ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem gid The group ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem The MCG system sends a response with the account configuration and its S3 credentials: You can list all the custom resource definition (CRD) based accounts by using the following command: If you are interested in a particular account, you can read its custom resource definition (CRD) directly by the account name: 4.4.3. Accessing legacy application data from the openshift-storage namespace When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses. In order to access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses. Procedure Display the application namespace with scc : <application_namespace> Specify the name of the application namespace. For example: Navigate into the application namespace: For example: Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the noobaa S3 endpoint using the MCG NSFS feature: Check the mount point of the Persistent Volume (PV) inside your pod. Get the volume name of the PV from the pod: <pod_name> Specify the name of the pod. For example: In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim . List all the mounts in the pod, and check for the mount point of the volume that you identified in the step: For example: Confirm the mount point of the RWX PV in your pod: <mount_path> Specify the path to the mount point that you identified in the step. For example: Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses: For example: Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace: <pv_name> Specify the name of the PV. For example: Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC. 
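The commands for this part of the procedure are not reproduced in this section. As a rough sketch (the PVC name is taken from the example above; the namespace and PV name are placeholders), the PV that backs the legacy application PVC can be inspected as follows:

# Identify the PV that is bound to the legacy application PVC (names are illustrative).
oc get pvc cephfs-write-workload-generator-no-cache-pv-claim \
  -n <application_namespace> -o jsonpath='{.spec.volumeName}{"\n"}'

# Dump the PV definition; the volumeAttributes referenced in the next step are under spec.csi.
oc get pv <pv_name> -o yaml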
Find the values of the subvolumePath and volumeHandle from the volumeAttributes . You can get these values from the YAML description of the legacy application PV: For example: Use the subvolumePath and volumeHandle values that you identified in the step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV: Example YAML file : 1 The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV. 2 The volume handle for the target PV that you create in openshift-storage needs to have a different handle than the original application PV, for example, add -clone at the end of the volume handle. 3 The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC. Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the step: <YAML_file> Specify the name of the YAML file. For example: Ensure that the PVC is available in the openshift-storage namespace: Navigate into the openshift-storage project: Create the NSFS namespacestore: <nsfs_namespacestore> Specify the name of the NSFS namespacestore. <cephfs_pvc_name> Specify the name of the CephFS PVC in the openshift-storage namespace. For example: Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore, for example, /nsfs/legacy-namespace mountpoint: <noobaa_endpoint_pod_name> Specify the name of the noobaa-endpoint pod. For example: Create a MCG user account: <user_account> Specify the name of the MCG user account. <gid_number> Specify the GID number. <uid_number> Specify the UID number. Important Use the same UID and GID as that of the legacy application. You can find it from the output. For example: Create a MCG bucket. Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod: For example: Create the MCG bucket using the nsfs/ path: For example: Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces: For example: For example: In these examples, you can see that the SELinux labels are not the same which results in permission denied or access issues. Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files. You can do this in one of the following ways: Section 4.4.3.1, "Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project" . Section 4.4.3.2, "Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC" . Delete the NSFS namespacestore: Delete the MCG bucket: For example: Delete the MCG user account: For example: Delete the NSFS namespacestore: For example: Delete the PV and PVC: Important Before you delete the PV and PVC, ensure that the PV has a retain policy configured. <cephfs_pv_name> Specify the CephFS PV name of the legacy application. <cephfs_pvc_name> Specify the CephFS PVC name of the legacy application. For example: 4.4.3.1. 
Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project Display the current openshift-storage namespace with sa.scc.mcs : Edit the legacy application namespace, and modify the sa.scc.mcs with the value from the sa.scc.mcs of the openshift-storage namespace: For example: For example: Restart the legacy application pod. A relabel of all the files take place and now the SELinux labels match with the openshift-storage deployment. 4.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC Create a new scc with the MustRunAs and seLinuxOptions options, with the Multi Category Security (MCS) that the openshift-storage project uses. Example YAML file: Create a service account for the deployment and add it to the newly created scc . Create a service account: <service_account_name>` Specify the name of the service account. For example: Add the service account to the newly created scc : For example: Patch the legacy application deployment so that it uses the newly created service account. This allows you to specify the SELinux label in the deployment: For example: Edit the deployment to specify the security context to use at the SELinux label in the deployment configuration: Add the following lines: <security_context_value> You can find this value when you execute the command to create a dedicated folder for S3 inside the NSFS share, on the CephFS PV and PVC of the legacy application pod. For example: Ensure that the security context to be used at the SELinux label in the deployment configuration is specified correctly: For example" The legacy application is restarted and begins using the same SELinux labels as the openshift-storage namespace. Chapter 5. Securing Multicloud Object Gateway 5.1. Changing the default account credentials to ensure better security in the Multicloud Object Gateway Change and rotate your Multicloud Object Gateway (MCG) account credentials using the command-line interface to prevent issues with applications, and to ensure better account security. 5.1.1. Resetting the noobaa account password Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure To reset the noobaa account password, run the following command: Example: Example output: Important To access the admin account credentials run the noobaa status command from the terminal: 5.1.2. Setting Multicloud Object Gateway account credentials using CLI command You can update and verify the Multicloud Object Gateway (MCG) account credentials manually by using the MCG CLI command. Prerequisites Ensure that the following prerequisites are met: A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure To update the MCG account credentials, run the following command: Example: Example output: Credential complexity requirements: Access key The account access key must be 20 characters in length and it must contain only alphanumeric characters. 
Secret key The secret key must be 40 characters in length and it must contain alphanumeric characters and "+", "/". For example: To verify the credentials, run the following command: Note You cannot have a duplicate access-key. Each user must have a unique access-key and secret-key . 5.1.3. Regenerating the S3 credentials for the accounts Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Get the account name. For listing the accounts, run the following command: Example output: Alternatively, run the oc get noobaaaccount command from the terminal: Example output: To regenerate the noobaa account S3 credentials, run the following command: Once you run the noobaa account regenerate command it will prompt a warning that says "This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials." , and ask for confirmation: Example: Example output: On approving, it will regenerate the credentials and eventually print them: 5.1.4. Regenerating the S3 credentials for the OBC Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure To get the OBC name, run the following command: Example output: Alternatively, run the oc get obc command from the terminal: Example output: To regenerate the noobaa OBC S3 credentials, run the following command: Once you run the noobaa obc regenerate command it will prompt a warning that says "This will invalidate all connections between the S3 clients and noobaa which are connected using the current credentials." , and ask for confirmation: Example: Example output: On approving, it will regenerate the credentials and eventually print them: 5.2. Enabling secured mode deployment for Multicloud Object Gateway You can specify a range of IP addresses that should be allowed to reach the Multicloud Object Gateway (MCG) load balancer services to enable secure mode deployment. This helps to control the IP addresses that can access the MCG services. Note You can disable the MCG load balancer usage by setting the disableLoadBalancerService variable in the storagecluster custom resource definition (CRD) while deploying OpenShift Data Foundation using the command line interface. This helps to restrict MCG from creating any public resources for private clusters and to disable the MCG service EXTERNAL-IP . For more information, see the Red Hat Knowledgebase article Install Red Hat OpenShift Data Foundation 4.X in internal mode using command line interface . For information about disabling MCG load balancer service after deploying OpenShift Data Foundation, see Disabling Multicloud Object Gateway external service after deploying OpenShift Data Foundation . Prerequisites A running OpenShift Data Foundation cluster. In case of a bare metal deployment, ensure that the load balancer controller supports setting the loadBalancerSourceRanges attribute in the Kubernetes services. 
Procedure Edit the NooBaa custom resource (CR) to specify the range of IP addresses that can access the MCG services after deploying OpenShift Data Foundation. noobaa The NooBaa CR type that controls the NooBaa system deployment. noobaa The name of the NooBaa CR. For example: loadBalancerSourceSubnets A new field that can be added under spec in the NooBaa CR to specify the IP addresses that should have access to the NooBaa services. In this example, all the IP addresses that are in the subnet 10.0.0.0/16 or 192.168.10.0/32 will be able to access MCG S3 and the security token service (STS), while the other IP addresses are not allowed access. Verification steps To verify that the specified IP addresses are set, in the OpenShift Web Console, run the following command and check whether the output matches the IP addresses provided to MCG: Chapter 6. Mirroring data for hybrid and Multicloud buckets You can use the simplified process of the Multicloud Object Gateway (MCG) to span data across cloud providers and clusters. Before you create a bucket class that reflects the data management policy and mirroring, you must add a backing storage that can be used by the MCG. For information, see Chapter 3, Adding storage resources for hybrid or Multicloud . You can set up data mirroring by using the OpenShift UI, YAML or MCG command-line interface. See the following sections: Section 6.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 6.2, "Creating bucket classes to mirror data using a YAML" 6.1. Creating bucket classes to mirror data using the MCG command-line-interface Prerequisites Ensure that you have downloaded the Multicloud Object Gateway (MCG) command-line interface. Procedure From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim to generate a new bucket that will be mirrored between two locations: 6.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS: Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Chapter 9, Object Bucket Claim . Chapter 7. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 7.1. Introduction to bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 7.2. Using bucket policies in Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . A valid Multicloud Object Gateway user account. See Creating a user in the Multicloud Object Gateway for instructions to create a user account. Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. For example: Replace [email protected] with a valid Multicloud Object Gateway user account. Using an AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint.
Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. Additional resources There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . OpenShift Data Foundation version 4.17 introduces the bucket policy elements NotPrincipal , NotAction , and NotResource . For more information on these elements, see IAM JSON policy elements reference . 7.3. Creating a user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Execute the following command to create an MCG user account: <noobaa-account-name> Specify the name of the new MCG user account. --allow_bucket_create Allows the user to create new buckets. --allowed_buckets Sets the user's allowed bucket list (use commas or multiple flags). --default_resource Sets the default resource. The new buckets are created on this default resource (including the future ones). --full_permission Allows this account to access all existing and future buckets. Important You need to provide permission to access at least one bucket or full permission to access all the buckets. Chapter 8. Multicloud Object Gateway bucket replication Data replication from one Multicloud Object Gateway (MCG) bucket to another MCG bucket provides higher resiliency and better collaboration options. These buckets can be either data buckets or namespace buckets backed by any supported storage solution (AWS S3, Azure, and so on). A replication policy is composed of a list of replication rules. Each rule defines the destination bucket, and can specify a filter based on an object key prefix. Configuring a complementing replication policy on the second bucket results in bidirectional replication. Prerequisites A running OpenShift Data Foundation Platform. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. To replicate a bucket, see Replicating a bucket to another bucket . To set a bucket class replication policy, see Setting a bucket class replication policy . 8.1. Replicating a bucket to another bucket You can set the bucket replication policy in two ways: Replicating a bucket to another bucket using the MCG command-line interface . Replicating a bucket to another bucket using a YAML . 8.1.1.
Replicating a bucket to another bucket using the MCG command-line interface You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the object bucket claim (OBC). You must define the replication policy parameter in a JSON file. Procedure From the MCG command-line interface, run the following command to create an OBC with a specific replication policy: <bucket-claim-name> Specify the name of the bucket claim. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . For example: 8.1.2. Replicating a bucket to another bucket using a YAML You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the object bucket claim (OBC) or you can edit the YAML later. You must provide the policy as a JSON-compliant string that adheres to the format shown in the following procedure. Procedure Apply the following YAML: <desired-bucket-claim> Specify the name of the bucket claim. <desired-namespace> Specify the namespace. <desired-bucket-name> Specify the prefix of the bucket name. "rule_id" Specify the ID number of the rule, for example, {"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . Additional information For more information about OBCs, see Object Bucket Claim . 8.2. Setting a bucket class replication policy It is possible to set up a replication policy that automatically applies to all the buckets created under a certain bucket class. You can do this in two ways: Setting a bucket class replication policy using the MCG command-line interface . Setting a bucket class replication policy using a YAML . 8.2.1. Setting a bucket class replication policy using the MCG command-line interface You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the bucket class. You must define the replication-policy parameter in a JSON file. You can set a bucket class replication policy for the Placement and Namespace bucket classes. Procedure From the MCG command-line interface, run the following command: <bucketclass-name> Specify the name of the bucket class. <backingstores> Specify the name of a backingstore. You can pass many backingstores separated by commas. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. The prefix of the object keys gets replicated. You can leave it empty, for example, {"prefix": ""} . For example: This example creates a placement bucket class with a specific replication policy defined in the JSON file. 8.2.2. Setting a bucket class replication policy using a YAML You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the bucket class or you can edit the YAML later. You must provide the policy as a JSON-compliant string that adheres to the format shown in the following procedure. Procedure Apply the following YAML: This YAML is an example that creates a placement bucket class.
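The YAML itself is not reproduced in this section. A minimal sketch of such a bucket class, built from the placeholders described below, might look like the following; the exact structure of the replicationPolicy string is an assumption and should be verified against your release:

cat <<EOF | oc apply -f -
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  labels:
    app: <desired-app-label>
  name: <desired-bucketclass-name>
  namespace: <desired-namespace>
spec:
  placementPolicy:
    tiers:
    - backingStores:
      - <backingstore>
  # Replication rules are supplied as a JSON-compliant string; the field names match
  # the descriptions that follow.
  replicationPolicy: '[{"rule_id": "rule-1", "destination_bucket": "first.bucket", "filter": {"prefix": ""}}]'
EOF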
Each Object bucket claim (OBC) object that is uploaded to the bucket is filtered based on the prefix and is replicated to first.bucket . <desired-app-label> Specify a label for the app. <desired-bucketclass-name> Specify the bucket class name. <desired-namespace> Specify the namespace in which the bucket class gets created. <backingstore> Specify the name of a backingstore. You can pass many backingstores. "rule_id" Specify the ID number of the rule, for example, `{"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. The prefix of the object keys gets replicated. You can leave it empty, for example, {"prefix": ""} . 8.3. Enabling log based bucket replication When creating a bucket replication policy, you can use logs so that recent data is replicated more quickly, while the default scan-based replication works on replicating the rest of the data. Important This feature requires setting up bucket logs on AWS or Azure.For more information about setting up AWS logs, see Enabling Amazon S3 server access logging . The AWS logs bucket needs to be created in the same region as the source NamespaceStore AWS bucket. Note This feature is only supported in buckets that are backed by a NamespaceStore. Buckets backed by BackingStores cannot utilized log-based replication. 8.3.1. Enabling log based bucket replication for new namespace buckets using OpenShift Web Console in Amazon Web Service environment You can optimize replication by using the event logs of the Amazon Web Service(AWS) cloud environment. You enable log based bucket replication for new namespace buckets using the web console during the creation of namespace buckets. Prerequisites Ensure that object logging is enabled in AWS. For more information, see the "Using the S3 console" section in Enabling Amazon S3 server access logging . Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, navigate to Storage -> Object Storage -> Object Bucket Claims . Click Create ObjectBucketClaim . Enter the name of ObjectBucketName and select StorageClass and BucketClass. Select the Enable replication check box to enable replication. In the Replication policy section, select the Optimize replication using event logs checkbox. Enter the name of the bucket that will contain the logs under Event log Bucket . If the logs are not stored in the root of the bucket, provide the full path without s3:// Enter a prefix to replicate only the objects whose name begins with the given prefix. 8.3.2. Enabling log based bucket replication for existing namespace buckets using YAML You can enable log based bucket replication for the existing buckets that are created using the command line interface or by applying an YAML, and not the buckets that are created using AWS S3 commands. Procedure Edit the YAML of the bucket's OBC to enable log based bucket replication. Add the following under spec : Note It is also possible to add this to the YAML of an OBC before it is created. rule_id Specify an ID of your choice for identifying the rule destination_bucket Specify the name of the target MCG bucket that the objects are copied to (optional) {"filter": {"prefix": <>}} Specify a prefix string that you can set to filter the objects that are replicated log_replication_info Specify an object that contains data related to log-based replication optimization. 
{"logs_location": {"logs_bucket": <>}} is set to the location of the AWS S3 server access logs. 8.3.3. Enabling log based bucket replication in Microsoft Azure Prerequisites Refer to Microsoft Azure documentation and ensure that you have completed the following tasks in the Microsoft Azure portal: Ensure that have created a new application and noted down the name, application (client) ID, and directory (tenant) ID. For information, see Register an application . Ensure that a new a new client secret is created and the application secret is noted down. Ensure that a new Log Analytics workspace is created and its name and workspace ID is noted down. For information, see Create a Log Analytics workspace . Ensure that the Reader role is assigned under Access control and members are selected and the name of the application that you registered in the step is provided. For more information, see Assign Azure roles using the Azure portal . Ensure that a new storage account is created and the Access keys are noted down. In the Monitoring section of the storage account created, select a blob and in the Diagnostic settings screen, select only StorageWrite and StorageDelete , and in the destination details add the Log Analytics workspace that you created earlier. Ensure that a blob is selected in the Diagnostic settings screen of the Monitoring section of the storage account created. Also, ensure that only StorageWrite and StorageDelete is selected and in the destination details, the Log Analytics workspace that you created earlier is added. For more information, see Diagnostic settings in Azure Monitor . Ensure that two new containers for object source and object destination are created. Administrator access to OpenShift Web Console. Procedure Create a secret with credentials to be used by the namespacestores . Create a NamespaceStore backed by a container created in Azure. For more information, see Adding a namespace bucket using the OpenShift Container Platform user interface . Create a new Namespace-Bucketclass and OBC that utilizes it. Check the object bucket name by looking in the YAML of target OBC, or by listing all S3 buckets, for example, - s3 ls . Use the following template to apply an Azure replication policy on your source OBC by adding the following in its YAML, under .spec : sync_deletion Specify a boolean value, true or false . destination_bucket Make sure to use the name of the object bucket, and not the claim. The name can be retrieved using the s3 ls command, or by looking for the value in an OBC's YAML. Verification steps Write objects to the source bucket. Wait until MCG replicates them. Delete the objects from the source bucket. Verify the objects were removed from the target bucket. 8.3.4. Enabling log-based bucket replication deletion Prerequisites Administrator access to OpenShift Web Console. AWS Server Access Logging configured for the desired bucket. Procedure In the OpenShift Web Console, navigate to Storage -> Object Storage -> Object Bucket Claims . Click Create new Object bucket claim . (Optional) In the Replication rules section, select the Sync deletion checkbox for each rule separately. Enter the name of the bucket that will contain the logs under Event log Bucket . If the logs are not stored in the root of the bucket, provide the full path without s3:// Enter a prefix to replicate only the objects whose name begins with the given prefix. 8.4. 
Bucket logging for Multicloud Object Gateway Bucket logging helps you to record the S3 operations that are performed against the Multicloud Object Gateway (MCG) bucket for compliance, auditing, and optimization purposes. Bucket logging supports the following two options: Best-effort - Bucket logging is recorded using UDP on the best effort basis Guaranteed - Bucket logging with this option creates a PVC attached to the MCG pods and saves the logs to this PVC on a Guaranteed basis, and then from the PVC to the log buckets. Using this option logging takes place twice for every S3 operation as follows: At the start of processing the request At the end with the result of the S3 operation 8.4.1. Enabling bucket logging for Multicloud Object Gateway using the Best-effort option Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to MCG. For information, see Accessing the Multicloud Object Gateway with your applications . Procedure Create a data bucket where you can upload the objects. Create a log bucket where you want to store the logs for bucket operations by using the following command: Configure bucket logging on data bucket with log bucket in one of the following ways: Using the NooBaa API Using the S3 API Create a file called setlogging.json in the following format: Run the following command: Verify if the bucket logging is set for the data bucket in one of the following ways: Using the NooBaa API Using the S3 API The S3 operations can take up to 24 hours to get recorded in the logs bucket. The following example shows the recorded logs and how to download them: Example (Optional) To disable bucket logging, use the following command: 8.4.2. Enabling bucket logging using the Guaranteed option Procedure Enable Guaranteed bucket logging using the NooBaa CR in one of the following ways: Using the default CephFS storage class update the NooBaa CR spec: Using the RWX PVC that you created: Note Make sure that the PVC supports RWX Chapter 9. Object Bucket Claim An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads. You can create an Object Bucket Claim in three ways: Section 9.1, "Dynamic Object Bucket Claim" Section 9.2, "Creating an Object Bucket Claim using the command line interface" Section 9.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can't create new buckets by default. 9.1. Dynamic Object Bucket Claim Similar to Persistent Volumes, you can add the details of the Object Bucket claim (OBC) to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application. Note The Multicloud Object Gateway endpoints uses self-signed certificates only if OpenShift uses self-signed certificates. Using signed certificates in OpenShift automatically replaces the Multicloud Object Gateway endpoints certificates with signed certificates. Get the certificate currently used by Multicloud Object Gateway by accessing the endpoint via the browser. See Accessing the Multicloud Object Gateway with your applications for more information. 
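As an alternative to the browser, a command-line sketch for inspecting that certificate (assuming the MCG S3 endpoint is exposed by the s3 route in the openshift-storage namespace) is:

# Look up the public S3 route and print the certificate chain served by the endpoint.
S3_HOST=$(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}')
openssl s_client -connect "${S3_HOST}:443" -showcerts </dev/null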
Procedure Add the following lines to your application YAML: These lines are the OBC itself. Replace <obc-name> with the a unique OBC name. Replace <obc-bucket-name> with a unique bucket name for your OBC. To automate the use of the OBC add more lines to the YAML file. For example: The example is the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account. Replace all instances of <obc-name> with your OBC name. Replace <your application image> with your application image. Apply the updated YAML file: Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run the following: Replace obc-name with the name of your OBC. You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application. BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST . For example, if the BUCKET_HOST is https://my.example.com , and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443 . BUCKET_NAME - Requested or generated bucket name. AWS_ACCESS_KEY_ID - Access key that is part of the credentials. AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials. Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . The names are used so that it is compatible with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. <obc_name> Specify the name of the object bucket claim. 9.2. Creating an Object Bucket Claim using the command line interface When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Use the command-line interface to generate the details of a new bucket and credentials. Run the following command: Replace <obc-name> with a unique OBC name, for example, myappobc . Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace . For example: The MCG command-line-interface has created the necessary configuration and has informed OpenShift about the new OBC. Run the following command to view the OBC: For example: Run the following command to view the YAML file for the new OBC: For example: Inside of your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The CM and the secret have the same name as the OBC. Run the following command to view the secret: For example: The secret gives you the S3 access credentials. Run the following command to view the configuration map: For example: The configuration map contains the S3 endpoint information for your application. 9.3. Creating an Object Bucket Claim using the OpenShift Web Console You can create an Object Bucket Claim (OBC) using the OpenShift Web Console. 
Prerequisites Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 9.1, "Dynamic Object Bucket Claim" . Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims -> Create Object Bucket Claim . Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG) External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the RGW openshift-storage.noobaa.io uses the MCG Note The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from previous OpenShift Data Foundation releases. Click Create . Once you create the OBC, you are redirected to its detail page. 9.4. Attaching an Object Bucket Claim to a deployment Once created, Object Bucket Claims (OBCs) can be attached to specific deployments. Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims . Click the Action menu (...) next to the OBC you created. From the drop-down menu, select Attach to Deployment . Select the desired deployment from the Deployment Name list, then click Attach . 9.5. Viewing object buckets using the OpenShift Web Console You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Storage -> Object Buckets . Optional: You can also navigate to the details page of a specific OBC, and click the Resource link to view the object buckets for that OBC. Select the object bucket of which you want to see the details. Once selected, you are navigated to the Object Bucket Details page. 9.6. Deleting Object Bucket Claims Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims . Click the Action menu (...) next to the Object Bucket Claim (OBC) you want to delete. Select Delete Object Bucket Claim . Click Delete . Chapter 10. Caching policy for object buckets A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway (MCG) bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket. AWS S3 IBM COS 10.1. Creating an AWS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets.
From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 10.2. Creating an IBM COS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <backingstore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the IBM COS bucket you created for the namespacestore.
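The YAML that these replacements refer to is not shown in this section. A sketch, modeled on the analogous BackingStore examples elsewhere in this guide (the NamespaceStore spec field names are assumptions and should be checked against your release), might look like this:

cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <namespacestore-secret-name>
  namespace: openshift-storage
type: Opaque
data:
  IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64>
  IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>
---
apiVersion: noobaa.io/v1alpha1
kind: NamespaceStore
metadata:
  name: <namespacestore>
  namespace: openshift-storage
spec:
  ibmCos:
    endpoint: <IBM COS ENDPOINT>
    secret:
      # <backingstore-secret-name> and <namespace-secret> refer to the secret created above.
      name: <backingstore-secret-name>
      namespace: <namespace-secret>
    targetBucket: <target-bucket>
  type: ibm-cos
EOF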
Run the following command to create a bucket class: Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. Chapter 11. Lifecycle bucket configuration in Multicloud Object Gateway Multicloud Object Gateway (MCG) lifecycle provides a way to reduce storage costs due to accumulated data objects. Deletion of expired objects is a simplified way that enables handling of unused data. Data expiration is a part of Amazon Web Services (AWS) lifecycle management and sets an expiration date for automatic deletion. The minimal time resolution of the lifecycle expiration is one day. For more information, see Expiring objects . The AWS S3 API is used to configure lifecycle buckets in MCG. For information about the data bucket APIs and their support level, see Support of Multicloud Object Gateway data bucket APIs . There are a few limitations with the expiration rule API for MCG in comparison with AWS: ExpiredObjectDeleteMarker is accepted but it is not processed. There is no option to define specific non-current version expiration conditions. Chapter 12. Scaling Multicloud Object Gateway performance The Multicloud Object Gateway (MCG) performance may vary from one environment to another. In some cases, specific applications require faster performance, which can be easily addressed by scaling S3 endpoints. The MCG resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service S3 endpoint service The S3 endpoint is a service that every Multicloud Object Gateway (MCG) provides by default that handles the heavy-lifting data digestion in the MCG. The endpoint service handles the inline data chunking, deduplication, compression, and encryption, and it accepts data placement instructions from the MCG. 12.1. Automatic scaling of Multicloud Object Gateway endpoints The number of Multicloud Object Gateway (MCG) endpoints scales automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed, lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG. You can scale the Horizontal Pod Autoscaler (HPA) for noobaa-endpoint using the following oc patch command, for example: The example above sets the minCount to 3 and the maxCount to 10 . 12.2. Increasing CPU and memory for PV pool resources MCG default configuration supports low resource consumption. However, when you need to increase CPU and memory to accommodate specific workloads and to increase MCG performance for the workloads, you can configure the required values for CPU and memory in the OpenShift Web Console.
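The procedure that follows makes this change in the OpenShift Web Console. As an equivalent command-line sketch (the backing store name and the CPU and memory values are illustrative, and the pvPool.resources layout is an assumption based on the description in the procedure), the same update could be applied with:

# Set explicit requests and limits on the backing store's PV pool pods.
oc patch backingstore <backingstore_name> -n openshift-storage --type merge -p \
  '{"spec":{"pvPool":{"resources":{"requests":{"cpu":"2","memory":"4Gi"},"limits":{"cpu":"2","memory":"4Gi"}}}}}'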
Procedure In the OpenShift Web Console, navigate to Storage -> Object Storage -> Backing Store . Select the relevant backing store and click on YAML. Scroll down until you find spec: and update pvPool with CPU and memory. Add a new property of limits and then add cpu and memory. Example reference: Click Save . Verification steps To verify, you can check the resource values of the PV pool pods. Chapter 13. Accessing the RADOS Object Gateway S3 endpoint Users can access the RADOS Object Gateway (RGW) endpoint directly. In previous versions of Red Hat OpenShift Data Foundation, the RGW service needed to be manually exposed to create an RGW public route. As of OpenShift Data Foundation version 4.7, the RGW route is created by default and is named rook-ceph-rgw-ocs-storagecluster-cephobjectstore . Chapter 14. Using TLS certificates for applications accessing RGW Most S3 applications require a TLS certificate in forms such as an option included in the Deployment configuration file, passed as a file in the request, or stored in /etc/pki paths. TLS certificates for RADOS Object Gateway (RGW) are stored as a Kubernetes secret and you need to fetch the details from the secret. Prerequisites A running OpenShift Data Foundation cluster. Procedure For internal RGW server Get the TLS certificate and key from the kubernetes secret: <secret_name> The default kubernetes secret name is <objectstore_name>-cos-ceph-rgw-tls-cert . Specify the name of the object store. For external RGW server Get the TLS certificate from the kubernetes secret: <secret_name> The default kubernetes secret name is ceph-rgw-tls-cert and it is an opaque type of secret. The key value for storing the TLS certificates is cert . 14.1. Accessing External RGW server in OpenShift Data Foundation Accessing External RGW server using Object Bucket Claims The S3 credentials such as AccessKey or Secret Key are stored in the secret generated by the Object Bucket Claim (OBC) creation, and you can fetch them by using the following commands: Similarly, you can fetch the endpoint details from the configmap of the OBC: Accessing External RGW server using the Ceph Object Store User CR You can fetch the S3 credentials and endpoint details from the secret generated as part of the Ceph Object Store User CR: Important For both the access mechanisms, you can either request new certificates from the administrator or reuse the certificates from the Kubernetes secret, ceph-rgw-tls-cert . Chapter 15. Using the Multicloud Object Gateway's Security Token Service to assume the role of another user Multicloud Object Gateway (MCG) provides support for a security token service (STS) similar to the one provided by Amazon Web Services. To allow other users to assume the role of a certain user, you need to assign a role configuration to the user. You can manage the configuration of roles using the MCG CLI tool. The following example shows a role configuration that allows two MCG users ( [email protected] and [email protected] ) to assume a certain user's role: Assign the role configuration by using the MCG CLI tool. Collect the following information before proceeding to assume the role as it is needed for the subsequent steps: The access key ID and secret access key of the assumer (the user who assumes the role) The MCG STS endpoint, which can be retrieved by using the command: The access key ID of the assumed user. The value of role_name in your role configuration.
A name of your choice for the role session. After the role configuration is ready, assume the role with the appropriate user's credentials (fill in the data described in the previous step): Note Adding --no-verify-ssl might be necessary depending on your cluster's configuration. The resulting output contains the access key ID, secret access key, and session token that can be used for executing actions while assuming the other user's role. You can use the credentials generated after the assume role steps as shown in the following example:
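The original example is not reproduced in this section. A sketch of the assume role call and of using the temporary credentials it returns (the endpoint values and the role ARN format are assumptions based on the information collected above) might look like this:

# Assume the role; add --no-verify-ssl if your cluster uses self-signed certificates.
AWS_ACCESS_KEY_ID=<assumer_access_key_id> \
AWS_SECRET_ACCESS_KEY=<assumer_secret_access_key> \
aws sts assume-role \
  --endpoint-url <mcg_sts_endpoint> \
  --role-arn "arn:aws:sts::<assumed_user_access_key_id>:role/<role_name>" \
  --role-session-name <role_session_name>

# Use the temporary credentials (AccessKeyId, SecretAccessKey, SessionToken) from the output.
AWS_ACCESS_KEY_ID=<temporary_access_key_id> \
AWS_SECRET_ACCESS_KEY=<temporary_secret_access_key> \
AWS_SESSION_TOKEN=<temporary_session_token> \
aws s3 ls --endpoint-url <mcg_s3_endpoint>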
[ "oc describe noobaa -n openshift-storage", "Name: noobaa Namespace: openshift-storage Labels: <none> Annotations: <none> API Version: noobaa.io/v1alpha1 Kind: NooBaa Metadata: Creation Timestamp: 2019-07-29T16:22:06Z Generation: 1 Resource Version: 6718822 Self Link: /apis/noobaa.io/v1alpha1/namespaces/openshift-storage/noobaas/noobaa UID: 019cfb4a-b21d-11e9-9a02-06c8de012f9e Spec: Status: Accounts: Admin: Secret Ref: Name: noobaa-admin Namespace: openshift-storage Actual Image: noobaa/noobaa-core:4.0 Observed Generation: 1 Phase: Ready Readme: Welcome to NooBaa! ----------------- Welcome to NooBaa! ----------------- NooBaa Core Version: NooBaa Operator Version: Lets get started: 1. Connect to Management console: Read your mgmt console login information (email & password) from secret: \"noobaa-admin\". kubectl get secret noobaa-admin -n openshift-storage -o json | jq '.data|map_values(@base64d)' Open the management console service - take External IP/DNS or Node Port or use port forwarding: kubectl port-forward -n openshift-storage service/noobaa-mgmt 11443:443 & open https://localhost:11443 2. Test S3 client: kubectl port-forward -n openshift-storage service/s3 10443:443 & 1 NOOBAA_ACCESS_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d') 2 NOOBAA_SECRET_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d') alias s3='AWS_ACCESS_KEY_ID=USDNOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=USDNOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3' s3 ls Services: Service Mgmt: External DNS: https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443 Internal DNS: https://noobaa-mgmt.openshift-storage.svc:443 Internal IP: https://172.30.235.12:443 Node Ports: https://10.0.142.103:31385 Pod Ports: https://10.131.0.19:8443 serviceS3: External DNS: 3 https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443 Internal DNS: https://s3.openshift-storage.svc:443 Internal IP: https://172.30.86.41:443 Node Ports: https://10.0.142.103:31011 Pod Ports: https://10.131.0.19:6443", "noobaa status -n openshift-storage", "INFO[0000] Namespace: openshift-storage INFO[0000] INFO[0000] CRD Status: INFO[0003] ✅ Exists: CustomResourceDefinition \"noobaas.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"backingstores.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"bucketclasses.noobaa.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbucketclaims.objectbucket.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbuckets.objectbucket.io\" INFO[0004] INFO[0004] Operator Status: INFO[0004] ✅ Exists: Namespace \"openshift-storage\" INFO[0004] ✅ Exists: ServiceAccount \"noobaa\" INFO[0005] ✅ Exists: Role \"ocs-operator.v0.0.271-6g45f\" INFO[0005] ✅ Exists: RoleBinding \"ocs-operator.v0.0.271-6g45f-noobaa-f9vpj\" INFO[0006] ✅ Exists: ClusterRole \"ocs-operator.v0.0.271-fjhgh\" INFO[0006] ✅ Exists: ClusterRoleBinding \"ocs-operator.v0.0.271-fjhgh-noobaa-pdxn5\" INFO[0006] ✅ Exists: Deployment \"noobaa-operator\" INFO[0006] INFO[0006] System Status: INFO[0007] ✅ Exists: NooBaa \"noobaa\" INFO[0007] ✅ Exists: StatefulSet \"noobaa-core\" INFO[0007] ✅ Exists: Service \"noobaa-mgmt\" INFO[0008] ✅ Exists: Service \"s3\" INFO[0008] ✅ Exists: Secret \"noobaa-server\" 
INFO[0008] ✅ Exists: Secret \"noobaa-operator\" INFO[0008] ✅ Exists: Secret \"noobaa-admin\" INFO[0009] ✅ Exists: StorageClass \"openshift-storage.noobaa.io\" INFO[0009] ✅ Exists: BucketClass \"noobaa-default-bucket-class\" INFO[0009] ✅ (Optional) Exists: BackingStore \"noobaa-default-backing-store\" INFO[0010] ✅ (Optional) Exists: CredentialsRequest \"noobaa-cloud-creds\" INFO[0010] ✅ (Optional) Exists: PrometheusRule \"noobaa-prometheus-rules\" INFO[0010] ✅ (Optional) Exists: ServiceMonitor \"noobaa-service-monitor\" INFO[0011] ✅ (Optional) Exists: Route \"noobaa-mgmt\" INFO[0011] ✅ (Optional) Exists: Route \"s3\" INFO[0011] ✅ Exists: PersistentVolumeClaim \"db-noobaa-core-0\" INFO[0011] ✅ System Phase is \"Ready\" INFO[0011] ✅ Exists: \"noobaa-admin\" #------------------# #- Mgmt Addresses -# #------------------# ExternalDNS : [https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31385] InternalDNS : [https://noobaa-mgmt.openshift-storage.svc:443] InternalIP : [https://172.30.235.12:443] PodPorts : [https://10.131.0.19:8443] #--------------------# #- Mgmt Credentials -# #--------------------# email : [email protected] password : HKLbH1rSuVU0I/souIkSiA== #----------------# #- S3 Addresses -# #----------------# 1 ExternalDNS : [https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31011] InternalDNS : [https://s3.openshift-storage.svc:443] InternalIP : [https://172.30.86.41:443] PodPorts : [https://10.131.0.19:6443] #------------------# #- S3 Credentials -# #------------------# 2 AWS_ACCESS_KEY_ID : jVmAsu9FsvRHYmfjTiHV 3 AWS_SECRET_ACCESS_KEY : E//420VNedJfATvVSmDz6FMtsSAzuBv6z180PT5c #------------------# #- Backing Stores -# #------------------# NAME TYPE TARGET-BUCKET PHASE AGE noobaa-default-backing-store aws-s3 noobaa-backing-store-15dc896d-7fe0-4bed-9349-5942211b93c9 Ready 141h35m32s #------------------# #- Bucket Classes -# #------------------# NAME PLACEMENT PHASE AGE noobaa-default-bucket-class {Tiers:[{Placement: BackingStores:[noobaa-default-backing-store]}]} Ready 141h35m33s #-----------------# #- Bucket Claims -# #-----------------# No OBC's found.", "AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls", "oc get backingstore NAME TYPE PHASE AGE noobaa-default-backing-store pv-pool Creating 102s", "oc patch noobaa/noobaa --type json --patch='[{\"op\":\"add\",\"path\":\"/spec/manualDefaultBackingStore\",\"value\":true}]'", "noobaa backingstore create pv-pool _NEW-DEFAULT-BACKING-STORE_ --num-volumes 1 --pv-size-gb 16", "noobaa account update [email protected] --new_default_resource=_NEW-DEFAULT-BACKING-STORE_", "oc patch Bucketclass noobaa-default-bucket-class -n openshift-storage --type=json --patch='[{\"op\": \"replace\", \"path\": \"/spec/placementPolicy/tiers/0/backingStores/0\", \"value\": \"NEW-DEFAULT-BACKING-STORE\"}]'", "oc delete backingstore noobaa-default-backing-store -n openshift-storage | oc patch -n openshift-storage backingstore/noobaa-default-backing-store --type json --patch='[ { \"op\": \"remove\", \"path\": \"/metadata/finalizers\" } ]'", "noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket 
<bucket-name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"aws-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-aws-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: aws-s3", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::123456789123:oidc-provider/mybucket-oidc.s3.us-east-2.amazonaws.com\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"mybucket-oidc.s3.us-east-2.amazonaws.com:sub\": [ \"system:serviceaccount:openshift-storage:noobaa\", \"system:serviceaccount:openshift-storage:noobaa-core\", \"system:serviceaccount:openshift-storage:noobaa-endpoint\" ] } } } ] }", "#!/bin/bash set -x This is a sample script to help you deploy MCG on AWS STS cluster. This script shows how to create role-policy and then create the role in AWS. For more information see: https://docs.openshift.com/rosa/authentication/assuming-an-aws-iam-role-for-a-service-account.html WARNING: This is a sample script. You need to adjust the variables based on your requirement. Variables : user variables - REPLACE these variables with your values: ROLE_NAME=\"<role-name>\" # role name that you pick in your AWS account NAMESPACE=\"<namespace>\" # namespace name where MCG is running. For OpenShift Data Foundation, it is openshift-storage. MCG variables SERVICE_ACCOUNT_NAME_1=\"noobaa\" # The service account name of deployment operator SERVICE_ACCOUNT_NAME_2=\"noobaa-endpoint\" # The service account name of deployment endpoint SERVICE_ACCOUNT_NAME_3=\"noobaa-core\" # The service account name of statefulset core AWS variables Make sure these values are not empty (AWS_ACCOUNT_ID, OIDC_PROVIDER) AWS_ACCOUNT_ID is your AWS account number AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) If you want to create the role before using the cluster, replace this field too. The OIDC provider is in the structure: 1) <OIDC-bucket>.s3.<aws-region>.amazonaws.com. 
for OIDC bucket configurations are in an S3 public bucket 2) `<characters>.cloudfront.net` for OIDC bucket configurations in an S3 private bucket with a public CloudFront distribution URL OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") the permission (S3 full access) POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" Creating the role (with AWS command line interface) read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_1}\", \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_2}\", \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_3}\" ] } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDROLE_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... \" aws iam attach-role-policy --role-name \"USDROLE_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"", "noobaa backingstore create aws-sts-s3 <backingstore-name> --aws-sts-arn=<aws-sts-role-arn> --region=<region> --target-bucket=<target-bucket>", "noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"ibm-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-ibm-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos", "noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"azure-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-azure-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob", "noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa 
\"noobaa\" INFO[0002] ✅ Created: BackingStore \"google-gcp\" INFO[0002] ✅ Created: Secret \"backing-store-google-cloud-storage-gcp\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage", "noobaa -n openshift-storage backingstore create pv-pool <backingstore_name> --num-volumes <NUMBER OF VOLUMES> --pv-size-gb <VOLUME SIZE> --request-cpu <CPU REQUEST> --request-memory <MEMORY REQUEST> --limit-cpu <CPU LIMIT> --limit-memory <MEMORY LIMIT> --storage-class <LOCAL STORAGE CLASS>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> cpu: <CPU REQUEST> memory: <MEMORY REQUEST> limits: cpu: <CPU LIMIT> memory: <MEMORY LIMIT> storageClass: <LOCAL STORAGE CLASS> type: pv-pool", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Exists: BackingStore \"local-mcg-storage\"", "noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage", "get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"rgw-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-rgw-resource\"", "apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: 
openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "noobaa account create <noobaa-account-name> [flags]", "noobaa account create testaccount --full_permission --nsfs_account_config --gid 10001 --uid 10001 -default_resource fs_namespacestore", "NooBaaAccount spec: allow_bucket_creation: true Allowed_buckets: full_permission: true permission_list: [] default_resource: noobaa-default-namespace-store Nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001 INFO[0006] ✅ Exists: Secret \"noobaa-account-testaccount\" Connection info: AWS_ACCESS_KEY_ID : <aws-access-key-id> AWS_SECRET_ACCESS_KEY : <aws-secret-access-key>", "noobaa account list NAME ALLOWED_BUCKETS DEFAULT_RESOURCE PHASE AGE testaccount [*] noobaa-default-backing-store Ready 1m17s", "oc get noobaaaccount/testaccount -o yaml spec: allow_bucket_creation: true allowed_buckets: full_permission: true permission_list: [] default_resource: noobaa-default-namespace-store nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001", "oc get ns 
<application_namespace> -o yaml | grep scc", "oc get ns testnamespace -o yaml | grep scc openshift.io/sa.scc.mcs: s0:c26,c5 openshift.io/sa.scc.supplemental-groups: 1000660000/10000 openshift.io/sa.scc.uid-range: 1000660000/10000", "oc project <application_namespace>", "oc project testnamespace", "oc get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-write-workload-generator-no-cache-pv-claim Bound pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX ocs-storagecluster-cephfs 12s", "oc get pod NAME READY STATUS RESTARTS AGE cephfs-write-workload-generator-no-cache-1-cv892 1/1 Running 0 11s", "oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'", "oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}' {\"name\":\"app-persistent-storage\",\"persistentVolumeClaim\":{\"claimName\":\"cephfs-write-workload-generator-no-cache-pv-claim\"}}", "oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'", "oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}' [{\"mountPath\":\"/mnt/pv\",\"name\":\"app-persistent-storage\"},{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"name\":\"kube-api-access-8tnc5\",\"readOnly\":true}]", "oc exec -it <pod_name> -- df <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- df /mnt/pv main Filesystem 1K-blocks Used Available Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10485760 0 10485760 0% /mnt/pv", "oc exec -it <pod_name> -- ls -latrZ <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 
3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..", "oc get pv | grep <pv_name>", "oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX Delete Bound testnamespace/cephfs-write-workload-generator-no-cache-pv-claim ocs-storagecluster-cephfs 47s", "oc get pv <pv_name> -o yaml", "oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o yaml apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com creationTimestamp: \"2022-05-25T06:27:49Z\" finalizers: - kubernetes.io/pv-protection name: pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a resourceVersion: \"177458\" uid: 683fa87b-5192-4ccf-af2f-68c6bcf8f500 spec: accessModes: - ReadWriteMany capacity: storage: 10Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: cephfs-write-workload-generator-no-cache-pv-claim namespace: testnamespace resourceVersion: \"177453\" uid: aa58fb91-c3d2-475b-bbee-68452a613e1a csi: controllerExpandSecretRef: name: rook-csi-cephfs-provisioner namespace: openshift-storage driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1653458225664-8081-openshift-storage.cephfs.csi.ceph.com subvolumeName: csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213 subvolumePath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213 persistentVolumeReclaimPolicy: Delete storageClassName: ocs-storagecluster-cephfs volumeMode: Filesystem status: phase: Bound", "cat << EOF >> pv-openshift-storage.yaml apiVersion: v1 kind: PersistentVolume metadata: name: cephfs-pv-legacy-openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany capacity: storage: 10Gi 1 csi: driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: # Volume Attributes can be copied from the Source testnamespace PV \"clusterID\": \"openshift-storage\" \"fsName\": \"ocs-storagecluster-cephfilesystem\" \"staticVolume\": \"true\" # rootpath is the subvolumePath: you copied from the Source testnamespace PV \"rootPath\": /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213-clone 2 persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-legacy namespace: openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany resources: requests: storage: 10Gi 3 volumeMode: Filesystem # volumeName should be same as PV name volumeName: cephfs-pv-legacy-openshift-storage EOF", "oc create -f <YAML_file>", "oc create -f pv-openshift-storage.yaml persistentvolume/cephfs-pv-legacy-openshift-storage created persistentvolumeclaim/cephfs-pvc-legacy created", "oc get pvc -n openshift-storage NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc-legacy Bound cephfs-pv-legacy-openshift-storage 10Gi RWX 14s", "oc project openshift-storage Now using project \"openshift-storage\" on server \"https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443\".", "noobaa namespacestore create nsfs 
<nsfs_namespacestore> --pvc-name=' <cephfs_pvc_name> ' --fs-backend='CEPH_FS'", "noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'", "oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/ <nsfs_namespacestore>", "oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace Filesystem Size Used Avail Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10G 0 10G 0% /nsfs/legacy-namespace", "noobaa account create <user_account> --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='legacy-namespace'", "noobaa account create leguser --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'", "oc exec -it <pod_name> -- mkdir <mount_path> /nsfs", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs", "noobaa api bucket_api create_bucket '{ \"name\": \" <bucket_name> \", \"namespace\":{ \"write_resource\": { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }] } }'", "noobaa api bucket_api create_bucket '{ \"name\": \"legacy-bucket\", \"namespace\":{ \"write_resource\": { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }] } }'", "oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/ <nsfs_namespacstore>", "oc exec -it noobaa-endpoint-5875f467f5-546c6 -n openshift-storage -- ls -ltraZ /nsfs/legacy-namespace total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c0,c26 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 30 May 25 06:35 ..", "oc exec -it <pod_name> -- ls -latrZ <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 
3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..", "noobaa bucket delete <bucket_name>", "noobaa bucket delete legacy-bucket", "noobaa account delete <user_account>", "noobaa account delete leguser", "noobaa namespacestore delete <nsfs_namespacestore>", "noobaa namespacestore delete legacy-namespace", "oc delete pv <cephfs_pv_name>", "oc delete pvc <cephfs_pvc_name>", "oc delete pv cephfs-pv-legacy-openshift-storage", "oc delete pvc cephfs-pvc-legacy", "oc get ns openshift-storage -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0", "oc edit ns <appplication_namespace>", "oc edit ns testnamespace", "oc get ns <application_namespace> -o yaml | grep sa.scc.mcs", "oc get ns testnamespace -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0", "cat << EOF >> scc.yaml allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:authenticated kind: SecurityContextConstraints metadata: annotations: name: restricted-pvselinux priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - MKNOD - SETUID - SETGID runAsUser: type: MustRunAsRange seLinuxContext: seLinuxOptions: level: s0:c26,c0 type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret EOF", "oc create -f scc.yaml", "oc create serviceaccount <service_account_name>", "oc create serviceaccount testnamespacesa", "oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>", "oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa", "oc patch dc/ <pod_name> '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \" <service_account_name> \"}}}}'", "oc patch dc/cephfs-write-workload-generator-no-cache --patch '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \"testnamespacesa\"}}}}'", "oc edit dc <pod_name> -n <application_namespace>", "spec: template: metadata: securityContext: seLinuxOptions: Level: <security_context_value>", "oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace", "spec: template: metadata: securityContext: seLinuxOptions: level: s0:c26,c0", "oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext", "oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext securityContext: seLinuxOptions: level: s0:c26,c0", "noobaa account passwd <noobaa_account_name> [options]", "noobaa account passwd FATA[0000] ❌ Missing expected arguments: <noobaa_account_name> Options: --new-password='': New Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in t he shell history --old-password='': Old Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in the shell history --retype-new-password='': Retype new Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in the shell history Usage: noobaa account passwd <noobaa-account-name> [flags] 
[options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).", "noobaa account passwd [email protected]", "Enter old-password: [got 24 characters] Enter new-password: [got 7 characters] Enter retype-new-password: [got 7 characters] INFO[0017] ✅ Exists: Secret \"noobaa-admin\" INFO[0017] ✅ Exists: NooBaa \"noobaa\" INFO[0017] ✅ Exists: Service \"noobaa-mgmt\" INFO[0017] ✅ Exists: Secret \"noobaa-operator\" INFO[0017] ✅ Exists: Secret \"noobaa-admin\" INFO[0017] ✈\\ufe0f RPC: account.reset_password() Request: {Email:[email protected] VerificationPassword: * Password: *} WARN[0017] RPC: GetConnection creating connection to wss://localhost:58460/rpc/ 0xc000402ae0 INFO[0017] RPC: Connecting websocket (0xc000402ae0) &{RPC:0xc000501a40 Address:wss://localhost:58460/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0017] RPC: Connected websocket (0xc000402ae0) &{RPC:0xc000501a40 Address:wss://localhost:58460/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0020] ✅ RPC: account.reset_password() Response OK: took 2907.1ms INFO[0020] ✅ Updated: \"noobaa-admin\" INFO[0020] ✅ Successfully reset the password for the account \"[email protected]\"", "-------------------- - Mgmt Credentials - -------------------- email : [email protected] password : ***", "noobaa account credentials <noobaa-account-name> [options]", "noobaa account credentials [email protected]", "noobaa account credentials [email protected] Enter access-key: [got 20 characters] Enter secret-key: [got 40 characters] INFO[0026] ❌ Not Found: NooBaaAccount \"[email protected]\" INFO[0026] ✅ Exists: NooBaa \"noobaa\" INFO[0026] ✅ Exists: Service \"noobaa-mgmt\" INFO[0026] ✅ Exists: Secret \"noobaa-operator\" INFO[0026] ✅ Exists: Secret \"noobaa-admin\" INFO[0026] ✈\\ufe0f RPC: account.update_account_keys() Request: {Email:[email protected] AccessKeys:{AccessKey: * SecretKey: }} WARN[0026] RPC: GetConnection creating connection to wss://localhost:33495/rpc/ 0xc000cd9980 INFO[0026] RPC: Connecting websocket (0xc000cd9980) &{RPC:0xc0001655e0 Address:wss://localhost:33495/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0026] RPC: Connected websocket (0xc000cd9980) &{RPC:0xc0001655e0 Address:wss://localhost:33495/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0026] ✅ RPC: account.update_account_keys() Response OK: took 42.7ms INFO[0026] ✈\\ufe0f RPC: account.read_account() Request: {Email:[email protected]} INFO[0026] ✅ RPC: account.read_account() Response OK: took 2.0ms INFO[0026] ✅ Updated: \"noobaa-admin\" INFO[0026] ✅ Successfully updated s3 credentials for the account \"[email protected]\" INFO[0026] ✅ Exists: Secret \"noobaa-admin\" Connection info: AWS_ACCESS_KEY_ID : AWS_SECRET_ACCESS_KEY : *", "noobaa account credentials my-account --access-key=ABCDEF1234567890ABCD --secret-key=ABCDE12345+FGHIJ67890/KLMNOPQRSTUV123456", "noobaa account status <noobaa-account-name> --show-secrets", "noobaa account list", "NAME ALLOWED_BUCKETS DEFAULT_RESOURCE PHASE AGE account-test [*] noobaa-default-backing-store Ready 14m17s test2 [first.bucket] noobaa-default-backing-store Ready 3m12s", "oc get noobaaaccount", "NAME PHASE AGE account-test Ready 15m test2 Ready 3m59s", "noobaa account regenerate 
<noobaa_account_name> [options]", "noobaa account regenerate FATA[0000] ❌ Missing expected arguments: <noobaa-account-name> Usage: noobaa account regenerate <noobaa-account-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).", "noobaa account regenerate account-test", "INFO[0000] You are about to regenerate an account's security credentials. INFO[0000] This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials. INFO[0000] are you sure? y/n", "INFO[0015] ✅ Exists: Secret \"noobaa-account-account-test\" Connection info: AWS_ACCESS_KEY_ID : *** AWS_SECRET_ACCESS_KEY : ***", "noobaa obc list", "NAMESPACE NAME BUCKET-NAME STORAGE-CLASS BUCKET-CLASS PHASE default obc-test obc-test-35800e50-8978-461f-b7e0-7793080e26ba default.noobaa.io noobaa-default-bucket-class Bound", "oc get obc", "NAME STORAGE-CLASS PHASE AGE obc-test default.noobaa.io Bound 38s", "noobaa obc regenerate <bucket_claim_name> [options]", "noobaa obc regenerate FATA[0000] ❌ Missing expected arguments: <bucket-claim-name> Usage: noobaa obc regenerate <bucket-claim-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).", "noobaa obc regenerate obc-test", "INFO[0000] You are about to regenerate an OBC's security credentials. INFO[0000] This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials. INFO[0000] are you sure? y/n", "INFO[0022] ✅ RPC: bucket.read_bucket() Response OK: took 95.4ms ObjectBucketClaim info: Phase : Bound ObjectBucketClaim : kubectl get -n default objectbucketclaim obc-test ConfigMap : kubectl get -n default configmap obc-test Secret : kubectl get -n default secret obc-test ObjectBucket : kubectl get objectbucket obc-default-obc-test StorageClass : kubectl get storageclass default.noobaa.io BucketClass : kubectl get -n default bucketclass noobaa-default-bucket-class Connection info: BUCKET_HOST : s3.default.svc BUCKET_NAME : obc-test-35800e50-8978-461f-b7e0-7793080e26ba BUCKET_PORT : 443 AWS_ACCESS_KEY_ID : *** AWS_SECRET_ACCESS_KEY : *** Shell commands: AWS S3 Alias : alias s3='AWS_ACCESS_KEY_ID=*** AWS_SECRET_ACCESS_KEY =*** aws s3 --no-verify-ssl --endpoint-url ***' Bucket status: Name : obc-test-35800e50-8978-461f-b7e0-7793080e26ba Type : REGULAR Mode : OPTIMAL ResiliencyStatus : OPTIMAL QuotaStatus : QUOTA_NOT_SET Num Objects : 0 Data Size : 0.000 B Data Size Reduced : 0.000 B Data Space Avail : 13.261 GB Num Objects Avail : 9007199254740991", "oc edit noobaa -n openshift-storage noobaa", "spec: loadBalancerSourceSubnets: s3: [\"10.0.0.0/16\", \"192.168.10.0/32\"] sts: - \"10.0.0.0/16\" - \"192.168.10.0/32\"", "oc get svc -n openshift-storage <s3 | sts> -o=go-template='{{ .spec.loadBalancerSourceRanges }}'", "noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror", "noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror", "additionalConfig: bucketclass: mirror-to-aws", "{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ 
\"arn:aws:s3:::john_bucket\" ] } ] }", "aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file:// BucketPolicy", "aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy", "noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--allowed_buckets=[]] [--default_resource=''] [--full_permission=false]", "noobaa obc create <bucket-claim-name> -n openshift-storage --replication-policy /path/to/json-file.json", "[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]", "noobaa obc create my-bucket-claim -n openshift-storage --replication-policy /path/to/json-file.json", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <desired-bucket-claim> namespace: <desired-namespace> spec: generateBucketName: <desired-bucket-name> storageClassName: openshift-storage.noobaa.io additionalConfig: replicationPolicy: {\"rules\": [{ \"rule_id\": \"\", \"destination_bucket\": \"\", \"filter\": {\"prefix\": \"\"}}]}", "noobaa -n openshift-storage bucketclass create placement-bucketclass <bucketclass-name> --backingstores <backingstores> --replication-policy=/path/to/json-file.json", "[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]", "noobaa -n openshift-storage bucketclass create placement-bucketclass bc --backingstores azure-blob-ns --replication-policy=/path/to/json-file.json", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: <desired-app-label> name: <desired-bucketclass-name> namespace: <desired-namespace> spec: placementPolicy: tiers: - backingstores: - <backingstore> placement: Spread replicationPolicy: [{ \"rule_id\": \" <rule id> \", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \" <object name prefix> \"}}]", "replicationPolicy: '{\"rules\":[{\"rule_id\":\"<RULE ID>\", \"destination_bucket\":\"<DEST>\", \"filter\": {\"prefix\": \"<PREFIX>\"}}], \"log_replication_info\": {\"logs_location\": {\"logs_bucket\": \"<LOGS_BUCKET>\"}}}'", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: TenantID: <AZURE TENANT ID ENCODED IN BASE64> ApplicationID: <AZURE APPLICATIOM ID ENCODED IN BASE64> ApplicationSecret: <AZURE APPLICATION SECRET ENCODED IN BASE64> LogsAnalyticsWorkspaceID: <AZURE LOG ANALYTICS WORKSPACE ID ENCODED IN BASE64> AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>", "replicationPolicy:'{\"rules\":[ {\"rule_id\":\"ID goes here\", \"sync_deletions\": \"<true or false>\"\", \"destination_bucket\":object bucket name\"} ], \"log_replication_info\":{\"endpoint_type\":\"AZURE\"}}'", "nb bucket create data.bucket", "nb bucket create log.bucket", "nb api bucket_api put_bucket_logging '{ \"name\": \"data.bucket\", \"log_bucket\": \"log.bucket\", \"log_prefix\": \"data-bucket-logs\" }'", "alias s3api_alias='AWS_ACCESS_KEY_ID=USDNOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=USDNOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3api'", "{ \"LoggingEnabled\": { \"TargetBucket\": \"<log-bucket-name>\", \"TargetPrefix\": \"<prefix/empty-string>\" } }", "s3api_alias put-bucket-logging --endpoint <ep> --bucket <source-bucket> --bucket-logging-status file://setlogging.json --no-verify-ssl", "nb api bucket_api get_bucket_logging '{ \"name\": \"data.bucket\" }'", "s3api_alias 
get-bucket-logging --no-verify-ssl --endpoint <ep> --bucket <source-bucket>", "s3_alias cp s3://logs.bucket/data-bucket-logs/logs.bucket.bucket_data-bucket-logs_1719230150.log - | tail -n 2 Jun 24 14:00:02 10-XXX-X-XXX.sts.openshift-storage.svc.cluster.local {\"noobaa_bucket_logging\":\"true\",\"op\":\"GET\",\"bucket_owner\":\"[email protected]\",\"source_bucket\":\"data.bucket\",\"object_key\":\"/data.bucket?list-type=2&prefix=data-bucket-logs&delimiter=%2F&encoding-type=url\",\"log_bucket\":\"logs.bucket\",\"remote_ip\":\"100.XX.X.X\",\"request_uri\":\"/data.bucket?list-type=2&prefix=data-bucket-logs&delimiter=%2F&encoding-type=url\",\"request_id\":\"luv2XXXX-ctyg2k-12gs\"} Jun 24 14:00:06 10-XXX-X-XXX.s3.openshift-storage.svc.cluster.local {\"noobaa_bucket_logging\":\"true\",\"op\":\"PUT\",\"bucket_owner\":\"[email protected]\",\"source_bucket\":\"data.bucket\",\"object_key\":\"/data.bucket/B69EC83F-0177-44D8-A8D1-4A10C5A5AB0F.file\",\"log_bucket\":\"logs.bucket\",\"remote_ip\":\"100.XX.X.X\",\"request_uri\":\"/data.bucket/B69EC83F-0177-44D8-A8D1-4A10C5A5AB0F.file\",\"request_id\":\"luv2XXXX-9syea5-x5z\"}", "nb api bucket_api delete_bucket_logging '{ \"name\": \"data.bucket\" }'", "bucketLogging: { loggingType: guaranteed }", "bucketLogging: { loggingType: guaranteed bucketLoggingPVC: <pvc-name> }", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io", "apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY", "oc apply -f <yaml.file>", "oc get cm <obc-name> -o yaml", "oc get secret <obc_name> -o yaml", "noobaa obc create <obc-name> -n openshift-storage", "INFO[0001] ✅ Created: ObjectBucketClaim \"test21obc\"", "oc get obc -n openshift-storage", "NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s", "oc get obc test21obc -o yaml -n openshift-storage", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound", "oc get -n openshift-storage secret test21obc -o yaml", "apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: 
openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque", "oc get -n openshift-storage cm test21obc -o yaml", "apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3", "noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>", "noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>", "noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>", "noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>", "oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"multiCloudGateway\": {\"endpoints\": {\"minCount\": 3,\"maxCount\": 10}}}}'", "spec: pvPool: resources: limits: cpu: 1000m memory: 4000Mi requests: cpu: 800m memory: 800Mi storage: 50Gi", "oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.crt}' | base64 -d oc get secrets/<secret_name> -o 
jsonpath='{.data..tls\\.key}' | base64 -d", "oc get secrets/<secret_name> -o jsonpath='{.data.cert}' | base64 -d", "oc get secret <object bucket claim name> -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode oc get secret <object bucket claim name> -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode", "oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_HOST}' oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_PORT}' oc get cm <object bucket claim name> -o jsonpath='{.data.BUCKET_NAME}'", "oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.AccessKey}' | base64 --decode oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.SecretKey}' | base64 --decode oc get secret rook-ceph-object-user-<object-store-cr-name>-<object-user-cr-name> -o jsonpath='{.data.Endpoint}' | base64 --decode", "'{\"role_name\": \"AllowTwoAssumers\", \"assume_role_policy\": {\"version\": \"2012-10-17\", \"statement\": [ {\"action\": [\"sts:AssumeRole\"], \"effect\": \"allow\", \"principal\": [\"[email protected]\", \"[email protected]\"]}]}}'", "mcg sts assign-role --email <assumed user's username> --role_config '{\"role_name\": \"AllowTwoAssumers\", \"assume_role_policy\": {\"version\": \"2012-10-17\", \"statement\": [ {\"action\": [\"sts:AssumeRole\"], \"effect\": \"allow\", \"principal\": [\"[email protected]\", \"[email protected]\"]}]}}'", "oc -n openshift-storage get route", "AWS_ACCESS_KEY_ID=<aws-access-key-id> AWS_SECRET_ACCESS_KEY=<aws-secret-access-key1> aws --endpoint-url <mcg-sts-endpoint> sts assume-role --role-arn arn:aws:sts::<assumed-user-access-key-id>:role/<role-name> --role-session-name <role-session-name>", "AWS_ACCESS_KEY_ID=<aws-access-key-id> AWS_SECRET_ACCESS_KEY=<aws-secret-access-key1> AWS_SESSION_TOKEN=<session token> aws --endpoint-url <mcg-s3-endpoint> s3 ls" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/managing_hybrid_and_multicloud_resources/setting-a-bucket-class-replication-policy_rhodf
Chapter 1. Preparing to install on Azure Stack Hub
Chapter 1. Preparing to install on Azure Stack Hub 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You have installed Azure Stack Hub version 2008 or later. 1.2. Requirements for installing OpenShift Container Platform on Azure Stack Hub Before installing OpenShift Container Platform on Microsoft Azure Stack Hub, you must configure an Azure account. See Configuring an Azure Stack Hub account for details about account configuration, account limits, DNS zone configuration, required roles, and creating service principals. 1.3. Choosing a method to install OpenShift Container Platform on Azure Stack Hub You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program, by using the following method: Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure : You can install OpenShift Container Platform on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program. 1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that you provision, by using the following method: Installing a cluster on Azure Stack Hub using ARM templates : You can install OpenShift Container Platform on Azure Stack Hub by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation. 1.4. Next steps Configuring an Azure Stack Hub account
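The account preparation described above includes creating a service principal for the installation program. The following Azure CLI sketch shows one way to do that; it is an illustration only, not a procedure from this document: the resource manager endpoint, API profile value, subscription ID, and service principal name are hypothetical placeholders, and the exact endpoints come from your Azure Stack Hub operator.

az cloud register -n AzureStackCloud --endpoint-resource-manager "https://management.<region>.<fqdn>"   # register the Azure Stack Hub management endpoint (placeholder URL)
az cloud set -n AzureStackCloud                                                                          # switch the CLI to the Azure Stack Hub environment
az cloud update --profile 2019-03-01-hybrid                                                              # select an API profile supported by Azure Stack Hub (assumed profile name)
az login                                                                                                 # authenticate against the Azure Stack Hub environment
az ad sp create-for-rbac --role Contributor --name <service_principal_name> --scopes /subscriptions/<subscription_id>   # create the service principal; note the returned appId and password

The returned appId and password correspond to the client ID and client secret that the installation program later prompts for.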
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_azure_stack_hub/preparing-to-install-on-azure-stack-hub
9.8.3. File Permissions
9.8.3. File Permissions Once the NFS file system is mounted read/write by a remote host, the only protection each shared file has is its permissions. If two users who share the same user ID value mount the same NFS file system, they can modify each other's files. Additionally, anyone logged in as root on the client system can use the su - command to access any files on the NFS share. By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. Red Hat recommends that this feature be kept enabled. By default, NFS uses root squashing when exporting a file system. This maps the user ID of anyone accessing the NFS share as the root user on their local machine to nobody . Root squashing is controlled by the default option root_squash ; for more information about this option, refer to Section 9.7.1, "The /etc/exports Configuration File" . If possible, never disable root squashing. When exporting an NFS share as read-only, consider using the all_squash option. This option makes every user accessing the exported file system take the user ID of the nfsnobody user.
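To make the root_squash and all_squash behavior described above concrete, here is a minimal /etc/exports sketch; the export paths and client host names are hypothetical examples rather than values from this guide:

/exports/data     client.example.com(rw,sync)                  # root_squash applies by default: a remote root user is mapped to nobody
/exports/public   *.example.com(ro,sync,all_squash)            # read-only export; every remote user is mapped to the nfsnobody user
/exports/unsafe   trusted.example.com(rw,sync,no_root_squash)  # disables root squashing; avoid this unless it is absolutely required

After editing /etc/exports, running exportfs -ra reloads the export table without restarting the NFS service.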
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s2-nfs-security-files
Chapter 8. cert-manager Operator for Red Hat OpenShift
Chapter 8. cert-manager Operator for Red Hat OpenShift 8.1. cert-manager Operator for Red Hat OpenShift overview The cert-manager Operator for Red Hat OpenShift is a cluster-wide service that provides application certificate lifecycle management. The cert-manager Operator for Red Hat OpenShift allows you to integrate with external certificate authorities and provides certificate provisioning, renewal, and retirement. 8.1.1. About the cert-manager Operator for Red Hat OpenShift The cert-manager project introduces certificate authorities and certificates as resource types in the Kubernetes API, which makes it possible to provide certificates on demand to developers working within your cluster. The cert-manager Operator for Red Hat OpenShift provides a supported way to integrate cert-manager into your OpenShift Container Platform cluster. The cert-manager Operator for Red Hat OpenShift provides the following features: Support for integrating with external certificate authorities Tools to manage certificates Ability for developers to self-serve certificates Automatic certificate renewal Important Do not attempt to use both cert-manager Operator for Red Hat OpenShift for OpenShift Container Platform and the community cert-manager Operator at the same time in your cluster. Also, you should not install cert-manager Operator for Red Hat OpenShift for OpenShift Container Platform in multiple namespaces within a single OpenShift cluster. 8.1.2. Supported issuer types The cert-manager Operator for Red Hat OpenShift supports the following issuer types: Automated Certificate Management Environment (ACME) Certificate authority (CA) Self-signed Vault Venafi 8.1.3. Certificate request methods There are two ways to request a certificate using the cert-manager Operator for Red Hat OpenShift: Using the cert-manager.io/CertificateRequest object With this method, a service developer creates a CertificateRequest object with a valid issuerRef pointing to a configured issuer (configured by a service infrastructure administrator). A service infrastructure administrator then accepts or denies the certificate request. Only accepted certificate requests create a corresponding certificate. Using the cert-manager.io/Certificate object With this method, a service developer creates a Certificate object with a valid issuerRef and obtains a certificate from the secret that the Certificate object points to. 8.1.4. Additional resources cert-manager project documentation 8.2. cert-manager Operator for Red Hat OpenShift release notes The cert-manager Operator for Red Hat OpenShift is a cluster-wide service that provides application certificate lifecycle management. These release notes track the development of cert-manager Operator for Red Hat OpenShift. For more information, see About the cert-manager Operator for Red Hat OpenShift . 8.2.1. Release notes for cert-manager Operator for Red Hat OpenShift 1.12.1 Issued: 2023-11-15 The following advisory is available for the cert-manager Operator for Red Hat OpenShift 1.12.1: RHSA-2023:6269-02 Version 1.12.1 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.12.5 . For more information, see the cert-manager project release notes for v1.12.5 . 8.2.1.1. Bug fixes Previously, in a multi-architecture environment, the cert-manager Operator pods were prone to failures because of the invalid node affinity configuration. With this fix, the cert-manager Operator pods run without any failures. ( OCPBUGS-19446 ) 8.2.1.2.
CVEs CVE-2023-44487 CVE-2023-39325 CVE-2023-4527 CVE-2023-4806 CVE-2023-4813 CVE-2023-4911 CVE-2023-38545 CVE-2023-38546 8.2.2. Release notes for cert-manager Operator for Red Hat OpenShift 1.12.0 Issued: 2023-10-02 The following advisories are available for the cert-manager Operator for Red Hat OpenShift 1.12.0: RHEA-2023:5339 RHBA-2023:5412 Version 1.12.0 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.12.4 . For more information, see the cert-manager project release notes for v1.12.4 . 8.2.2.1. Bug fixes Previously, you could not configure the CPU and memory requests and limits for the cert-manager components such as cert-manager controller, CA injector, and Webhook. Now, you can configure the CPU and memory requests and limits for the cert-manager components by using the command-line interface (CLI). For more information, see Overriding CPU and memory limits for the cert-manager components . ( OCPBUGS-13830 ) Previously, if you updated the ClusterIssuer object, the cert-manager Operator for Red Hat OpenShift could not verify and update the change in the cluster issuer. Now, if you modify the ClusterIssuer object, the cert-manager Operator for Red Hat OpenShift verifies the ACME account registration and updates the change. ( OCPBUGS-8210 ) Previously, the cert-manager Operator for Red Hat OpenShift did not support enabling the --enable-certificate-owner-ref flag. Now, the cert-manager Operator for Red Hat OpenShift supports enabling the --enable-certificate-owner-ref flag by adding the spec.controllerConfig.overrideArgs field in the cluster object. After enabling the --enable-certificate-owner-ref flag, cert-manager can automatically delete the secret when the Certificate resource is removed from the cluster. For more information on enabling the --enable-certificate-owner-ref flag and deleting the TLS secret automatically, see Deleting a TLS secret automatically upon Certificate removal ( CM-98 ) Previously, the cert-manager Operator for Red Hat OpenShift could not pull the jetstack-cert-manager-container-v1.12.4-1 image. The cert-manager controller, CA injector, and Webhook pods were stuck in the ImagePullBackOff state. Now, the cert-manager Operator for Red Hat OpenShift pulls the jetstack-cert-manager-container-v1.12.4-1 image to run the cert-manager controller, CA injector, and Webhook pods successfully. ( OCPBUGS-19986 ) 8.2.3. Release notes for cert-manager Operator for Red Hat OpenShift 1.11.5 Issued: 2023-11-15 The following advisory is available for the cert-manager Operator for Red Hat OpenShift 1.11.5: RHSA-2023:6279-03 The golang version is updated to the version 1.20.10 to fix Common Vulnerabilities and Exposures (CVEs). Version 1.11.5 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.11.5 . For more information, see the cert-manager project release notes for v1.11.5 . 8.2.3.1. Bug fixes Previously, in a multi-architecture environment, the cert-manager Operator pods were prone to failures because of the invalid node affinity configuration. With this fix, the cert-manager Operator pods run without any failures. ( OCPBUGS-19446 ) 8.2.3.2. CVEs CVE-2023-44487 CVE-2023-39325 CVE-2023-29409 CVE-2023-2602 CVE-2023-2603 CVE-2023-4527 CVE-2023-4806 CVE-2023-4813 CVE-2023-4911 CVE-2023-28484 CVE-2023-29469 CVE-2023-38545 CVE-2023-38546 8.2.4. 
Release notes for cert-manager Operator for Red Hat OpenShift 1.11.4 Issued: 2023-07-26 The following advisory is available for the cert-manager Operator for Red Hat OpenShift 1.11.4: RHEA-2023:4081 The golang version is updated to the version 1.19.10 to fix Common Vulnerabilities and Exposures (CVEs). Version 1.11.4 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.11.4 . For more information, see the cert-manager project release notes for v1.11.4 . 8.2.4.1. Bug fixes Previously, the cert-manager Operator for Red Hat OpenShift did not allow you to install older versions of the cert-manager Operator for Red Hat OpenShift. Now, you can install older versions of the cert-manager Operator for Red Hat OpenShift using the web console or the command-line interface (CLI). For more information on how to use the web console to install older versions, see Installing the cert-manager Operator for Red Hat OpenShift . ( OCPBUGS-16393 ) 8.2.5. Release notes for cert-manager Operator for Red Hat OpenShift 1.11.1 Issued: 2023-06-21 The following advisory is available for the cert-manager Operator for Red Hat OpenShift 1.11.1: RHEA-2023:113193 Version 1.11.1 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.11.1 . For more information, see the cert-manager project release notes for v1.11.1 . 8.2.5.1. New features and enhancements This is the general availability (GA) release of the cert-manager Operator for Red Hat OpenShift. 8.2.5.1.1. Setting log levels for cert-manager and the cert-manager Operator for Red Hat OpenShift To troubleshoot issues with cert-manager and the cert-manager Operator for Red Hat OpenShift, you can now configure the log level verbosity by setting a log level for cert-manager and the cert-manager Operator for Red Hat OpenShift. For more information, see Configuring log levels for cert-manager and the cert-manager Operator for Red Hat OpenShift . 8.2.5.1.2. Authenticating the cert-manager Operator for Red Hat OpenShift with AWS You can now configure cloud credentials for the cert-manager Operator for Red Hat OpenShift on AWS clusters with Security Token Service (STS) and without STS. For more information, see Authenticating the cert-manager Operator for Red Hat OpenShift on AWS Security Token Service and Authenticating the cert-manager Operator for Red Hat OpenShift on AWS . 8.2.5.1.3. Authenticating the cert-manager Operator for Red Hat OpenShift with GCP You can now configure cloud credentials for the cert-manager Operator for Red Hat OpenShift on GCP clusters with Workload Identity and without Workload Identity. For more information, see Authenticating the cert-manager Operator for Red Hat OpenShift with GCP Workload Identity and Authenticating the cert-manager Operator for Red Hat OpenShift with GCP 8.2.5.2. Bug fixes Previously, the cm-acme-http-solver pod did not use the latest published Red Hat image registry.redhat.io/cert-manager/jetstack-cert-manager-acmesolver-rhel9 . With this release, the cm-acme-http-solver pod uses the latest published Red Hat image registry.redhat.io/cert-manager/jetstack-cert-manager-acmesolver-rhel9 . ( OCPBUGS-10821 ) Previously, the cert-manager Operator for Red Hat OpenShift did not support changing labels for cert-manager pods such as controller, CA injector, and Webhook pods. With this release, you can add labels to cert-manager pods. 
( OCPBUGS-8466 ) Previously, you could not update the log verbosity level in the cert-manager Operator for Red Hat OpenShift. You can now update the log verbosity level by using the OPERATOR_LOG_LEVEL environment variable in its subscription resource. ( OCPBUGS-9994 ) Previously, when uninstalling the cert-manager Operator for Red Hat OpenShift, if you selected the Delete all operand instances for this operator checkbox in the OpenShift Container Platform web console, the Operator was not uninstalled properly. The cert-manager Operator for Red Hat OpenShift is now properly uninstalled. ( OCPBUGS-9960 ) Previously, the cert-manager Operator for Red Hat OpenShift did not support using Google workload identity federation. The cert-manager Operator for Red Hat OpenShift now supports using Google workload identity federation. ( OCPBUGS-9998 ) 8.2.5.3. Known issues After installing the cert-manager Operator for Red Hat OpenShift, if you navigate to Operators Installed Operators and select Operator details in the OpenShift Container Platform web console, you cannot see the cert-manager resources that are created across all namespaces. As a workaround, you can navigate to Home API Explorer to see the cert-manager resources. ( OCPBUGS-11647 ) After uninstalling the cert-manager Operator for Red Hat OpenShift by using the web console, the cert-manager Operator for Red Hat OpenShift does not remove the cert-manager controller, CA injector, and Webhook pods automatically from the cert-manager namespace. As a workaround, you can manually delete the cert-manager controller, CA injector, and Webhook pod deployments present in the cert-manager namespace. ( OCPBUGS-13679 ) 8.2.6. Release notes for cert-manager Operator for Red Hat OpenShift 1.10.3 Issued: 2023-08-08 The following advisory is available for the cert-manager Operator for Red Hat OpenShift 1.10.3: RHSA-2023:4335 Version 1.10.3 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.10.2 . With this release, the version of the cert-manager Operator for Red Hat OpenShift is 1.10.3 but the cert-manager operand version is 1.10.2 . For more information, see the cert-manager project release notes for v1.10.2 . 8.2.6.1. CVEs CVE-2022-41725 CVE-2022-41724 CVE-2023-24536 CVE-2023-24538 CVE-2023-24537 CVE-2023-24534 CVE-2022-41723 CVE-2023-29400 CVE-2023-24540 CVE-2023-24539 8.2.7. Release notes for cert-manager Operator for Red Hat OpenShift 1.10.2 Issued: 2023-03-23 The following advisory is available for the cert-manager Operator for Red Hat OpenShift 1.10.2: RHEA-2023:1238 Version 1.10.2 of the cert-manager Operator for Red Hat OpenShift is based on the upstream cert-manager version v1.10.2 . For more information, see the cert-manager project release notes for v1.10.2 . Important If you used the Technology Preview version of the cert-manager Operator for Red Hat OpenShift, you must uninstall it and remove all related resources for the Technology Preview version before installing this version of the cert-manager Operator for Red Hat OpenShift. For more information, see Uninstalling the cert-manager Operator for Red Hat OpenShift . 8.2.7.1. New features and enhancements This is the general availability (GA) release of the cert-manager Operator for Red Hat OpenShift.
The following issuer types are supported: Automated Certificate Management Environment (ACME) Certificate authority (CA) Self-signed The following ACME challenge types are supported: DNS-01 HTTP-01 The following DNS-01 providers for ACME issuers are supported: Amazon Route 53 Azure DNS Google Cloud DNS The cert-manager Operator for Red Hat OpenShift now supports injecting custom CA certificates and propagating cluster-wide egress proxy environment variables. You can customize the cert-manager Operator for Red Hat OpenShift API fields by overriding environment variables and arguments. For more information, see Customizing cert-manager Operator API fields You can enable monitoring and metrics collection for the cert-manager Operator for Red Hat OpenShift by using a service monitor to perform the custom metrics scraping. After you have enabled monitoring for the cert-manager Operator for Red Hat OpenShift, you can query its metrics by using the OpenShift Container Platform web console. For more information, see Enabling monitoring for the cert-manager Operator for Red Hat OpenShift 8.2.7.2. Bug fixes Previously, the unsupportedConfigOverrides field replaced user-provided arguments instead of appending them. Now, the unsupportedConfigOverrides field properly appends user-provided arguments. ( CM-23 ) Warning Using the unsupportedConfigOverrides section to modify the configuration of an Operator is unsupported and might block cluster upgrades. Previously, the cert-manager Operator for Red Hat OpenShift was installed as a cluster Operator. With this release, the cert-manager Operator for Red Hat OpenShift is now properly installed as an OLM Operator. ( CM-35 ) 8.2.7.3. Known issues Using Route objects is not fully supported. Currently, to use cert-manager Operator for Red Hat OpenShift with Routes , users must create Ingress objects, which are translated to Route objects by the Ingress-to-Route Controller. ( CM-16 ) The cert-manager Operator for Red Hat OpenShift does not support using Azure Active Directory (Azure AD) pod identities to assign a managed identity to a pod. As a workaround, you can use a service principal to assign a managed identity. ( OCPBUGS-8665 ) The cert-manager Operator for Red Hat OpenShift does not support using Google workload identity federation. ( OCPBUGS-9998 ) When uninstalling the cert-manager Operator for Red Hat OpenShift, if you select the Delete all operand instances for this operator checkbox in the OpenShift Container Platform web console, the Operator is not uninstalled properly. As a workaround, do not select this checkbox when uninstalling the cert-manager Operator for Red Hat OpenShift. ( OCPBUGS-9960 ) 8.2.8. Release notes for cert-manager Operator for Red Hat OpenShift 1.7.1-1 (Technology Preview) Issued: 2022-04-11 The following advisory is available for the cert-manager Operator for Red Hat OpenShift 1.7.1-1: RHEA-2022:1273 For more information, see the cert-manager project release notes for v1.7.1 . 8.2.8.1. New features and enhancements This is the initial, Technology Preview release of the cert-manager Operator for Red Hat OpenShift. 8.2.8.2. Known issues Using Route objects is not fully supported. Currently, cert-manager Operator for Red Hat OpenShift integrates with Route objects by creating Ingress objects through the Ingress Controller. ( CM-16 ) 8.3. Installing the cert-manager Operator for Red Hat OpenShift The cert-manager Operator for Red Hat OpenShift is not installed in OpenShift Container Platform by default. 
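If you prefer to install from the command line rather than through the web console procedure that follows, you can create the equivalent Operator Lifecycle Manager (OLM) resources yourself. The following is a minimal sketch, not the documented procedure; it assumes the default cert-manager-operator namespace and the stable-v1 channel, and it assumes that the Operator is published through the redhat-operators catalog source in the openshift-marketplace namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cert-manager-operator
  namespace: cert-manager-operator
spec:
  targetNamespaces:
  - cert-manager-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-cert-manager-operator
  namespace: cert-manager-operator
spec:
  channel: stable-v1                      # latest stable release channel
  name: openshift-cert-manager-operator   # package name in the catalog
  source: redhat-operators                # assumed catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic          # or Manual, to approve updates yourself

Apply the file with oc create -f <file_name> and then verify the installation in the same way as described in the following web console procedure.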
You can install the cert-manager Operator for Red Hat OpenShift by using the web console. 8.3.1. Installing the cert-manager Operator for Red Hat OpenShift using the web console You can use the web console to install the cert-manager Operator for Red Hat OpenShift. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Enter cert-manager Operator for Red Hat OpenShift into the filter box. Select the cert-manager Operator for Red Hat OpenShift and click Install . Note From the cert-manager Operator for Red Hat OpenShift 1.12.0 and later, the z-stream versions of the upstream cert-manager operands, such as the cert-manager controller, CA injector, and Webhook, are decoupled from the version of the cert-manager Operator for Red Hat OpenShift. For example, for the cert-manager Operator for Red Hat OpenShift 1.12.0 , the cert-manager operand version is v1.12.4 . On the Install Operator page: Update the Update channel , if necessary. The channel defaults to stable-v1 , which installs the latest stable release of the cert-manager Operator for Red Hat OpenShift. Choose the Installed Namespace for the Operator. The default Operator namespace is cert-manager-operator . If the cert-manager-operator namespace does not exist, it is created for you. Select an Update approval strategy. The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verification Navigate to Operators Installed Operators . Verify that cert-manager Operator for Red Hat OpenShift is listed with a Status of Succeeded in the cert-manager-operator namespace. Verify that cert-manager pods are up and running by entering the following command: USD oc get pods -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 3m39s cert-manager-cainjector-56cc5f9868-7g9z7 1/1 Running 0 4m5s cert-manager-webhook-d4f79d7f7-9dg9w 1/1 Running 0 4m9s You can use the cert-manager Operator for Red Hat OpenShift only after cert-manager pods are up and running. 8.3.2. Understanding update channels of the cert-manager Operator for Red Hat OpenShift Update channels are the mechanism by which you can declare the version of your cert-manager Operator for Red Hat OpenShift in your cluster. The cert-manager Operator for Red Hat OpenShift offers the following update channels: stable-v1 stable-v1.y 8.3.2.1. stable-v1 channel The stable-v1 channel is the default and suggested channel when installing the cert-manager Operator for Red Hat OpenShift. The stable-v1 channel installs and updates the latest release version of the cert-manager Operator for Red Hat OpenShift. Select the stable-v1 channel if you want to use the latest stable release of the cert-manager Operator for Red Hat OpenShift. The stable-v1 channel offers the following update approval strategies: Automatic If you choose automatic updates for an installed cert-manager Operator for Red Hat OpenShift, when a new version of the cert-manager Operator for Red Hat OpenShift is available in the stable-v1 channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.
Manual If you select manual updates, when a newer version of the cert-manager Operator for Red Hat OpenShift is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the cert-manager Operator for Red Hat OpenShift updated to the new version. 8.3.2.2. stable-v1.y channel The y-stream version of the cert-manager Operator for Red Hat OpenShift installs updates from the stable-v1.y channels such as stable-v1.10 , stable-v1.11 , and stable-v1.12 . Select the stable-v1.y channel if you want to use the y-stream version and stay updated to the z-stream version of the cert-manager Operator for Red Hat OpenShift. The stable-v1.y channel offers the following update approval strategies: Automatic If you choose automatic updates for an installed cert-manager Operator for Red Hat OpenShift, when a new z-stream version of the cert-manager Operator for Red Hat OpenShift is available in the stable-v1.y channel, OLM automatically upgrades the running instance of your Operator without human intervention. Manual If you select manual updates, when a newer version of the cert-manager Operator for Red Hat OpenShift is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the cert-manager Operator for Red Hat OpenShift updated to the new version of the z-stream releases. 8.3.3. Additional resources Adding Operators to a cluster Updating installed Operators 8.4. Configuring an ACME issuer The cert-manager Operator for Red Hat OpenShift supports using Automated Certificate Management Environment (ACME) CA servers, such as Let's Encrypt, to issue certificates. Explicit credentials are configured by specifying the secret details in the Issuer API object. Ambient credentials are extracted from the environment, metadata services, or local files that are not explicitly configured in the Issuer API object. Note The Issuer object is namespace scoped. It can only issue certificates from the same namespace. You can also use the ClusterIssuer object to issue certificates across all namespaces in the cluster. Example YAML file that defines the ClusterIssuer object apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: acme-cluster-issuer spec: acme: ... Note By default, you can use the ClusterIssuer object with ambient credentials. To use the Issuer object with ambient credentials, you must enable the --issuer-ambient-credentials setting for the cert-manager controller. 8.4.1. About ACME issuers The ACME issuer type for the cert-manager Operator for Red Hat OpenShift represents an Automated Certificate Management Environment (ACME) certificate authority (CA) server. ACME CA servers rely on a challenge to verify that a client owns the domain names that the certificate is being requested for. If the challenge is successful, the cert-manager Operator for Red Hat OpenShift can issue the certificate. If the challenge fails, the cert-manager Operator for Red Hat OpenShift does not issue the certificate. Note Private DNS zones are not supported with Let's Encrypt and internet ACME servers. 8.4.1.1. Supported ACME challenge types The cert-manager Operator for Red Hat OpenShift supports the following challenge types for ACME issuers: HTTP-01 With the HTTP-01 challenge type, you provide a computed key at an HTTP URL endpoint in your domain. If the ACME CA server can get the key from the URL, it can validate you as the owner of the domain.
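Whichever challenge type you use, cert-manager records the progress of a validation in intermediate Order and Challenge resources, which are usually the quickest place to look if certificate issuance appears to stall. For example (the namespace and resource name are placeholders):

oc get orders,challenges -n <namespace>
oc describe challenge <challenge_name> -n <namespace>

The state and reason fields in the Challenge status typically indicate why a validation is pending or failing.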
For more information, see HTTP01 in the upstream cert-manager documentation. Note HTTP-01 requires that the Let's Encrypt servers can access the route of the cluster. If an internal or private cluster is behind a proxy, the HTTP-01 validations for certificate issuance fail. The HTTP-01 challenge is restricted to port 80. For more information, see HTTP-01 challenge (Let's Encrypt). DNS-01 With the DNS-01 challenge type, you provide a computed key at a DNS TXT record. If the ACME CA server can get the key by DNS lookup, it can validate you as the owner of the domain. For more information, see DNS01 in the upstream cert-manager documentation. 8.4.1.2. Supported DNS-01 providers The cert-manager Operator for Red Hat OpenShift supports the following DNS-01 providers for ACME issuers: Amazon Route 53 Azure DNS Note The cert-manager Operator for Red Hat OpenShift does not support using Azure Active Directory (Azure AD) pod identities to assign a managed identity to a pod. Google Cloud DNS 8.4.2. Configuring an ACME issuer to solve HTTP-01 challenges You can use cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve HTTP-01 challenges. This procedure uses Let's Encrypt as the ACME CA server. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a service that you want to expose. In this procedure, the service is named sample-workload . Procedure Create an ACME cluster issuer. Create a YAML file that defines the ClusterIssuer object: Example acme-cluster-issuer.yaml file apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-staging 1 spec: acme: preferredChain: "" privateKeySecretRef: name: <secret_for_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - http01: ingress: class: openshift-default 4 1 Provide a name for the cluster issuer. 2 Replace <secret_private_key> with the name of secret to store the ACME account private key in. 3 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 4 Specify the Ingress class. Create the ClusterIssuer object by running the following command: USD oc create -f acme-cluster-issuer.yaml Create an Ingress to expose the service of the user workload. Create a YAML file that defines a Namespace object: Example namespace.yaml file apiVersion: v1 kind: Namespace metadata: name: my-ingress-namespace 1 1 Specify the namespace for the Ingress. Create the Namespace object by running the following command: USD oc create -f namespace.yaml Create a YAML file that defines the Ingress object: Example ingress.yaml file apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: sample-ingress 1 namespace: my-ingress-namespace 2 annotations: cert-manager.io/cluster-issuer: letsencrypt-staging 3 acme.cert-manager.io/http01-ingress-class: openshift-default 4 spec: ingressClassName: openshift-default 5 tls: - hosts: - <hostname> 6 secretName: sample-tls 7 rules: - host: <hostname> 8 http: paths: - path: / pathType: Prefix backend: service: name: sample-workload 9 port: number: 80 1 Specify the name of the Ingress. 2 Specify the namespace that you created for the Ingress. 3 Specify the cluster issuer that you created. 4 Specify the Ingress class. 5 Specify the Ingress class. 6 Replace <hostname> with the Subject Alternative Name to be associated with the certificate. This name is used to add DNS names to the certificate. 7 Specify the secret to store the created certificate in. 
8 Replace <hostname> with the hostname. You can use the <host_name>.<cluster_ingress_domain> syntax to take advantage of the *.<cluster_ingress_domain> wildcard DNS record and serving certificate for the cluster. For example, you might use apps.<cluster_base_domain> . Otherwise, you must ensure that a DNS record exists for the chosen hostname. 9 Specify the name of the service to expose. This example uses a service named sample-workload . Create the Ingress object by running the following command: USD oc create -f ingress.yaml 8.4.3. Configuring an ACME issuer by using explicit credentials for AWS Route53 You can use cert-manager Operator for Red Hat OpenShift to set up an Automated Certificate Management Environment (ACME) issuer to solve DNS-01 challenges by using explicit credentials on AWS. This procedure uses Let's Encrypt as the ACME certificate authority (CA) server and shows how to solve DNS-01 challenges with Amazon Route 53. Prerequisites You must provide the explicit accessKeyID and secretAccessKey credentials. For more information, see Route53 in the upstream cert-manager documentation. Note You can use Amazon Route 53 with explicit credentials in an OpenShift Container Platform cluster that is not running on AWS. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. Optional: Create a namespace for the issuer: USD oc new-project <issuer_namespace> Create a secret to store your AWS credentials in by running the following command: USD oc create secret generic aws-secret --from-literal=awsSecretAccessKey=<aws_secret_access_key> \ 1 -n my-issuer-namespace 1 Replace <aws_secret_access_key> with your AWS secret access key. Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: "<email_address>" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: accessKeyID: <aws_key_id> 6 hostedZoneID: <hosted_zone_id> 7 region: <region_name> 8 secretAccessKeySecretRef: name: "aws-secret" 9 key: "awsSecretAccessKey" 10 1 Provide a name for the issuer. 2 Specify the namespace that you created for the issuer. 3 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 4 Replace <email_address> with your email address. 5 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 6 Replace <aws_key_id> with your AWS key ID. 
7 Replace <hosted_zone_id> with your hosted zone ID. 8 Replace <region_name> with the AWS region name. For example, us-east-1 . 9 Specify the name of the secret you created. 10 Specify the key in the secret you created that stores your AWS secret access key. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 8.4.4. Configuring an ACME issuer by using ambient credentials on AWS You can use cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve DNS-01 challenges by using ambient credentials on AWS. This procedure uses Let's Encrypt as the ACME CA server and shows how to solve DNS-01 challenges with Amazon Route 53. Prerequisites If your cluster is configured to use the AWS Security Token Service (STS), you followed the instructions from the Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift for the AWS Security Token Service cluster section. If your cluster does not use the AWS STS, you followed the instructions from the Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift on AWS section. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. Optional: Create a namespace for the issuer: USD oc new-project <issuer_namespace> Modify the CertManager resource to add the --issuer-ambient-credentials argument: USD oc patch certmanager/cluster \ --type=merge \ -p='{"spec":{"controllerConfig":{"overrideArgs":["--issuer-ambient-credentials"]}}}' Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: "<email_address>" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: hostedZoneID: <hosted_zone_id> 6 region: us-east-1 1 Provide a name for the issuer. 2 Specify the namespace that you created for the issuer. 3 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 4 Replace <email_address> with your email address. 5 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 6 Replace <hosted_zone_id> with your hosted zone ID. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 8.4.5. 
Configuring an ACME issuer by using explicit credentials for GCP Cloud DNS You can use the cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve DNS-01 challenges by using explicit credentials on GCP. This procedure uses Let's Encrypt as the ACME CA server and shows how to solve DNS-01 challenges with Google CloudDNS. Prerequisites You have set up Google Cloud service account with a desired role for Google CloudDNS. For more information, see Google CloudDNS in the upstream cert-manager documentation. Note You can use Google CloudDNS with explicit credentials in an OpenShift Container Platform cluster that is not running on GCP. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. Optional: Create a namespace for the issuer: USD oc new-project my-issuer-namespace Create a secret to store your GCP credentials by running the following command: USD oc create secret generic clouddns-dns01-solver-svc-acct --from-file=service_account.json=<path/to/gcp_service_account.json> -n my-issuer-namespace Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: "" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: cloudDNS: project: <project_id> 5 serviceAccountSecretRef: name: clouddns-dns01-solver-svc-acct 6 key: service_account.json 7 1 Provide a name for the issuer. 2 Replace <issuer_namespace> with your issuer namespace. 3 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 4 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 5 Replace <project_id> with the name of the GCP project that contains the Cloud DNS zone. 6 Specify the name of the secret you created. 7 Specify the key in the secret you created that stores your GCP secret access key. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 8.4.6. Configuring an ACME issuer by using ambient credentials on GCP You can use the cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve DNS-01 challenges by using ambient credentials on GCP. This procedure uses Let's Encrypt as the ACME CA server and shows how to solve DNS-01 challenges with Google CloudDNS. 
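Whichever DNS-01 provider and credential mode you use, it can be helpful to confirm that the challenge TXT record is resolvable while a challenge is pending, because the ACME server queries a record named _acme-challenge under the domain being validated. A quick check from any host (the domain is a placeholder):

dig +short TXT _acme-challenge.<domain> @1.1.1.1

If no value is returned after the solver has presented the record, review the provider credentials and the hosted zone configuration before suspecting a cert-manager problem.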
Prerequisites If your cluster is configured to use GCP Workload Identity, you followed the instructions from the Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift with GCP Workload Identity section. If your cluster does not use GCP Workload Identity, you followed the instructions from the Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift on GCP section. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. Optional: Create a namespace for the issuer: USD oc new-project <issuer_namespace> Modify the CertManager resource to add the --issuer-ambient-credentials argument: USD oc patch certmanager/cluster \ --type=merge \ -p='{"spec":{"controllerConfig":{"overrideArgs":["--issuer-ambient-credentials"]}}}' Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> spec: acme: preferredChain: "" privateKeySecretRef: name: <secret_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - dns01: cloudDNS: project: <gcp_project_id> 4 1 Provide a name for the issuer. 2 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 3 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 4 Replace <gcp_project_id> with the name of the GCP project that contains the Cloud DNS zone. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 8.4.7. Configuring an ACME issuer by using explicit credentials for Microsoft Azure DNS You can use cert-manager Operator for Red Hat OpenShift to set up an ACME issuer to solve DNS-01 challenges by using explicit credentials on Microsoft Azure. This procedure uses Let's Encrypt as the ACME CA server and shows how to solve DNS-01 challenges with Azure DNS. Prerequisites You have set up a service principal with desired role for Azure DNS. For more information, see Azure DNS in the upstream cert-manager documentation. Note You can follow this procedure for an OpenShift Container Platform cluster that is not running on Microsoft Azure. Procedure Optional: Override the nameserver settings for the DNS-01 self check. This step is required only when the target public-hosted zone overlaps with the cluster's default private-hosted zone. 
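If you are not sure whether such an overlap exists, one way to check is to look at the cluster DNS configuration and see whether a private zone is defined; for example:

oc get dnses.config.openshift.io/cluster -o jsonpath='{.spec.privateZone}'

If the output is empty, the cluster was not installed with a private hosted zone and you can usually skip this override.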
Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3 1 Add the spec.controllerConfig section. 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. You must use a 1.1.1.1:53 value to avoid the public and private zones overlapping. Save the file to apply the changes. Optional: Create a namespace for the issuer: USD oc new-project my-issuer-namespace Create a secret to store your Azure credentials in by running the following command: USD oc create secret generic <secret_name> --from-literal=<azure_secret_access_key_name>=<azure_secret_access_key_value> \ 1 2 3 -n my-issuer-namespace 1 Replace <secret_name> with your secret name. 2 Replace <azure_secret_access_key_name> with your Azure secret access key name. 3 Replace <azure_secret_access_key_value> with your Azure secret key. Create an issuer: Create a YAML file that defines the Issuer object: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme-dns01-azuredns-issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: "" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: azureDNS: clientID: <azure_client_id> 5 clientSecretSecretRef: name: <secret_name> 6 key: <azure_secret_access_key_name> 7 subscriptionID: <azure_subscription_id> 8 tenantID: <azure_tenant_id> 9 resourceGroupName: <azure_dns_zone_resource_group> 10 hostedZoneName: <azure_dns_zone> 11 environment: AzurePublicCloud 1 Provide a name for the issuer. 2 Replace <issuer_namespace> with your issuer namespace. 3 Replace <secret_private_key> with the name of the secret to store the ACME account private key in. 4 Specify the URL to access the ACME server's directory endpoint. This example uses the Let's Encrypt staging environment. 5 Replace <azure_client_id> with your Azure client ID. 6 Replace <secret_name> with a name of the client secret. 7 Replace <azure_secret_access_key_name> with the client secret key name. 8 Replace <azure_subscription_id> with your Azure subscription ID. 9 Replace <azure_tenant_id> with your Azure tenant ID. 10 Replace <azure_dns_zone_resource_group> with the name of the Azure DNS zone resource group. 11 Replace <azure_dns_zone> with the name of Azure DNS zone. Create the Issuer object by running the following command: USD oc create -f issuer.yaml 8.4.8. Additional resources Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift for the AWS Security Token Service cluster Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift on AWS Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift with GCP Workload Identity Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift on GCP 8.5. 
Configuring certificates with an issuer By using the cert-manager Operator for Red Hat OpenShift, you can manage certificates, handling tasks such as renewal and issuance, for workloads within the cluster, as well as components interacting externally to the cluster. 8.5.1. Creating certificates for user workloads Prerequisites You have access to the cluster with cluster-admin privileges. You have installed the cert-manager Operator for Red Hat OpenShift. Procedure Create an issuer. For more information, see "Configuring an issuer" in the "Additional resources" section. Create a certificate: Create a YAML file, for example, certificate.yaml , that defines the Certificate object: Example certificate.yaml file apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: <issuer_namespace> 2 spec: isCA: false commonName: '<common_name>' 3 secretName: <secret_name> 4 dnsNames: - "<domain_name>" 5 issuerRef: name: <issuer_name> 6 kind: Issuer 1 Provide a name for the certificate. 2 Specify the namespace of the issuer. 3 Specify the common name (CN). 4 Specify the name of the secret to create that contains the certificate. 5 Specify the domain name. 6 Specify the name of the issuer. Create the Certificate object by running the following command: USD oc create -f certificate.yaml Verification Verify that the certificate is created and ready to use by running the following command: USD oc get certificate -w -n <issuer_namespace> Once certificate is in Ready status, workloads on your cluster can start using the generated certificate secret. 8.5.2. Additional resources Configuring an issuer Supported issuer types Configuring an ACME issuer 8.6. Enabling monitoring for the cert-manager Operator for Red Hat OpenShift You can expose controller metrics for the cert-manager Operator for Red Hat OpenShift in the format provided by the Prometheus Operator. 8.6.1. Enabling monitoring by using a service monitor for the cert-manager Operator for Red Hat OpenShift You can enable monitoring and metrics collection for the cert-manager Operator for Red Hat OpenShift by using a service monitor to perform the custom metrics scraping. Prerequisites You have access to the cluster with cluster-admin privileges. The cert-manager Operator for Red Hat OpenShift is installed. 
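Before you create the service monitor in the following procedure, you can confirm that the cert-manager controller Service exposes the metrics port that the ServiceMonitor selects (named tcp-prometheus-servicemonitor and serving on 9402 by default); for example:

oc get service cert-manager -n cert-manager -o jsonpath='{.spec.ports}'

If the port name differs in your installation, adjust the ServiceMonitor endpoint in the example accordingly.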
Procedure Add the label to enable cluster monitoring by running the following command: USD oc label namespace cert-manager openshift.io/cluster-monitoring=true Create a service monitor: Create a YAML file that defines the Role , RoleBinding , and ServiceMonitor objects: Example monitoring.yaml file apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: prometheus-k8s namespace: cert-manager rules: - apiGroups: - "" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: prometheus-k8s namespace: cert-manager roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: cert-manager app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager name: cert-manager namespace: cert-manager spec: endpoints: - interval: 30s port: tcp-prometheus-servicemonitor scheme: http selector: matchLabels: app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager Create the Role , RoleBinding , and ServiceMonitor objects by running the following command: USD oc create -f monitoring.yaml Additional resources Setting up metrics collection for user-defined projects 8.6.2. Querying metrics for the cert-manager Operator for Red Hat OpenShift After you have enabled monitoring for the cert-manager Operator for Red Hat OpenShift, you can query its metrics by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the cert-manager Operator for Red Hat OpenShift. You have enabled monitoring and metrics collection for the cert-manager Operator for Red Hat OpenShift. Procedure From the OpenShift Container Platform web console, navigate to Observe Metrics . Add a query by using one of the following formats: Specify the endpoints: {instance="<endpoint>"} 1 1 Replace <endpoint> with the value of the endpoint for the cert-manager service. You can find the endpoint value by running the following command: oc describe service cert-manager -n cert-manager . Specify the tcp-prometheus-servicemonitor port: {endpoint="tcp-prometheus-servicemonitor"} 8.7. Configuring the egress proxy for the cert-manager Operator for Red Hat OpenShift If a cluster-wide egress proxy is configured in OpenShift Container Platform, Operator Lifecycle Manager (OLM) automatically configures Operators that it manages with the cluster-wide proxy. OLM automatically updates all of the Operator's deployments with the HTTP_PROXY , HTTPS_PROXY , NO_PROXY environment variables. You can inject any CA certificates that are required for proxying HTTPS connections into the cert-manager Operator for Red Hat OpenShift. 8.7.1. Injecting a custom CA certificate for the cert-manager Operator for Red Hat OpenShift If your OpenShift Container Platform cluster has the cluster-wide proxy enabled, you can inject any CA certificates that are required for proxying HTTPS connections into the cert-manager Operator for Red Hat OpenShift. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have enabled the cluster-wide proxy for OpenShift Container Platform. 
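Before injecting the bundle, you can confirm that the cluster-wide proxy is configured and check whether it already references an additional trust bundle; for example:

oc get proxy/cluster -o jsonpath='{.spec.httpProxy}{"\n"}{.spec.trustedCA.name}{"\n"}'

A non-empty trustedCA name indicates that a user-provided CA bundle is included in the trust bundle that the following procedure injects into the trusted-ca config map.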
Procedure Create a config map in the cert-manager namespace by running the following command: USD oc create configmap trusted-ca -n cert-manager Inject the CA bundle that is trusted by OpenShift Container Platform into the config map by running the following command: USD oc label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true -n cert-manager Update the deployment for the cert-manager Operator for Red Hat OpenShift to use the config map by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{"spec":{"config":{"env":[{"name":"TRUSTED_CA_CONFIGMAP_NAME","value":"trusted-ca"}]}}}' Verification Verify that the deployments have finished rolling out by running the following command: USD oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator && \ oc rollout status deployment/cert-manager -n cert-manager && \ oc rollout status deployment/cert-manager-webhook -n cert-manager && \ oc rollout status deployment/cert-manager-cainjector -n cert-manager Example output deployment "cert-manager-operator-controller-manager" successfully rolled out deployment "cert-manager" successfully rolled out deployment "cert-manager-webhook" successfully rolled out deployment "cert-manager-cainjector" successfully rolled out Verify that the CA bundle was mounted as a volume by running the following command: USD oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.'containers[0].volumeMounts'} Example output [{"mountPath":"/etc/pki/tls/certs/cert-manager-tls-ca-bundle.crt","name":"trusted-ca","subPath":"ca-bundle.crt"}] Verify that the source of the CA bundle is the trusted-ca config map by running the following command: USD oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.volumes} Example output [{"configMap":{"defaultMode":420,"name":"trusted-ca"},"name":"trusted-ca"}] 8.7.2. Additional resources Configuring proxy support in Operator Lifecycle Manager 8.8. Customizing cert-manager Operator API fields You can customize the cert-manager Operator for Red Hat OpenShift API fields by overriding environment variables and arguments. Warning To override unsupported arguments, you can add spec.unsupportedConfigOverrides section in the CertManager resource, but using spec.unsupportedConfigOverrides is unsupported. 8.8.1. Customizing cert-manager by overriding environment variables from the cert-manager Operator API You can override the supported environment variables for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. Procedure Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: overrideEnv: - name: HTTP_PROXY value: http://<proxy_url> 1 - name: HTTPS_PROXY value: https://<proxy_url> 2 - name: NO_PROXY value: <ignore_proxy_domains> 3 1 2 Replace <proxy_url> with the proxy server URL. 3 Replace <ignore_proxy_domains> with a comma separated list of domains. These domains are ignored by the proxy server. Save your changes and quit the text editor to apply your changes. 
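If you prefer a non-interactive change, you can apply the same override with a patch instead of an editor. A minimal sketch that sets only HTTP_PROXY (the proxy URL is a placeholder):

oc patch certmanager/cluster --type=merge \
  -p '{"spec":{"controllerConfig":{"overrideEnv":[{"name":"HTTP_PROXY","value":"http://<proxy_url>"}]}}}'

Because a merge patch replaces the whole overrideEnv list, include every variable that you want to keep in a single patch.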
Verification Verify that the cert-manager controller pod is redeployed by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s Verify that environment variables are updated for the cert-manager pod by running the following command: USD oc get pod <redeployed_cert-manager_controller_pod> -n cert-manager -o yaml Example output env: ... - name: HTTP_PROXY value: http://<PROXY_URL> - name: HTTPS_PROXY value: https://<PROXY_URL> - name: NO_PROXY value: <IGNORE_PROXY_DOMAINS> 8.8.2. Customizing cert-manager by overriding arguments from the cert-manager Operator API You can override the supported arguments for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. Procedure Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster ... spec: ... controllerConfig: overrideArgs: - '--dns01-recursive-nameservers=<host>:<port>' 1 - '--dns01-recursive-nameservers-only' 2 - '--acme-http01-solver-nameservers=<host>:<port>' 3 - '--v=<verbosity_level>' 4 - '--metrics-listen-address=<host>:<port>' 5 - '--issuer-ambient-credentials' 6 webhookConfig: overrideArgs: - '--v=4' 7 cainjectorConfig: overrideArgs: - '--v=2' 8 1 Provide a comma-separated list of <host>:<port> nameservers to query for the DNS-01 self check. For example, --dns01-recursive-nameservers=1.1.1.1:53 . 2 Specify to only use recursive nameservers instead of checking the authoritative nameservers associated with that domain. 3 Provide a comma-separated list of <host>:<port> nameservers to query for the Automated Certificate Management Environment (ACME) HTTP01 self check. For example, --acme-http01-solver-nameservers=1.1.1.1:53 . 4 7 8 Specify to set the log level verbosity to determine the verbosity of log messages. 5 Specify the host and port for the metrics endpoint. The default value is --metrics-listen-address=0.0.0.0:9402 . 6 You must use the --issuer-ambient-credentials argument when configuring an ACME Issuer to solve DNS-01 challenges by using ambient credentials. Save your changes and quit the text editor to apply your changes. Verification Verify that arguments are updated for cert-manager pods by running the following command: USD oc get pods -n cert-manager -o yaml Example output ... metadata: name: cert-manager-6d4b5d4c97-kldwl namespace: cert-manager ... spec: containers: - args: - --acme-http01-solver-nameservers=1.1.1.1:53 - --cluster-resource-namespace=USD(POD_NAMESPACE) - --dns01-recursive-nameservers=1.1.1.1:53 - --dns01-recursive-nameservers-only - --leader-election-namespace=kube-system - --max-concurrent-challenges=60 - --metrics-listen-address=0.0.0.0:9042 - --v=6 ... metadata: name: cert-manager-cainjector-866c4fd758-ltxxj namespace: cert-manager ... spec: containers: - args: - --leader-election-namespace=kube-system - --v=2 ... metadata: name: cert-manager-webhook-6d48f88495-c88gd namespace: cert-manager ... spec: containers: - args: ... - --v=4 8.8.3. 
Deleting a TLS secret automatically upon Certificate removal You can enable the --enable-certificate-owner-ref flag for the cert-manager Operator for Red Hat OpenShift by adding a spec.controllerConfig section in the CertManager resource. The --enable-certificate-owner-ref flag sets the certificate resource as an owner of the secret where the TLS certificate is stored. Warning If you uninstall the cert-manager Operator for Red Hat OpenShift or delete certificate resources from the cluster, the secret is deleted automatically. This might cause network connectivity issues depending upon where the certificate TLS secret is being used. Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. You have installed version 1.12.0 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Check that the Certificate object and its secret are available by running the following command: USD oc get certificate Example output NAME READY SECRET AGE certificate-from-clusterissuer-route53-ambient True certificate-from-clusterissuer-route53-ambient 8h Edit the CertManager resource by running the following command: USD oc edit certmanager cluster Add a spec.controllerConfig section with the following override arguments: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster # ... spec: # ... controllerConfig: overrideArgs: - '--enable-certificate-owner-ref' Save your changes and quit the text editor to apply your changes. Verification Verify that the --enable-certificate-owner-ref flag is updated for cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager -o yaml Example output # ... metadata: name: cert-manager-6e4b4d7d97-zmdnb namespace: cert-manager # ... spec: containers: - args: - --enable-certificate-owner-ref 8.8.4. Overriding CPU and memory limits for the cert-manager components After installing the cert-manager Operator for Red Hat OpenShift, you can configure the CPU and memory limits from the cert-manager Operator for Red Hat OpenShift API for the cert-manager components such as cert-manager controller, CA injector, and Webhook. Prerequisites You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. You have installed version 1.12.0 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Check that the deployments of the cert-manager controller, CA injector, and Webhook are available by entering the following command: USD oc get deployment -n cert-manager Example output NAME READY UP-TO-DATE AVAILABLE AGE cert-manager 1/1 1 1 53m cert-manager-cainjector 1/1 1 1 53m cert-manager-webhook 1/1 1 1 53m Before setting the CPU and memory limit, check the existing configuration for the cert-manager controller, CA injector, and Webhook by entering the following command: USD oc get deployment -n cert-manager -o yaml Example output # ... metadata: name: cert-manager namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-controller resources: {} 1 # ... metadata: name: cert-manager-cainjector namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-cainjector resources: {} 2 # ... metadata: name: cert-manager-webhook namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-webhook resources: {} 3 # ... 1 2 3 The spec.resources field is empty by default. 
The cert-manager components do not have CPU and memory limits. To configure the CPU and memory limits for the cert-manager controller, CA injector, and Webhook, enter the following command: USD oc patch certmanager.operator cluster --type=merge -p=" spec: controllerConfig: overrideResources: limits: 1 cpu: 200m 2 memory: 64Mi 3 requests: 4 cpu: 10m 5 memory: 16Mi 6 webhookConfig: overrideResources: limits: 7 cpu: 200m 8 memory: 64Mi 9 requests: 10 cpu: 10m 11 memory: 16Mi 12 cainjectorConfig: overrideResources: limits: 13 cpu: 200m 14 memory: 64Mi 15 requests: 16 cpu: 10m 17 memory: 16Mi 18 " 1 Defines the maximum amount of CPU and memory that a single container in a cert-manager controller pod can request. 2 5 You can specify the CPU limit that a cert-manager controller pod can request. The default value is 10m . 3 6 You can specify the memory limit that a cert-manager controller pod can request. The default value is 32Mi . 4 Defines the amount of CPU and memory set by the scheduler for the cert-manager controller pod. 7 Defines the maximum amount of CPU and memory that a single container in a CA injector pod can request. 8 11 You can specify the CPU limit that a CA injector pod can request. The default value is 10m . 9 12 You can specify the memory limit that a CA injector pod can request. The default value is 32Mi . 10 Defines the amount of CPU and memory set by the scheduler for the CA injector pod. 13 Defines the maximum amount of CPU and memory that a single container in a Webhook pod can request. 14 17 You can specify the CPU limit that a Webhook pod can request. The default value is 10m . 15 18 You can specify the memory limit that a Webhook pod can request. The default value is 32Mi . 16 Defines the amount of CPU and memory set by the scheduler for the Webhook pod. Example output certmanager.operator.openshift.io/cluster patched Verification Verify that the CPU and memory limits are updated for the cert-manager components: USD oc get deployment -n cert-manager -o yaml Example output # ... metadata: name: cert-manager namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-controller resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi # ... metadata: name: cert-manager-cainjector namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-cainjector resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi # ... metadata: name: cert-manager-webhook namespace: cert-manager # ... spec: template: spec: containers: - name: cert-manager-webhook resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi # ... 8.9. Authenticating the cert-manager Operator for Red Hat OpenShift with AWS Security Token Service You can authenticate the cert-manager Operator for Red Hat OpenShift on the AWS Security Token Service (STS) cluster. You can configure cloud credentials for the cert-manager Operator for Red Hat OpenShift by using the ccoctl binary. 8.9.1. Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift for the AWS Security Token Service cluster To configure the cloud credentials for the cert-manager Operator for Red Hat OpenShift on the AWS Security Token Service (STS) cluster, you must generate the cloud credentials manually and apply them on the cluster by using the ccoctl binary. Prerequisites You have extracted and prepared the ccoctl binary.
You have configured an OpenShift Container Platform cluster with AWS STS by using the Cloud Credential Operator in manual mode. Procedure Create a directory to store a CredentialsRequest resource YAML file by running the following command: USD mkdir credentials-request Create a CredentialsRequest resource YAML file under the credentials-request directory, such as, sample-credential-request.yaml , by applying the following yaml: apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "route53:GetChange" effect: Allow resource: "arn:aws:route53:::change/*" - action: - "route53:ChangeResourceRecordSets" - "route53:ListResourceRecordSets" effect: Allow resource: "arn:aws:route53:::hostedzone/*" - action: - "route53:ListHostedZonesByName" effect: Allow resource: "*" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager Use the ccoctl tool to process CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name <user_defined_name> --region=<aws_region> \ --credentials-requests-dir=<path_to_credrequests_dir> \ --identity-provider-arn <oidc_provider_arn> --output-dir=<path_to_output_dir> Example output 2023/05/15 18:10:34 Role arn:aws:iam::XXXXXXXXXXXX:role/<user_defined_name>-cert-manager-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: <path_to_output_dir>/manifests/cert-manager-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role <user_defined_name>-cert-manager-aws-creds Copy the <aws_role_arn> from the output to use in the step. For example, "arn:aws:iam::XXXXXXXXXXXX:role/<user_defined_name>-cert-manager-aws-creds" Add the eks.amazonaws.com/role-arn="<aws_role_arn>" annotation to the service account by running the following command: USD oc -n cert-manager annotate serviceaccount cert-manager eks.amazonaws.com/role-arn="<aws_role_arn>" To create a new pod, delete the existing cert-manager controller pod by running the following command: USD oc delete pods -l app.kubernetes.io/name=cert-manager -n cert-manager The AWS credentials are applied to a new cert-manager controller pod within a minute. Verification Get the name of the updated cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s Verify that AWS credentials are updated by running the following command: USD oc set env -n cert-manager po/<cert_manager_controller_pod_name> --list Example output # pods/cert-manager-57f9555c54-vbcpg, container cert-manager-controller # POD_NAMESPACE from field path metadata.namespace AWS_ROLE_ARN=XXXXXXXXXXXX AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token 8.9.2. Additional resources Configuring the Cloud Credential Operator utility 8.10. Configuring log levels for cert-manager and the cert-manager Operator for Red Hat OpenShift To troubleshoot issues with the cert-manager components and the cert-manager Operator for Red Hat OpenShift, you can configure the log level verbosity. Note To use different log levels for different cert-manager components, see Customizing cert-manager Operator API fields . 8.10.1. 
Setting a log level for cert-manager You can set a log level for cert-manager to determine the verbosity of log messages. Prerequisites You have access to the cluster with cluster-admin privileges. You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Edit the CertManager resource by running the following command: USD oc edit certmanager.operator cluster Set the log level value by editing the spec.logLevel section: apiVersion: operator.openshift.io/v1alpha1 kind: CertManager ... spec: logLevel: Normal 1 1 The default logLevel is Normal . Replace Normal with the desired log level value. The valid log level values for the CertManager resource are Normal , Debug , Trace , and TraceAll . To audit logs and perform common operations when everything is fine, set logLevel to Normal . To troubleshoot a minor issue by viewing verbose logs, set logLevel to Debug . To troubleshoot a major issue by viewing more verbose logs, you can set logLevel to Trace . To troubleshoot serious issues, set logLevel to TraceAll . Note TraceAll generates a huge amount of logs. After setting logLevel to TraceAll , you might experience performance issues. Save your changes and quit the text editor to apply them. After applying the changes, the verbosity level for the cert-manager components (controller, CA injector, and webhook) is updated. 8.10.2. Setting a log level for the cert-manager Operator for Red Hat OpenShift You can set a log level for the cert-manager Operator for Red Hat OpenShift to determine the verbosity of the operator log messages. Prerequisites You have access to the cluster with cluster-admin privileges. You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. Procedure Update the subscription object for cert-manager Operator for Red Hat OpenShift to provide the verbosity level for the operator logs by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{"spec":{"config":{"env":[{"name":"OPERATOR_LOG_LEVEL","value":"v"}]}}}' 1 1 Replace v with the desired log level number. The valid values for v range from 1 to 10 . The default value is 2 . Verification The cert-manager Operator pod is redeployed. Verify that the log level of the cert-manager Operator for Red Hat OpenShift is updated by running the following command: USD oc set env deploy/cert-manager-operator-controller-manager -n cert-manager-operator --list | grep -e OPERATOR_LOG_LEVEL -e container Example output # deployments/cert-manager-operator-controller-manager, container kube-rbac-proxy OPERATOR_LOG_LEVEL=9 # deployments/cert-manager-operator-controller-manager, container cert-manager-operator OPERATOR_LOG_LEVEL=9 Verify that the log level of the cert-manager Operator for Red Hat OpenShift is updated by running the oc logs command: USD oc logs deploy/cert-manager-operator-controller-manager -n cert-manager-operator 8.10.3. Additional resources Customizing cert-manager Operator API fields 8.11. Authenticating the cert-manager Operator for Red Hat OpenShift on AWS You can configure the cloud credentials for the cert-manager Operator for Red Hat OpenShift on the AWS cluster. The cloud credentials are generated by the Cloud Credential Operator. 8.11.1.
Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift on AWS To configure the cloud credentials for the cert-manager Operator for Red Hat OpenShift on the AWS cluster, you must create a CredentialsRequest object and allow the Cloud Credential Operator to generate the cloud credentials secret. Prerequisites You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. You have configured the Cloud Credential Operator to operate in mint or passthrough mode. Procedure Create a CredentialsRequest resource YAML file, for example, sample-credential-request.yaml , as follows: apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "route53:GetChange" effect: Allow resource: "arn:aws:route53:::change/*" - action: - "route53:ChangeResourceRecordSets" - "route53:ListResourceRecordSets" effect: Allow resource: "arn:aws:route53:::hostedzone/*" - action: - "route53:ListHostedZonesByName" effect: Allow resource: "*" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager Create a CredentialsRequest resource by running the following command: USD oc create -f sample-credential-request.yaml Update the subscription object for cert-manager Operator for Red Hat OpenShift by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{"spec":{"config":{"env":[{"name":"CLOUD_CREDENTIALS_SECRET_NAME","value":"aws-creds"}]}}}' Verification Get the name of the redeployed cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s Verify that the cert-manager controller pod is updated with AWS credential volumes that are mounted under the path specified in mountPath by running the following command: USD oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml Example output ... spec: containers: - args: ... - mountPath: /.aws name: cloud-credentials ... volumes: ... - name: cloud-credentials secret: ... secretName: aws-creds 8.12. Authenticating the cert-manager Operator for Red Hat OpenShift with GCP Workload Identity You can authenticate the cert-manager Operator for Red Hat OpenShift on the GCP Workload Identity cluster by using the cloud credentials. You can configure the cloud credentials by using the ccoctl binary. 8.12.1. Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift with GCP Workload Identity Generate the cloud credentials for the cert-manager Operator for Red Hat OpenShift by using the ccoctl binary. Then, apply them to the GCP Workload Identity cluster. Prerequisites You have extracted and prepared the ccoctl binary. You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. You have configured an OpenShift Container Platform cluster with GCP Workload Identity by using the Cloud Credential Operator in manual mode.
Procedure Create a directory to store a CredentialsRequest resource YAML file by running the following command: USD mkdir credentials-request In the credentials-request directory, create a YAML file that contains the following CredentialsRequest manifest: apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager Note The dns.admin role provides admin privileges to the service account for managing Google Cloud DNS resources. To ensure that the cert-manager runs with the service account that has the least privilege, you can create a custom role with the following permissions: dns.resourceRecordSets.* dns.changes.* dns.managedZones.list Use the ccoctl tool to process CredentialsRequest objects by running the following command: USD ccoctl gcp create-service-accounts \ --name <user_defined_name> --output-dir=<path_to_output_dir> \ --credentials-requests-dir=<path_to_credrequests_dir> \ --workload-identity-pool <workload_identity_pool> \ --workload-identity-provider <workload_identity_provider> \ --project <gcp_project_id> Example command USD ccoctl gcp create-service-accounts \ --name abcde-20230525-4bac2781 --output-dir=/home/outputdir \ --credentials-requests-dir=/home/credentials-requests \ --workload-identity-pool abcde-20230525-4bac2781 \ --workload-identity-provider abcde-20230525-4bac2781 \ --project openshift-gcp-devel Apply the secrets generated in the manifests directory of your cluster by running the following command: USD ls <path_to_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {} Update the subscription object for cert-manager Operator for Red Hat OpenShift by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{"spec":{"config":{"env":[{"name":"CLOUD_CREDENTIALS_SECRET_NAME","value":"gcp-credentials"}]}}}' Verification Get the name of the redeployed cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s Verify that the cert-manager controller pod is updated with GCP workload identity credential volumes that are mounted under the path specified in mountPath by running the following command: USD oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml Example output spec: containers: - args: ... volumeMounts: - mountPath: /var/run/secrets/openshift/serviceaccount name: bound-sa-token ... - mountPath: /.config/gcloud name: cloud-credentials ... volumes: - name: bound-sa-token projected: ... sources: - serviceAccountToken: audience: openshift ... path: token - name: cloud-credentials secret: ... items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials 8.12.2. Additional resources Configuring the Cloud Credential Operator utility Configuring an OpenShift Container Platform cluster by using the manual mode with GCP Workload Identity 8.13. Authenticating the cert-manager Operator for Red Hat OpenShift on GCP You can configure cloud credentials for the cert-manager Operator for Red Hat OpenShift on a GCP cluster. 
The cloud credentials are generated by the Cloud Credential Operator. 8.13.1. Configuring cloud credentials for the cert-manager Operator for Red Hat OpenShift on GCP To configure the cloud credentials for the cert-manager Operator for Red Hat OpenShift on a GCP cluster you must create a CredentialsRequest object, and allow the Cloud Credential Operator to generate the cloud credentials secret. Prerequisites You have installed version 1.11.1 or later of the cert-manager Operator for Red Hat OpenShift. You have configured the Cloud Credential Operator to operate in mint or passthrough mode. Procedure Create a CredentialsRequest resource YAML file, such as, sample-credential-request.yaml by applying the following yaml: apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager Create a CredentialsRequest resource by running the following command: USD oc create -f sample-credential-request.yaml Update the subscription object for cert-manager Operator for Red Hat OpenShift by running the following command: USD oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{"spec":{"config":{"env":[{"name":"CLOUD_CREDENTIALS_SECRET_NAME","value":"gcp-credentials"}]}}}' Verification Get the name of the redeployed cert-manager controller pod by running the following command: USD oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager Example output NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s Verify that the cert-manager controller pod is updated with GCP workload identity credential volumes that are mounted under the path specified in mountPath by running the following command: USD oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml Example output spec: containers: - args: ... volumeMounts: ... - mountPath: /.config/gcloud name: cloud-credentials .... volumes: ... - name: cloud-credentials secret: ... items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials 8.14. Uninstalling the cert-manager Operator for Red Hat OpenShift You can remove the cert-manager Operator for Red Hat OpenShift from OpenShift Container Platform by uninstalling the Operator and removing its related resources. 8.14.1. Uninstalling the cert-manager Operator for Red Hat OpenShift You can uninstall the cert-manager Operator for Red Hat OpenShift by using the web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. The cert-manager Operator for Red Hat OpenShift is installed. Procedure Log in to the OpenShift Container Platform web console. Uninstall the cert-manager Operator for Red Hat OpenShift Operator. Navigate to Operators Installed Operators . Click the Options menu to the cert-manager Operator for Red Hat OpenShift entry and click Uninstall Operator . In the confirmation dialog, click Uninstall . 8.14.2. Removing cert-manager Operator for Red Hat OpenShift resources Once you have uninstalled the cert-manager Operator for Red Hat OpenShift, you have the option to eliminate its associated resources from your cluster. 
Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Remove the deployments of the cert-manager components, such as cert-manager , cainjector , and webhook , present in the cert-manager namespace. Click the Project drop-down menu to see a list of all available projects, and select the cert-manager project. Navigate to Workloads Deployments . Select the deployment that you want to delete. Click the Actions drop-down menu, and select Delete Deployment to see a confirmation dialog box. Click Delete to delete the deployment. Alternatively, delete deployments of the cert-manager components such as cert-manager , cainjector and webhook present in the cert-manager namespace by using the command-line interface (CLI). USD oc delete deployment -n cert-manager -l app.kubernetes.io/instance=cert-manager Optional: Remove the custom resource definitions (CRDs) that were installed by the cert-manager Operator for Red Hat OpenShift: Navigate to Administration CustomResourceDefinitions . Enter certmanager in the Name field to filter the CRDs. Click the Options menu to each of the following CRDs, and select Delete Custom Resource Definition : Certificate CertificateRequest CertManager ( operator.openshift.io ) Challenge ClusterIssuer Issuer Order Optional: Remove the cert-manager-operator namespace. Navigate to Administration Namespaces . Click the Options menu to the cert-manager-operator and select Delete Namespace . In the confirmation dialog, enter cert-manager-operator in the field and click Delete .
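If you prefer to finish the cleanup from the command line rather than the web console, you can also delete the Operator namespace with oc ; the namespace name below is the default one used in this procedure:
oc delete namespace cert-manager-operator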
[ "oc get pods -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 3m39s cert-manager-cainjector-56cc5f9868-7g9z7 1/1 Running 0 4m5s cert-manager-webhook-d4f79d7f7-9dg9w 1/1 Running 0 4m9s", "apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: acme-cluster-issuer spec: acme:", "apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-staging 1 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_for_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - http01: ingress: class: openshift-default 4", "oc create -f acme-cluster-issuer.yaml", "apiVersion: v1 kind: Namespace metadata: name: my-ingress-namespace 1", "oc create -f namespace.yaml", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: sample-ingress 1 namespace: my-ingress-namespace 2 annotations: cert-manager.io/cluster-issuer: letsencrypt-staging 3 acme.cert-manager.io/http01-ingress-class: openshift-default 4 spec: ingressClassName: openshift-default 5 tls: - hosts: - <hostname> 6 secretName: sample-tls 7 rules: - host: <hostname> 8 http: paths: - path: / pathType: Prefix backend: service: name: sample-workload 9 port: number: 80", "oc create -f ingress.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project <issuer_namespace>", "oc create secret generic aws-secret --from-literal=awsSecretAccessKey=<aws_secret_access_key> \\ 1 -n my-issuer-namespace", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: \"<email_address>\" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: accessKeyID: <aws_key_id> 6 hostedZoneID: <hosted_zone_id> 7 region: <region_name> 8 secretAccessKeySecretRef: name: \"aws-secret\" 9 key: \"awsSecretAccessKey\" 10", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project <issuer_namespace>", "oc patch certmanager/cluster --type=merge -p='{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--issuer-ambient-credentials\"]}}}'", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: \"<email_address>\" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: hostedZoneID: <hosted_zone_id> 6 region: us-east-1", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project my-issuer-namespace", "oc create secret generic clouddns-dns01-solver-svc-acct --from-file=service_account.json=<path/to/gcp_service_account.json> -n my-issuer-namespace", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> 2 spec: 
acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: cloudDNS: project: <project_id> 5 serviceAccountSecretRef: name: clouddns-dns01-solver-svc-acct 6 key: service_account.json 7", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project <issuer_namespace>", "oc patch certmanager/cluster --type=merge -p='{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--issuer-ambient-credentials\"]}}}'", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - dns01: cloudDNS: project: <gcp_project_id> 4", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project my-issuer-namespace", "oc create secret generic <secret_name> --from-literal=<azure_secret_access_key_name>=<azure_secret_access_key_value> \\ 1 2 3 -n my-issuer-namespace", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme-dns01-azuredns-issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: azureDNS: clientID: <azure_client_id> 5 clientSecretSecretRef: name: <secret_name> 6 key: <azure_secret_access_key_name> 7 subscriptionID: <azure_subscription_id> 8 tenantID: <azure_tenant_id> 9 resourceGroupName: <azure_dns_zone_resource_group> 10 hostedZoneName: <azure_dns_zone> 11 environment: AzurePublicCloud", "oc create -f issuer.yaml", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: <issuer_namespace> 2 spec: isCA: false commonName: '<common_name>' 3 secretName: <secret_name> 4 dnsNames: - \"<domain_name>\" 5 issuerRef: name: <issuer_name> 6 kind: Issuer", "oc create -f certificate.yaml", "oc get certificate -w -n <issuer_namespace>", "oc label namespace cert-manager openshift.io/cluster-monitoring=true", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: prometheus-k8s namespace: cert-manager rules: - apiGroups: - \"\" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: prometheus-k8s namespace: cert-manager roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: cert-manager app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager name: cert-manager namespace: cert-manager spec: endpoints: - interval: 30s port: tcp-prometheus-servicemonitor scheme: http selector: matchLabels: app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager", 
"oc create -f monitoring.yaml", "{instance=\"<endpoint>\"} 1", "{endpoint=\"tcp-prometheus-servicemonitor\"}", "oc create configmap trusted-ca -n cert-manager", "oc label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true -n cert-manager", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}]}}}'", "oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator && rollout status deployment/cert-manager -n cert-manager && rollout status deployment/cert-manager-webhook -n cert-manager && rollout status deployment/cert-manager-cainjector -n cert-manager", "deployment \"cert-manager-operator-controller-manager\" successfully rolled out deployment \"cert-manager\" successfully rolled out deployment \"cert-manager-webhook\" successfully rolled out deployment \"cert-manager-cainjector\" successfully rolled out", "oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.'containers[0].volumeMounts'}", "[{\"mountPath\":\"/etc/pki/tls/certs/cert-manager-tls-ca-bundle.crt\",\"name\":\"trusted-ca\",\"subPath\":\"ca-bundle.crt\"}]", "oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.volumes}", "[{\"configMap\":{\"defaultMode\":420,\"name\":\"trusted-ca\"},\"name\":\"trusted-ca\"}]", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideEnv: - name: HTTP_PROXY value: http://<proxy_url> 1 - name: HTTPS_PROXY value: https://<proxy_url> 2 - name: NO_PROXY value: <ignore_proxy_domains> 3", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s", "oc get pod <redeployed_cert-manager_controller_pod> -n cert-manager -o yaml", "env: - name: HTTP_PROXY value: http://<PROXY_URL> - name: HTTPS_PROXY value: https://<PROXY_URL> - name: NO_PROXY value: <IGNORE_PROXY_DOMAINS>", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideArgs: - '--dns01-recursive-nameservers=<host>:<port>' 1 - '--dns01-recursive-nameservers-only' 2 - '--acme-http01-solver-nameservers=<host>:<port>' 3 - '--v=<verbosity_level>' 4 - '--metrics-listen-address=<host>:<port>' 5 - '--issuer-ambient-credentials' 6 webhookConfig: overrideArgs: - '--v=4' 7 cainjectorConfig: overrideArgs: - '--v=2' 8", "oc get pods -n cert-manager -o yaml", "metadata: name: cert-manager-6d4b5d4c97-kldwl namespace: cert-manager spec: containers: - args: - --acme-http01-solver-nameservers=1.1.1.1:53 - --cluster-resource-namespace=USD(POD_NAMESPACE) - --dns01-recursive-nameservers=1.1.1.1:53 - --dns01-recursive-nameservers-only - --leader-election-namespace=kube-system - --max-concurrent-challenges=60 - --metrics-listen-address=0.0.0.0:9042 - --v=6 metadata: name: cert-manager-cainjector-866c4fd758-ltxxj namespace: cert-manager spec: containers: - args: - --leader-election-namespace=kube-system - --v=2 metadata: name: cert-manager-webhook-6d48f88495-c88gd namespace: cert-manager spec: containers: - args: - --v=4", "oc get certificate", "NAME READY SECRET AGE certificate-from-clusterissuer-route53-ambient True certificate-from-clusterissuer-route53-ambient 8h", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager 
metadata: name: cluster spec: controllerConfig: overrideArgs: - '--enable-certificate-owner-ref'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager -o yaml", "metadata: name: cert-manager-6e4b4d7d97-zmdnb namespace: cert-manager spec: containers: - args: - --enable-certificate-owner-ref", "oc get deployment -n cert-manager", "NAME READY UP-TO-DATE AVAILABLE AGE cert-manager 1/1 1 1 53m cert-manager-cainjector 1/1 1 1 53m cert-manager-webhook 1/1 1 1 53m", "oc get deployment -n cert-manager -o yaml", "metadata: name: cert-manager namespace: cert-manager spec: template: spec: containers: - name: cert-manager-controller resources: {} 1 metadata: name: cert-manager-cainjector namespace: cert-manager spec: template: spec: containers: - name: cert-manager-cainjector resources: {} 2 metadata: name: cert-manager-webhook namespace: cert-manager spec: template: spec: containers: - name: cert-manager-webhook resources: {} 3", "oc patch certmanager.operator cluster --type=merge -p=\" spec: controllerConfig: overrideResources: limits: 1 cpu: 200m 2 memory: 64Mi 3 requests: 4 cpu: 10m 5 memory: 16Mi 6 webhookConfig: overrideResources: limits: 7 cpu: 200m 8 memory: 64Mi 9 requests: 10 cpu: 10m 11 memory: 16Mi 12 cainjectorConfig: overrideResources: limits: 13 cpu: 200m 14 memory: 64Mi 15 requests: 16 cpu: 10m 17 memory: 16Mi 18 \"", "certmanager.operator.openshift.io/cluster patched", "oc get deployment -n cert-manager -o yaml", "metadata: name: cert-manager namespace: cert-manager spec: template: spec: containers: - name: cert-manager-controller resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi metadata: name: cert-manager-cainjector namespace: cert-manager spec: template: spec: containers: - name: cert-manager-cainjector resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi metadata: name: cert-manager-webhook namespace: cert-manager spec: template: spec: containers: - name: cert-manager-webhook resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi", "mkdir credentials-request", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"route53:GetChange\" effect: Allow resource: \"arn:aws:route53:::change/*\" - action: - \"route53:ChangeResourceRecordSets\" - \"route53:ListResourceRecordSets\" effect: Allow resource: \"arn:aws:route53:::hostedzone/*\" - action: - \"route53:ListHostedZonesByName\" effect: Allow resource: \"*\" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager", "ccoctl aws create-iam-roles --name <user_defined_name> --region=<aws_region> --credentials-requests-dir=<path_to_credrequests_dir> --identity-provider-arn <oidc_provider_arn> --output-dir=<path_to_output_dir>", "2023/05/15 18:10:34 Role arn:aws:iam::XXXXXXXXXXXX:role/<user_defined_name>-cert-manager-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: <path_to_output_dir>/manifests/cert-manager-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role <user_defined_name>-cert-manager-aws-creds", "oc -n cert-manager annotate serviceaccount cert-manager eks.amazonaws.com/role-arn=\"<aws_role_arn>\"", "oc delete pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE 
cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s", "oc set env -n cert-manager po/<cert_manager_controller_pod_name> --list", "pods/cert-manager-57f9555c54-vbcpg, container cert-manager-controller POD_NAMESPACE from field path metadata.namespace AWS_ROLE_ARN=XXXXXXXXXXXX AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token", "oc edit certmanager.operator cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager spec: logLevel: Normal 1", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"OPERATOR_LOG_LEVEL\",\"value\":\"v\"}]}}}' 1", "oc set env deploy/cert-manager-operator-controller-manager -n cert-manager-operator --list | grep -e OPERATOR_LOG_LEVEL -e container", "deployments/cert-manager-operator-controller-manager, container kube-rbac-proxy OPERATOR_LOG_LEVEL=9 deployments/cert-manager-operator-controller-manager, container cert-manager-operator OPERATOR_LOG_LEVEL=9", "oc logs deploy/cert-manager-operator-controller-manager -n cert-manager-operator", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"route53:GetChange\" effect: Allow resource: \"arn:aws:route53:::change/*\" - action: - \"route53:ChangeResourceRecordSets\" - \"route53:ListResourceRecordSets\" effect: Allow resource: \"arn:aws:route53:::hostedzone/*\" - action: - \"route53:ListHostedZonesByName\" effect: Allow resource: \"*\" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager", "oc create -f sample-credential-request.yaml", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"aws-creds\"}]}}}'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s", "oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml", "spec: containers: - args: - mountPath: /.aws name: cloud-credentials volumes: - name: cloud-credentials secret: secretName: aws-creds", "mkdir credentials-request", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager", "ccoctl gcp create-service-accounts --name <user_defined_name> --output-dir=<path_to_output_dir> --credentials-requests-dir=<path_to_credrequests_dir> --workload-identity-pool <workload_identity_pool> --workload-identity-provider <workload_identity_provider> --project <gcp_project_id>", "ccoctl gcp create-service-accounts --name abcde-20230525-4bac2781 --output-dir=/home/outputdir --credentials-requests-dir=/home/credentials-requests --workload-identity-pool abcde-20230525-4bac2781 --workload-identity-provider abcde-20230525-4bac2781 --project openshift-gcp-devel", "ls <path_to_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p 
'{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"gcp-credentials\"}]}}}'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s", "oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml", "spec: containers: - args: volumeMounts: - mountPath: /var/run/secrets/openshift/serviceaccount name: bound-sa-token - mountPath: /.config/gcloud name: cloud-credentials volumes: - name: bound-sa-token projected: sources: - serviceAccountToken: audience: openshift path: token - name: cloud-credentials secret: items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager", "oc create -f sample-credential-request.yaml", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"gcp-credentials\"}]}}}'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s", "oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml", "spec: containers: - args: volumeMounts: - mountPath: /.config/gcloud name: cloud-credentials . volumes: - name: cloud-credentials secret: items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials", "oc delete deployment -n cert-manager -l app.kubernetes.io/instance=cert-manager" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/security_and_compliance/cert-manager-operator-for-red-hat-openshift
8.2. Fixing ID Conflicts
8.2. Fixing ID Conflicts IdM uses ID ranges to avoid collisions of POSIX IDs from different domains. For details on ID ranges, see ID Ranges in the Linux Domain Identity, Authentication, and Policy Guide . POSIX IDs in ID views do not use a special range type, because IdM must allow overlaps with other kinds of ID ranges. For example, AD users created through synchronization have POSIX IDs from the same ID range as IdM users. POSIX IDs are managed manually in ID views on the IdM side. Therefore, if an ID collision occurs, fix it by changing the conflicting IDs.
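For example, to change a conflicting UID through a user ID override in an ID view, you can run a command similar to the following. This is only a sketch: the view name, user name, and UID value are placeholders, and it assumes the standard options of the ipa idoverrideuser-mod command:
ipa idoverrideuser-mod 'Default Trust View' ad_user@ad.example.com --uid=800000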
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/id-views-id-collisions
6.5. xguest: Kiosk Mode
6.5. xguest: Kiosk Mode The xguest package provides a kiosk user account. This account is used to secure machines that people walk up to and use, such as those at libraries, banks, airports, information kiosks, and coffee shops. The kiosk user account is very limited: essentially, it only allows a user to log in and use Firefox to browse Internet websites. The guest user is assigned to the xguest_u SELinux user; see Table 3.1, "SELinux User Capabilities" . Any changes made while logged in with this account, such as creating files or changing settings, are lost when you log out. To set up the kiosk account: As root, install the xguest package. Install dependencies as required: Because the kiosk account is intended to be used by a variety of people, the account is not password-protected. As a result, the account can be protected only if SELinux is running in enforcing mode. Before logging in with this account, use the getenforce utility to confirm that SELinux is running in enforcing mode: If this is not the case, see Section 4.4, "Permanent Changes in SELinux States and Modes" for information about changing to enforcing mode. It is not possible to log in with this account if SELinux is in permissive mode or disabled. You can only log in to this account using the GNOME Display Manager (GDM). Once the xguest package is installed, a Guest account is added to the GDM login screen.
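To confirm the SELinux mapping after the xguest package is installed, you can list the SELinux login records. This assumes the semanage utility, provided by the policycoreutils-python package, is installed:
~]# semanage login -l | grep xguest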
[ "~]# yum install xguest", "~]USD getenforce Enforcing" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-confining_users-xguest_kiosk_mode
Chapter 10. Detecting duplicate messages
Chapter 10. Detecting duplicate messages You can configure the broker to automatically detect and filter duplicate messages. This means that you do not have to implement your own duplicate detection logic. Without duplicate detection, in the event of an unexpected connection failure, a client cannot determine whether a message it sent to the broker was received. In this situation, the client might assume that the broker did not receive the message, and resend it. This results in a duplicate message. For example, suppose that a client sends a message to the broker. If the broker or connection fails before the message is received and processed by the broker, the message never arrives at its address. The client does not receive a response from the broker due to the failure. If the broker or connection fails after the message is received and processed by the broker, the message is routed correctly, but the client still does not receive a response. In addition, using a transaction to determine success does not necessarily help in these cases. If the broker or connection fails while the transaction commit is being processed, the client is still unable to determine whether it successfully sent the message. In these situations, to correct the assumed failure, the client resends the most recent message. The result might be a duplicate message that negatively impacts your system. For example, if you are using the broker in an order-fulfilment system, a duplicate message might mean that a purchase order is processed twice. The following procedures show how to configure duplicate message detection to protect against these types of situations. 10.1. Configuring the duplicate ID cache To enable the broker to detect duplicate messages, producers must provide unique values for the message property _AMQ_DUPL_ID when sending each message. The broker maintains caches of received values of the _AMQ_DUPL_ID property. When a broker receives a new message on an address, it checks the cache for that address to ensure that it has not previously processed a message with the same value for this property. Each address has its own cache. Each cache is circular and fixed in size. This means that new entries replace the oldest ones as cache space demands. The following procedure shows how to globally configure the ID cache used by each address on the broker. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the core element, add the id-cache-size and persist-id-cache properties and specify values. For example: <configuration> <core> ... <id-cache-size>5000</id-cache-size> <persist-id-cache>false</persist-id-cache> </core> </configuration> id-cache-size Maximum size of the ID cache, specified as the number of individual entries in the cache. The default value is 20,000 entries. In this example, the cache size is set to 5,000 entries. Note When the maximum size of the cache is reached, it is possible for the broker to start processing duplicate messages. For example, suppose that you set the size of the cache to 3000 . If a message arrived more than 3,000 messages before the arrival of a new message with the same value of _AMQ_DUPL_ID , the broker cannot detect the duplicate. This results in both messages being processed by the broker. persist-id-cache When the value of this property is set to true , the broker persists IDs to disk as they are received. The default value is true . In the example above, you disable persistence by setting the value to false . 
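For reference, a producer that uses the AMQ Core Protocol JMS client can set this property on each message before sending it. The following minimal sketch assumes an existing JMS Session and MessageProducer ; generate the identifier once per logical message and reuse the same value if you resend that message:
TextMessage message = session.createTextMessage("Order 12345");
// The broker compares this property value against its duplicate ID cache for the address.
message.setStringProperty("_AMQ_DUPL_ID", "order-12345");
producer.send(message);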
Additional resources To learn how to set the duplicate ID message property using the AMQ Core Protocol JMS client, see Using duplicate message detection in the AMQ Core Protocol JMS client documentation. 10.2. Configuring duplicate detection for cluster connections You can configure cluster connections to insert a duplicate ID header for each message that moves across the cluster. Prerequisites You should have already configured a broker cluster. For more information, see Section 14.2, "Creating a broker cluster" Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the core element, for a given cluster connection, add the use-duplicate-detection property and specify a value. For example: <configuration> <core> ... <cluster-connections> <cluster-connection name="my-cluster"> <use-duplicate-detection>true</use-duplicate-detection> ... </cluster-connection> ... </cluster-connections> </core> </configuration> use-duplicate-detection When the value of this property is set to true , the cluster connection inserts a duplicate ID header for each message that it handles.
[ "<configuration> <core> <id-cache-size>5000</id-cache-size> <persist-id-cache>false</persist-id-cache> </core> </configuration>", "<configuration> <core> <cluster-connections> <cluster-connection name=\"my-cluster\"> <use-duplicate-detection>true</use-duplicate-detection> </cluster-connection> </cluster-connections> </core> </configuration>" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/configuring_amq_broker/assembly-br-detecting-duplicate-messages_configuring
Release notes for the Red Hat build of Cryostat 2.4
Release notes for the Red Hat build of Cryostat 2.4 Red Hat build of Cryostat 2 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.4/index
20.12. Enabling Different Types of Binds
20.12. Enabling Different Types of Binds Whenever an entity logs into or accesses the Directory Server, it binds to the directory. There are many different types of bind operation, sometimes depending on the method of binding (such as simple binds or autobind) and some depending on the identity of user binding to the directory (anonymous and unauthenticated binds). The following sections contain configuration parameters that can increase the security of binds (as in Section 20.12.1, "Requiring Secure Binds" ) or streamline bind operations (such as Section 20.12.4, "Configuring Autobind" ). 20.12.1. Requiring Secure Binds A simple bind is when an entity uses a simple bind DN-password combination to authenticate to the Directory Server. Although it is possible to use a password file rather than sending a password directly through the command line, both methods still require sending or accessing a plaintext password over the wire. That makes the password vulnerable to anyone sniffing the connection. It is possible to require simple binds to occur over a secure connection (TLS or STARTTLS), which effectively encrypts the plaintext password as it is sent with the bind operation. (It is also possible to use alternatives to simple binds, such as SASL authentication and certificate-based authentication.) Important Along with regular users logging into the server and LDAP operations, server-to-server connections are affected by requiring secure connections for simple binds. Replication, synchronization, and database chaining can all use simple binds between servers, for instance. Make sure that replication agreements, sync agreements, and chaining configuration specify secure connections if the nsslapd-require-secure-binds attribute is turned on. Otherwise, these operations will fail. Note Requiring a secure connection for bind operations only applies to authenticated binds . Bind operations without a password (anonymous and unauthenticated binds) can proceed over standard connections. Set the nsslapd-require-secure-binds configuration parameter to on : Restart the instance: 20.12.2. Disabling Anonymous Binds If a user attempts to connect to the Directory Server without supplying any user name or password, this is an anonymous bind . Anonymous binds simplify common search and read operations, like checking the directory for a phone number or email address, by not requiring users to authenticate to the directory first. Note By default, anonymous binds are allowed (on) for search and read operations. This allows access to regular directory entries , which includes user and group entries as well as configuration entries like the root DSE. A different option, rootdse , allows anonymous search and read access to search the root DSE itself, but restricts access to all other directory entries. However, there are risks with anonymous binds. Adequate ACIs must be in place to restrict access to sensitive information and to disallow actions like modifies and deletes. Additionally, anonymous binds can be used for denial of service attacks or for malicious people to gain access to the server. Section 18.11.1.1.3, "Granting Anonymous Access" has an example on setting ACIs to control what anonymous users can access, and Section 14.5.4, "Setting Resource Limits on Anonymous Binds" has information on placing resource limits for anonymous users. 
If those options do not offer a sufficient level of security, then anonymous binds can be disabled entirely: Set the nsslapd-allow-anonymous-access configuration parameter to off : Restart the instance: Note With anonymous binds disabled, the users cannot log in using their RDN. They are required to provide the full DN to log in. In addition, when you disable anonymous binds, unauthenticated binds are also disabled automatically. 20.12.3. Allowing Unauthenticated Binds Unauthenticated binds are connections to Directory Server where a user supplies an empty password. Using the default settings, Directory Server denies access in this scenario for security reasons: Warning Red Hat recommends not enabling unauthenticated binds. This authentication method enables users to bind without supplying a password as any account, including the Directory Manager. After the bind, the user can access all data with the permissions of the account used to bind. To enable insecure unauthenticated binds, set the nsslapd-allow-unauthenticated-binds configuration option to on : 20.12.4. Configuring Autobind Autobind is a way to connect to the Directory Server based on local UNIX credentials, which are mapped to an identity stored in the directory itself. Autobind is configured in two parts: Before configuring autobind, first make sure that LDAPI is enabled. Then, configure the autobind mappings (in Section 20.12.4.2, "Configuring the Autobind Feature" ). 20.12.4.1. Overview of Autobind and LDAPI Inter-process communication (IPC) is a way for separate processes on a Unix machine or a network to communicate directly with each other. LDAPI is a way to run LDAP connections over these IPC connections, meaning that LDAP operations can run over Unix sockets. These connections are much faster and more secure than regular LDAP connections. The Directory Server uses these LDAPI connections to allow users to bind immediately to the Directory Server or to access the Directory Server using tools which support connections over Unix sockets. Autobind uses the uid:gid of the Unix user and maps that user to an entry in the Directory Server, then allows access for that user. Autobind allows mappings to three kinds of directory entries: User entries, if the Unix user matches one user entry Special autobind user entries, identified by their user and group ID numbers beneath the autobind suffix Directory Manager, if the Unix user is root or the super user defined in nsslapd-ldapimaprootdn Figure 20.1. Autobind Connection Path The special autobind users are entries beneath a special autobind suffix (outside the regular user subtree). The entries underneath are identified by their user and group ID numbers: If autobind is not enabled but LDAPI is, then Unix users are anonymously bound to the Directory Server, unless they provide other bind credentials. Note Autobind allows a client to send a request to the Directory Server without supplying a bind user name and password or using another SASL authentication mechanism. According to the LDAP standard, if bind information is not given with the request, the server processes the request as an anonymous bind. To be compliant with the standard, which requires some kind of bind information, any client that uses autobind should send the request with SASL/EXTERNAL. For more information on configuring SASL, see Section 9.10, "Setting up SASL Identity Mapping" . 20.12.4.2. Configuring the Autobind Feature Enabling the Autobind feature by itself allows only anonymous access to Directory Server. However, you can also configure mappings from Linux users to Directory Server entries and from the root user to the Directory Manager: Verify that the nsslapd-ldapiautobind parameter is enabled, which is the default: If the nsslapd-ldapiautobind parameter is set to off , enable it: To map user entries, set, for example: nsslapd-ldapimaptoentries=on enables entry mapping. nsslapd-ldapiuidnumbertype= uidNumber sets the attribute in Directory Server that contains the Unix UID number. nsslapd-ldapigidnumbertype= gidNumber sets the attribute in Directory Server that contains the Unix GID number. nsslapd-ldapientrysearchbase= ou=People,dc=example,dc=com sets the DN under which to search for user entries. Optionally, to map the root user in Red Hat Enterprise Linux to the cn=Directory Manager account in Directory Server: Restart the instance:
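After the restart, a local Unix user can check which directory identity autobind maps them to by running an LDAPI operation with SASL/EXTERNAL. This sketch assumes the default LDAPI socket path for an instance named instance_name :
ldapwhoami -Y EXTERNAL -H ldapi://%2Fvar%2Frun%2Fslapd-instance_name.socket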
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-require-secure-binds=on", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-allow-anonymous-access=off", "dsctl instance_name restart", "ldapsearch -w \"\" -p 389 -h server.example.com -b \"dc=example,dc=com\" -s sub -x \"(objectclass=*)\" ldap_bind: Server is unwilling to perform (53) additional info: Unauthenticated binds are not allowed", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-allow-unauthenticated-binds=on", "gidNumber= gid +uidNumber uid , autobindsuffix", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-ldapiautobind nsslapd-ldapiautobind: on", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-ldapiautobind=on", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-ldapimaptoentries=on nsslapd-ldapiuidnumbertype= uidNumber nsslapd-ldapigidnumbertype= gidNumber nsslapd-ldapientrysearchbase= ou=People,dc=example,dc=com", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-ldapimaprootdn=\"cn=Directory Manager\"", "dsctl instance_name restart" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/configuring-special-binds
Chapter 14. FlowMetric configuration parameters
Chapter 14. FlowMetric configuration parameters FlowMetric is the API allowing to create custom metrics from the collected flow logs. 14.1. FlowMetric [flows.netobserv.io/v1alpha1] Description FlowMetric is the API allowing to create custom metrics from the collected flow logs. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and might reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers might infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object FlowMetricSpec defines the desired state of FlowMetric The provided API allows you to customize these metrics according to your needs. When adding new metrics or modifying existing labels, you must carefully monitor the memory usage of Prometheus workloads as this could potentially have a high impact. Cf https://rhobs-handbook.netlify.app/products/openshiftmonitoring/telemetry.md/#what-is-the-cardinality-of-a-metric To check the cardinality of all Network Observability metrics, run as promql : count({ name =~"netobserv.*"}) by ( name ) . 14.1.1. .metadata Description Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Type object 14.1.2. .spec Description FlowMetricSpec defines the desired state of FlowMetric The provided API allows you to customize these metrics according to your needs. When adding new metrics or modifying existing labels, you must carefully monitor the memory usage of Prometheus workloads as this could potentially have a high impact. Cf https://rhobs-handbook.netlify.app/products/openshiftmonitoring/telemetry.md/#what-is-the-cardinality-of-a-metric To check the cardinality of all Network Observability metrics, run as promql : count({ name =~"netobserv.*"}) by ( name ) . Type object Required metricName type Property Type Description buckets array (string) A list of buckets to use when type is "Histogram". The list must be parsable as floats. When not set, Prometheus default buckets are used. charts array Charts configuration, for the OpenShift Container Platform Console in the administrator view, Dashboards menu. direction string Filter for ingress, egress or any direction flows. When set to Ingress , it is equivalent to adding the regular expression filter on FlowDirection : 0|2 . When set to Egress , it is equivalent to adding the regular expression filter on FlowDirection : 1|2 . divider string When nonzero, scale factor (divider) of the value. Metric value = Flow value / Divider. filters array filters is a list of fields and values used to restrict which flows are taken into account. Oftentimes, these filters must be used to eliminate duplicates: Duplicate != "true" and FlowDirection = "0" . Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html . 
flatten array (string) flatten is a list of list-type fields that must be flattened, such as Interfaces and NetworkEvents. Flattened fields generate one metric per item in that field. For instance, when flattening Interfaces on a bytes counter, a flow having Interfaces [br-ex, ens5] increases one counter for br-ex and another for ens5 . labels array (string) labels is a list of fields that should be used as Prometheus labels, also known as dimensions. From choosing labels results the level of granularity of this metric, and the available aggregations at query time. It must be done carefully as it impacts the metric cardinality (cf https://rhobs-handbook.netlify.app/products/openshiftmonitoring/telemetry.md/#what-is-the-cardinality-of-a-metric ). In general, avoid setting very high cardinality labels such as IP or MAC addresses. "SrcK8S_OwnerName" or "DstK8S_OwnerName" should be preferred over "SrcK8S_Name" or "DstK8S_Name" as much as possible. Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html . metricName string Name of the metric. In Prometheus, it is automatically prefixed with "netobserv_". remap object (string) Set the remap property to use different names for the generated metric labels than the flow fields. Use the origin flow fields as keys, and the desired label names as values. type string Metric type: "Counter" or "Histogram". Use "Counter" for any value that increases over time and on which you can compute a rate, such as Bytes or Packets. Use "Histogram" for any value that must be sampled independently, such as latencies. valueField string valueField is the flow field that must be used as a value for this metric. This field must hold numeric values. Leave empty to count flows rather than a specific value per flow. Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html . 14.1.3. .spec.charts Description Charts configuration, for the OpenShift Container Platform Console in the administrator view, Dashboards menu. Type array 14.1.4. .spec.charts[] Description Configures charts / dashboard generation associated to a metric Type object Required dashboardName queries title type Property Type Description dashboardName string Name of the containing dashboard. If this name does not refer to an existing dashboard, a new dashboard is created. queries array List of queries to be displayed on this chart. If type is SingleStat and multiple queries are provided, this chart is automatically expanded in several panels (one per query). sectionName string Name of the containing dashboard section. If this name does not refer to an existing section, a new section is created. If sectionName is omitted or empty, the chart is placed in the global top section. title string Title of the chart. type string Type of the chart. unit string Unit of this chart. Only a few units are currently supported. Leave empty to use generic number. 14.1.5. .spec.charts[].queries Description List of queries to be displayed on this chart. If type is SingleStat and multiple queries are provided, this chart is automatically expanded in several panels (one per query). Type array 14.1.6. 
.spec.charts[].queries[] Description Configures PromQL queries Type object Required legend promQL top Property Type Description legend string The query legend that applies to each timeseries represented in this chart. When multiple timeseries are displayed, you should set a legend that distinguishes each of them. It can be done with the following format: {{ Label }} . For example, if the promQL groups timeseries per label such as: sum(rate(USDMETRIC[2m])) by (Label1, Label2) , you might write as the legend: Label1={{ Label1 }}, Label2={{ Label2 }} . promQL string The promQL query to be run against Prometheus. If the chart type is SingleStat , this query should only return a single timeseries. For other types, a top 7 is displayed. You can use USDMETRIC to refer to the metric defined in this resource. For example: sum(rate(USDMETRIC[2m])) . To learn more about promQL , refer to the Prometheus documentation: https://prometheus.io/docs/prometheus/latest/querying/basics/ top integer Top N series to display per timestamp. Does not apply to SingleStat chart type. 14.1.7. .spec.filters Description filters is a list of fields and values used to restrict which flows are taken into account. Oftentimes, these filters must be used to eliminate duplicates: Duplicate != "true" and FlowDirection = "0" . Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html . Type array 14.1.8. .spec.filters[] Description Type object Required field matchType Property Type Description field string Name of the field to filter on matchType string Type of matching to apply value string Value to filter on. When matchType is Equal or NotEqual , you can use field injection with USD(SomeField) to refer to any other field of the flow.
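As a worked example of the fields described above, the following sketch assembles them into a minimal FlowMetric resource. It is an illustration only: the metric name, labels, filter, and chart values are assumptions chosen for readability, and the flow fields available to you depend on your version (see the JSON flows format reference linked above).

apiVersion: flows.netobserv.io/v1alpha1
kind: FlowMetric
metadata:
  name: flowmetric-ingress-bytes            # hypothetical resource name
spec:
  metricName: ingress_bytes_total           # exposed to Prometheus as netobserv_ingress_bytes_total
  type: Counter                             # Bytes grows over time, so a Counter queried with rate() fits
  valueField: Bytes                         # sum the flow bytes instead of counting flows
  direction: Ingress                        # shorthand for the FlowDirection 0|2 filter described above
  labels:
    - DstK8S_Namespace
    - DstK8S_OwnerName                      # owner-level labels keep cardinality lower than names or IPs
  filters:
    - field: Duplicate                      # eliminate duplicate flow records, as recommended above
      matchType: NotEqual
      value: "true"
  charts:
    - dashboardName: Main                   # created if it does not already exist
      sectionName: Traffic
      title: Ingress bytes rate
      type: SingleStat
      queries:
        - legend: ""
          promQL: "sum(rate($METRIC[2m]))"  # $METRIC expands to the metric defined in this resource
          top: 7

Before rolling such a metric out broadly, check the resulting cardinality with the count query given earlier in this chapter.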
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_observability/flowmetric-api
Chapter 3. Using patch templates for remediations
Chapter 3. Using patch templates for remediations The Red Hat Insights patch application supports scheduled patching cycles. Patch templates do not affect yum/dnf operations on the host, but they allow you to refine your patch status reporting in Red Hat Insights. You can use the templates to create remediation playbooks for simple patch cycles. 3.1. Using patch templates with remediations Patch templates can include one or more remediations that you want to apply to multiple systems. You can create a patch template to update a group of systems in a test environment, and use the same patch template to update systems in a production environment on a different day. For more information about creating and using patch templates with remediations, refer to System Patching Using Remediation Playbooks with FedRAMP . Note After you apply a patch template to the systems you assign, you will not see more recently published advisories that apply to those systems. Use Red Hat Hybrid Cloud Console notifications to ensure that you remain aware of newly published advisories that might affect your infrastructure. For more information about notifications in the Red Hat Hybrid Cloud Console, see Configuring notifications on the Red Hat Hybrid Cloud Console with FedRAMP .
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/red_hat_insights_remediations_guide_with_fedramp/using-patch-templates-for-remediations_red-hat-insights-remediation-guide
Appendix A. KataConfig status messages
Appendix A. KataConfig status messages The following table displays the status messages for the KataConfig custom resource (CR) for a cluster with two worker nodes. Table A.1. KataConfig status messages Status Description Initial installation When a KataConfig CR is created and starts installing kata-remote on both workers, the following status is displayed for a few seconds. conditions: message: Performing initial installation of kata-remote on cluster reason: Installing status: 'True' type: InProgress kataNodes: nodeCount: 0 readyNodeCount: 0 Installing Within a few seconds the status changes. kataNodes: nodeCount: 2 readyNodeCount: 0 waitingToInstall: - worker-0 - worker-1 Installing (Worker-1 installation starting) For a short period of time, the status changes, signifying that one node has initiated the installation of kata-remote , while the other is in a waiting state. This is because only one node can be unavailable at any given time. The nodeCount remains at 2 because both nodes will eventually receive kata-remote , but the readyNodeCount is currently 0 as neither of them has reached that state yet. kataNodes: installing: - worker-1 nodeCount: 2 readyNodeCount: 0 waitingToInstall: - worker-0 Installing (Worker-1 installed, worker-0 installation started) After some time, worker-1 will complete its installation, causing a change in the status. The readyNodeCount is updated to 1, indicating that worker-1 is now prepared to execute kata-remote workloads. You cannot schedule or run kata-remote workloads until the runtime class is created at the end of the installation process. kataNodes: installed: - worker-1 installing: - worker-0 nodeCount: 2 readyNodeCount: 1 Installed When installed, both workers are listed as installed, and the InProgress condition transitions to False without specifying a reason, indicating the successful installation of kata-remote on the cluster. conditions: message: "" reason: "" status: 'False' type: InProgress kataNodes: installed: - worker-0 - worker-1 nodeCount: 2 readyNodeCount: 2 Status Description Initial uninstall If kata-remote is installed on both workers, and you delete the KataConfig to remove kata-remote from the cluster, both workers briefly enter a waiting state for a few seconds. conditions: message: Removing kata-remote from cluster reason: Uninstalling status: 'True' type: InProgress kataNodes: nodeCount: 0 readyNodeCount: 0 waitingToUninstall: - worker-0 - worker-1 Uninstalling After a few seconds, one of the workers starts uninstalling. kataNodes: nodeCount: 0 readyNodeCount: 0 uninstalling: - worker-1 waitingToUninstall: - worker-0 Uninstalling Worker-1 finishes and worker-0 starts uninstalling. kataNodes: nodeCount: 0 readyNodeCount: 0 uninstalling: - worker-0 Note The reason field can also report the following causes: Failed : This is reported if the node cannot finish its transition. The status reports True and the message is Node <node_name> Degraded: <error_message_from_the_node> . BlockedByExistingKataPods : This is reported if there are pods running on a cluster that use the kata-remote runtime while kata-remote is being uninstalled. The status field is False and the message is Existing pods using "kata-remote" RuntimeClass found. Please delete the pods manually for KataConfig deletion to proceed . There could also be a technical error message reported like Failed to list kata pods: <error_message> if communication with the cluster control plane fails.
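The table above shows the conditions block only for transitions that succeed. Based on the note that follows it, a failed installation would surface through the same conditions structure, roughly as in the sketch below; the node name and error text are placeholders, not literal operator output.

conditions:
  message: 'Node worker-1 Degraded: <error_message_from_the_node>'   # placeholder node and error
  reason: Failed
  status: 'True'
  type: InProgress

A deletion blocked by running kata-remote pods would look similar, again as a rough sketch:

conditions:
  message: Existing pods using "kata-remote" RuntimeClass found. Please delete the pods manually for KataConfig deletion to proceed
  reason: BlockedByExistingKataPods
  status: 'False'
  type: InProgress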
[ "conditions: message: Performing initial installation of kata-remote on cluster reason: Installing status: 'True' type: InProgress kataNodes: nodeCount: 0 readyNodeCount: 0", "kataNodes: nodeCount: 2 readyNodeCount: 0 waitingToInstall: - worker-0 - worker-1", "kataNodes: installing: - worker-1 nodeCount: 2 readyNodeCount: 0 waitingToInstall: - worker-0", "kataNodes: installed: - worker-1 installing: - worker-0 nodeCount: 2 readyNodeCount: 1", "conditions: message: \"\" reason: \"\" status: 'False' type: InProgress kataNodes: installed: - worker-0 - worker-1 nodeCount: 2 readyNodeCount: 2", "conditions: message: Removing kata-remote from cluster reason: Uninstalling status: 'True' type: InProgress kataNodes: nodeCount: 0 readyNodeCount: 0 waitingToUninstall: - worker-0 - worker-1", "kataNodes: nodeCount: 0 readyNodeCount: 0 uninstalling: - worker-1 waitingToUninstall: - worker-0", "kataNodes: nodeCount: 0 readyNodeCount: 0 uninstalling: - worker-0" ]
https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.8/html/user_guide/kataconfig-status-messages
Chapter 3. Enabling the OpenID Connect authentication provider
Chapter 3. Enabling the OpenID Connect authentication provider Red Hat Developer Hub uses the OpenID Connect (OIDC) authentication provider to authenticate with third-party services that support the OIDC protocol. 3.1. Overview of using the OIDC authentication provider in Developer Hub You can configure the OIDC authentication provider in Developer Hub by updating your app-config.yaml file under the root auth configuration. For example: auth: environment: production # Providing an auth.session.secret will enable session support in the auth-backend session: secret: USD{SESSION_SECRET} providers: oidc: production: metadataUrl: USD{AUTH_OIDC_METADATA_URL} clientId: USD{AUTH_OIDC_CLIENT_ID} clientSecret: USD{AUTH_OIDC_CLIENT_SECRET} prompt: USD{AUTH_OIDC_PROMPT} # Recommended to use auto ## Uncomment for additional configuration options # callbackUrl: USD{AUTH_OIDC_CALLBACK_URL} # tokenEndpointAuthMethod: USD{AUTH_OIDC_TOKEN_ENDPOINT_METHOD} # tokenSignedResponseAlg: USD{AUTH_OIDC_SIGNED_RESPONSE_ALG} # scope: USD{AUTH_OIDC_SCOPE} ## Declarative resolvers to override the default resolver: `emailLocalPartMatchingUserEntityName` ## The authentication provider tries each sign-in resolver until it succeeds, and fails if none succeed. Uncomment the resolvers that you want to use. # signIn: # resolvers: # - resolver: preferredUsernameMatchingUserEntityName # - resolver: emailMatchingUserEntityProfileEmail # - resolver: emailLocalPartMatchingUserEntityName signInPage: oidc 3.2. Configuring Keycloak with the OIDC authentication provider Red Hat Developer Hub includes an OIDC authentication provider that can authenticate users by using Keycloak. Important The user that you create in Keycloak must also be available in the Developer Hub catalog. Procedure In Keycloak, create a new realm, for example RHDH . Add a new user. Username Username for the user, for example: rhdhuser Email Email address of the user. First name First name of the user. Last name Last name of the user. Email verified Toggle to On . Click Create . Navigate to the Credentials tab. Click Set password . Enter the Password for the user account and toggle Temporary to Off . Create a new Client ID, for example, RHDH . Client authentication Toggle to On . Valid redirect URIs Set to the OIDC handler URL, for example, https://<RHDH_URL>/api/auth/oidc/handler/frame . Navigate to the Credentials tab and copy the Client secret . Save the Client ID and the Client Secret for the step. In Developer Hub, add your Keycloak credentials in your Developer Hub secrets. Edit your Developer Hub secrets, such as secrets-rhdh. Add the following key/value pairs: AUTH_KEYCLOAK_CLIENT_ID Enter the Client ID that you generated in Keycloak, such as RHDH . AUTH_KEYCLOAK_CLIENT_SECRET Enter the Client Secret that you generated in Keycloak. Set up the OIDC authentication provider in your Developer Hub custom configuration. Edit your custom Developer Hub ConfigMap, such as app-config-rhdh . 
In the app-config-rhdh.yaml content, add the oidc provider configuration under the root auth configuration, and enable the oidc provider for sign-in: app-config-rhdh.yaml fragment auth: environment: production providers: oidc: production: clientId: USD{AUTH_KEYCLOAK_CLIENT_ID} clientSecret: USD{AUTH_KEYCLOAK_CLIENT_SECRET} metadataUrl: USD{KEYCLOAK_BASE_URL}/auth/realms/USD{KEYCLOAK_REALM} prompt: USD{KEYCLOAK_PROMPT} # recommended to use auto Uncomment for additional configuration options #callbackUrl: USD{KEYCLOAK_CALLBACK_URL} #tokenEndpointAuthMethod: USD{KEYCLOAK_TOKEN_ENDPOINT_METHOD} #tokenSignedResponseAlg: USD{KEYCLOAK_SIGNED_RESPONSE_ALG} #scope: USD{KEYCLOAK_SCOPE} If you are using the keycloak-backend plugin, use the preferredUsernameMatchingUserEntityName resolver to avoid a login error. signIn: resolvers: - resolver: preferredUsernameMatchingUserEntityName signInPage: oidc Verification Restart your backstage-developer-hub application to apply the changes. Your Developer Hub sign-in page displays Sign in using OIDC . 3.3. Migrating from OAuth2 Proxy with Keycloak to OIDC in Developer Hub If you are using OAuth2 Proxy as an authentication provider with Keycloak, and you want to migrate to OIDC, you can update your authentication provider configuration to use OIDC. Procedure In Keycloak, update the valid redirect URI to https://<rhdh_url>/api/auth/oidc/handler/frame . Make sure to replace <rhdh_url> with your Developer Hub application URL, such as, my.rhdh.example.com . Replace the oauth2Proxy configuration values in the auth section of your app-config.yaml file with the oidc configuration values. Update the signInPage configuration value from oauth2Proxy to oidc . The following example shows the auth.providers and signInPage configuration for oauth2Proxy prior to migrating the authentication provider to oidc : auth: environment: production session: secret: USD{SESSION_SECRET} providers: oauth2Proxy: {} signInPage: oauth2Proxy The following example shows the auth.providers and signInPage configuration after migrating the authentication provider to oidc : auth: environment: production session: secret: USD{SESSION_SECRET} providers: oidc: production: metadataUrl: USD{KEYCLOAK_METADATA_URL} clientId: USD{KEYCLOAK_CLIENT_ID} clientSecret: USD{KEYCLOAK_CLIENT_SECRET} prompt: USD{KEYCLOAK_PROMPT} # recommended to use auto signInPage: oidc Remove the OAuth2 Proxy sidecar container and update the upstream.service section of your Helm chart's values.yaml file as follows: service.ports.backend : 7007 service.ports.targetPort : backend The following example shows the service configuration for oauth2Proxy prior to migrating the authentication provider to oidc : service: ports: name: http-backend backend: 4180 targetPort: oauth2Proxy The following example shows the service configuration after migrating the authentication provider to oidc : service: ports: name: http-backend backend: 7007 targetPort: backend Upgrade the Developer Hub Helm chart.
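For reference, the secrets-rhdh secret mentioned in the procedure could look similar to the following sketch. Only the two AUTH_KEYCLOAK_* keys come from the steps above; the namespace, base URL, realm, and prompt keys are assumptions for illustration and must match whatever environment variables your app-config references.

apiVersion: v1
kind: Secret
metadata:
  name: secrets-rhdh                        # secret name used in the procedure above
  namespace: rhdh                           # hypothetical namespace of the Developer Hub instance
type: Opaque
stringData:
  AUTH_KEYCLOAK_CLIENT_ID: RHDH
  AUTH_KEYCLOAK_CLIENT_SECRET: <client_secret_copied_from_keycloak>
  KEYCLOAK_BASE_URL: https://keycloak.example.com   # assumed key, consumed by metadataUrl
  KEYCLOAK_REALM: RHDH                              # assumed key, consumed by metadataUrl
  KEYCLOAK_PROMPT: auto                             # assumed key; auto is the recommended prompt value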
[ "auth: environment: production # Providing an auth.session.secret will enable session support in the auth-backend session: secret: USD{SESSION_SECRET} providers: oidc: production: metadataUrl: USD{AUTH_OIDC_METADATA_URL} clientId: USD{AUTH_OIDC_CLIENT_ID} clientSecret: USD{AUTH_OIDC_CLIENT_SECRET} prompt: USD{AUTH_OIDC_PROMPT} # Recommended to use auto ## Uncomment for additional configuration options # callbackUrl: USD{AUTH_OIDC_CALLBACK_URL} # tokenEndpointAuthMethod: USD{AUTH_OIDC_TOKEN_ENDPOINT_METHOD} # tokenSignedResponseAlg: USD{AUTH_OIDC_SIGNED_RESPONSE_ALG} # scope: USD{AUTH_OIDC_SCOPE} ## Declarative resolvers to override the default resolver: `emailLocalPartMatchingUserEntityName` ## The authentication provider tries each sign-in resolver until it succeeds, and fails if none succeed. Uncomment the resolvers that you want to use. # signIn: # resolvers: # - resolver: preferredUsernameMatchingUserEntityName # - resolver: emailMatchingUserEntityProfileEmail # - resolver: emailLocalPartMatchingUserEntityName signInPage: oidc", "auth: environment: production providers: oidc: production: clientId: USD{AUTH_KEYCLOAK_CLIENT_ID} clientSecret: USD{AUTH_KEYCLOAK_CLIENT_SECRET} metadataUrl: USD{KEYCLOAK_BASE_URL}/auth/realms/USD{KEYCLOAK_REALM} prompt: USD{KEYCLOAK_PROMPT} # recommended to use auto Uncomment for additional configuration options #callbackUrl: USD{KEYCLOAK_CALLBACK_URL} #tokenEndpointAuthMethod: USD{KEYCLOAK_TOKEN_ENDPOINT_METHOD} #tokenSignedResponseAlg: USD{KEYCLOAK_SIGNED_RESPONSE_ALG} #scope: USD{KEYCLOAK_SCOPE} If you are using the keycloak-backend plugin, use the preferredUsernameMatchingUserEntityName resolver to avoid a login error. signIn: resolvers: - resolver: preferredUsernameMatchingUserEntityName signInPage: oidc", "auth: environment: production session: secret: USD{SESSION_SECRET} providers: oauth2Proxy: {} signInPage: oauth2Proxy", "auth: environment: production session: secret: USD{SESSION_SECRET} providers: oidc: production: metadataUrl: USD{KEYCLOAK_METADATA_URL} clientId: USD{KEYCLOAK_CLIENT_ID} clientSecret: USD{KEYCLOAK_CLIENT_SECRET} prompt: USD{KEYCLOAK_PROMPT} # recommended to use auto signInPage: oidc", "service: ports: name: http-backend backend: 4180 targetPort: oauth2Proxy", "service: ports: name: http-backend backend: 7007 targetPort: backend" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/authentication/assembly-auth-provider-oidc
Chapter 2. Testing Camel K locally and on cloud infrastructure
Chapter 2. Testing Camel K locally and on cloud infrastructure This chapter describes the steps to test a Camel K integration with YAKS, both locally and on the cloud infrastructure (Kubernetes platform). Section 2.1, "Testing Camel K with YAKS" Section 2.1.6, "Apache Camel K steps" Section 2.1.7, "Kamelet steps" Section 2.1.8, "Pipe steps" 2.1. Testing Camel K with YAKS 2.1.1. What is YAKS? YAKS is an Open Source test automation platform that leverages Behavior Driven Development concepts for running tests locally and on Cloud infrastructure (For example: Kubernetes or OpenShift ). This means that the testing tool is able to run your tests both as local tests and natively on Kubernetes. The framework is specifically designed to verify Serverless and Microservice applications and aims at integration testing, with the application under test up and running in a production-like environment. A typical YAKS test uses the same infrastructure as the application under test and exchanges data/events over different messaging transports (For example: Http REST, Knative eventing, Kafka, JMS and many more). As YAKS itself is written in Java, the runtime uses a Java virtual machine with build tools such as Maven and integrates with well known Java testing frameworks such as JUnit , Cucumber and Citrus to run the tests. 2.1.2. Understanding the Camel K example Here is a sample Camel K integration that we want to test in the following sections. The integration exposes an Http service to the user. The service accepts client Http POST requests that add fruit model objects. The Camel K route applies content-based routing to store the fruits in different AWS S3 buckets. In the test scenario, YAKS invokes the Camel K service and verifies that the message content has been sent to the right AWS S3 bucket. Here is a sample fruit model object that is to be stored in AWS S3: { "id": 1000, "name": "Pineapple", "category":{ "id": "1", "name":"tropical" }, "nutrition":{ "calories": 50, "sugar": 9 }, "status": "AVAILABLE", "price": 1.59, "tags": ["sweet"] } Here is the Camel K integration route: from('platform-http:/fruits') .log('received fruit USD{body}') .unmarshal().json() .removeHeaders("*") .setHeader("CamelAwsS3Key", constant("fruit.json")) .choice() .when().simple('USD{body[nutrition][sugar]} <= 5') .setHeader("CamelAwsS3BucketName", constant("low-sugar")) .when().simple('USD{body[nutrition][sugar]} > 5 && USD{body[nutrition][sugar]} <= 10') .setHeader("CamelAwsS3BucketName", constant("medium-sugar")) .otherwise() .setHeader("CamelAwsS3BucketName", constant("high-sugar")) .end() .marshal().json() .log('sending USD{body}') .to("aws2-s3://noop?USDparameters") The route uses the content-based routing EIP (Enterprise Integration Pattern) based on the nutrition sugar rating of a given fruit in order to send the fruits to different AWS S3 buckets (low-sugar, medium-sugar, high-sugar). In the following, the test case for this integration needs to invoke the exposed service with different fruits and verify the outcome on AWS S3. 2.1.3. How to test locally with YAKS In the beginning, let us just write the test and run it locally. For now, we do not care how to deploy the application under test on the Cloud infrastructure, as everything is running on the local machine using JBang . JBang is a fantastic way to just start coding and running Java code and also Camel K integrations (also, see this former blog post about how JBang integrates with Camel K ).
YAKS as a framework brings a set of ready-to-use domain specific languages (XML, YAML, Groovy, BDD Cucumber steps) for writing tests to verify your deployed services. This post uses the Behavior Driven Development integration via Cucumber. So the YAKS test is a single feature file that uses BDD Gherkin syntax like this: Feature: Camel K Fruit Store Background: Given URL: http://localhost:8080 Scenario: Create infrastructure # Start AWS S3 container Given Enable service S3 Given start LocalStack container # Create Camel K integration Given Camel K integration property file aws-s3-credentials.properties When load Camel K integration fruit-service.groovy Then Camel K integration fruit-service should print Started route1 (platform-http:///fruits) Scenario: Verify fruit service # Invoke Camel K service Given HTTP request body: yaks:readFile('pineapple.json') And HTTP request header Content-Type="application/json" When send POST /fruits Then receive HTTP 200 OK # Verify uploaded S3 file Given New global Camel context Given load to Camel registry amazonS3Client.groovy Given Camel exchange message header CamelAwsS3Key="fruit.json" Given receive Camel exchange from("aws2-s3://medium-sugar?amazonS3Client=#amazonS3Client&deleteAfterRead=true") with body: yaks:readFile('pineapple.json') Let us walk through the test step by step. Firstly, the feature file uses the usual Given-When-Then BDD syntax to give context, describe the actions and verify the outcome. Each step calls a specific YAKS action that is provided out of the box by the framework. The user is able to choose from a huge set of steps that automatically perform actions like sending/receiving Http requests/responses, starting Testcontainers , running Camel routes, connecting to a database, publishing events on Kafka or Knative brokers and many more. In the first scenario the test automatically prepares some required infrastructure. The YAKS test starts a Localstack Testcontainer to have an AWS S3 test instance running ( Given start LocalStack container ). Then the test loads and starts the Camel K integration under test ( When load Camel K integration fruit-service.groovy ) and waits for it to properly start. In local testing this step starts the Camel K integration using JBang. Later the post will also run the test in a Kubernetes environment. Now the infrastructure is up and running and the test is able to load the fruit model object as Http request body ( Given HTTP request body: yaks:readFile('pineapple.json') ) and invoke the Camel K service ( When send POST /fruits ). The test waits for the Http response and verifies its 200 OK status. In the last step, the test verifies that the fruit object has been added to the right AWS S3 bucket (medium-sugar). As YAKS itself is not able to connect to AWS S3 the test uses Apache Camel for this step. The test creates a Camel context, loads a AWS client and connects to AWS S3 with a temporary Camel route ( Given receive Camel exchange from("aws2-s3://medium-sugar?amazonS3Client=#amazonS3Client&deleteAfterRead=true") ). With this Apache Camel integration YAKS is able to use the complete 300+ Camel components for sending and receiving messages to various messaging transports. The Camel exchange body must be the same fruit model object ( yaks:readFile('pineapple.json' ) as posted in the initial Http request. YAKS uses the powerful message payload validation capabilities provided by Citrus for this message content verification. 
The validation is able to compare message contents of type XML, Json, plaintext and many more. This completes the test case. You can now run this test with Cucumber and JUnit for instance. The easiest way though to directly run tests with YAKS is to use the YAKS command line client . You need not set up a whole project with Maven dependencies and so on. Just write the test file and run with: USD yaks run fruit-service.feature --local You should see some log output like this: 2.1.4. Running YAKS in the Cloud YAKS is able to run tests both locally and as part of a Kubernetes cluster. When running tests on Cloud infrastructure YAKS leverages the Operator SDK and provides a specific operator to manage the test case resources on the cluster. Each time you declare a test in the form of a custom resource , the YAKS operator automatically takes care of preparing the proper runtime in order to execute the test as a Kubernetes Pod. Why would you want to run tests as Cloud-native resources on the Kubernetes platform? Kubernetes has become a standard target platform for Serverless and Microservices architectures. Writing a Serverless or Microservices application for instance with Camel K is very declarative. As a developer you just write the Camel route and run it as an integration via the Camel K operator directly on the cluster. The declarative approach as well as the nature of Serverless applications make us rely on a given runtime infrastructure, and it is essential to verify the applications also on that infrastructure. So it is only natural to also move the verifying tests into this very same Cloud infrastructure. This is why YAKS also brings your tests to the Cloud infrastructure for integration and end-to-end testing. So here is how it works. You are able to run the very same YAKS test that has been run locally also as a Pod in Kubernetes. YAKS provides a Kubernetes operator and a set of CRDs (custom resources) that we need to install on the cluster. The best way to install YAKS is to use the OperatorHub or the yaks CLI tools that you can download from the YAKS GitHub release pages . With the yaks-client binary simply run this install command: USD yaks install This command prepares your Kubernetes cluster for running tests with YAKS. It will take care of installing the YAKS custom resource definitions, setting up role permissions and creating the YAKS operator in a global operator namespace. Important You need to be a cluster admin to install custom resource definitions. The operation needs to be done only once for the entire cluster. Now that the YAKS operator is up and running you can run the very same test from local testing also on the Cloud infrastructure. The only thing that needs to be done is to adjust the Http endpoint URL of the Camel K integration from http://localhost:8080 to http://fruit-service.USD{YAKS_NAMESPACE } USD yaks run fruit-service.feature Note We have just skipped the --local CLI option. Instead of using local JBang tooling to run the test locally now the YAKS CLI connects to the Kubernetes cluster to create the test as a custom resource. From there the YAKS operator takes over preparing the test runtime and running the test as a Pod. The test did prepare some infrastructure, in particular the Camel K integration and the AWS S3 Localstack Testcontainer instance. How does that work inside Kubernetes? YAKS completely takes care of it. The Camel K integration is run with the Camel K operator running on the same Kubernetes cluster. 
And the Testcontainer AWS S3 instance is automatically run as a Pod in Kubernetes. Even connection settings are handled automatically. It just works! When running the test remotely, you see similar test log output, and the test performs its actions and its validation exactly the same way as locally. You can also review the test Pod outcome with: USD yaks ls This is an example output you should get: 2.1.5. Demonstration The whole demo code is available on this GitHub repository . It also shows how to integrate the tests into a GitHub CI actions workflow , so you can run the tests automatically with every code change. 2.1.6. Apache Camel K steps Apache Camel K is a lightweight integration framework built from Apache Camel that runs natively on Kubernetes and is specifically designed for serverless and microservice architectures. Users of Camel K can instantly run integration code written in Camel DSL on their preferred cloud (Kubernetes or OpenShift). If the subject under test is a Camel K integration, you can leverage the YAKS Camel K bindings that provide useful steps for managing Camel K integrations. Working with Camel K integrations Given create Camel K integration helloworld.groovy """ from('timer:tick?period=1000') .setBody().constant('Hello world from Camel K!') .to('log:info') """ Given Camel K integration helloworld is running Then Camel K integration helloworld should print Hello world from Camel K! The YAKS framework provides the Camel K extension library by default. You can create a new Camel K integration and check the status of the integration (e.g. running). The following sections describe the available Camel K steps in detail. 2.1.6.1. API version The default Camel K API version used to create and manage resources is v1 . You can overwrite this version with an environment variable set on the YAKS configuration. Overwrite Camel K API version YAKS_CAMELK_API_VERSION=v1alpha1 This sets the Camel K API version for all operations. 2.1.6.2. Create Camel K integrations @Given("^(?:create|new) Camel K integration {name}.{type}USD") Given create Camel K integration {name}.groovy """ <<Camel DSL>> """ Creates a new Camel K integration with the specified route DSL. The integration is automatically started and can be referenced with its {name} in other steps. @Given("^(?:create|new) Camel K integration {name}.{type} with configuration:USD") Given create Camel K integration {name}.groovy with configuration: | dependencies | mvn:org.foo:foo:1.0,mvn:org.bar:bar:0.9 | | traits | quarkus.native=true,quarkus.enabled=true,route.enabled=true | | properties | foo.key=value,bar.key=value | | source | <<Camel DSL>> | You can add optional configurations to the Camel K integration such as dependencies, traits and properties. Source The route DSL as source for the Camel K integration. Dependencies List of Maven coordinates that will be added to the integration runtime as a library. Traits List of trait configurations that will be added to the integration spec. Each trait configuration value must be in the format traitname.key=value . Properties List of property bindings added to the integration. Each value must be in the format key=value . 2.1.6.3. Load Camel K integrations @Given("^load Camel K integration {name}.{type}USD") Given load Camel K integration {name}.groovy Loads the file {name}.groovy as a Camel K integration. 2.1.6.4. Delete Camel K integrations @Given("^delete Camel K integration {name}USD") Given delete Camel K integration {name} Deletes the Camel K integration with the given {name} .
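For orientation, the create and load steps above ultimately result in a Camel K Integration custom resource on the cluster. A rough sketch of such a resource for the helloworld example is shown below; the dependency entry is illustrative, since the operator normally infers dependencies from the route DSL.

apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: helloworld
spec:
  dependencies:
    - camel:log                             # illustrative; usually inferred automatically from the DSL
  sources:
    - name: helloworld.groovy
      content: |
        from('timer:tick?period=1000')
          .setBody().constant('Hello world from Camel K!')
          .to('log:info')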
2.1.6.5. Verify integration state A Camel K integration is run in a normal Kubernetes pod. The pod has a state and is in a phase (e.g. running, stopped). You can verify the state with an expectation. @Given("^Camel K integration {name} is running/stoppedUSD") Given Camel K integration {name} is running Checks that the Camel K integration with the given {name} is in state running and that the number of replicas is > 0. The step polls the state of the integration for a given number of attempts with a given delay between attempts. You can adjust the polling settings with: @Given Camel K resource polling configuration Given Camel K resource polling configuration | maxAttempts | 10 | | delayBetweenAttempts | 1000 | 2.1.6.6. Watch Camel K integration logs @Given("^Camel K integration {name} should print (.*)USD") Given Camel K integration {name} should print {log-message} Watches the log output of a Camel K integration and waits for the given {log-message} to be present in the logs. The step polls the logs for a given amount of time. You can adjust the polling configuration with: @Given Camel K resource polling configuration Given Camel K resource polling configuration | maxAttempts | 10 | | delayBetweenAttempts | 1000 | You can also wait for a log message to not be present in the output. Just use this step: @Given("^Camel K integration {name} should not print (.*)USD") Given Camel K integration {name} should not print {log-message} You can enable YAKS to print the logs to the test log output while the test is running. The logging can be enabled/disabled with the environment variable setting YAKS_CAMELK_PRINT_POD_LOGS=true/false . To see the log output in the test console logging, you must set the logging level to INFO (for example: in yaks-config.yaml ). config: runtime: settings: loggers: - name: INTEGRATION_STATUS level: INFO - name: INTEGRATION_LOGS level: INFO 2.1.6.7. Manage Camel K resources The Camel K steps are able to create resources such as integrations. By default these resources get removed automatically after the test scenario. The auto removal of Camel K resources can be turned off with the following step. @Given("^Disable auto removal of Camel K resourcesUSD") Given Disable auto removal of Camel K resources Usually this step is a Background step for all scenarios in a feature file. This way multiple scenarios can work on the very same Camel K resources and share integrations. There is also a separate step to explicitly enable the auto removal. @Given("^Enable auto removal of Camel K resourcesUSD") Given Enable auto removal of Camel K resources By default, all Camel K resources are automatically removed after each scenario. 2.1.6.7.1. Enable and disable auto removal using environment variable You can enable/disable the auto removal via environment variable settings. The environment variable is called YAKS_CAMELK_AUTO_REMOVE_RESOURCES=true/false and must be set in the yaks-config.yaml test configuration for all tests in the test suite. There is also a system property called yaks.camelk.auto.remove.resources=true/false that you must set in the yaks.properties file. 2.1.7. Kamelet steps Kamelets are a form of predefined Camel route templates implemented in Camel K. Usually a Kamelet encapsulates a certain functionality (e.g. send messages to an endpoint). Additionally, Kamelets define a set of properties that the user needs to provide when using the Kamelet. YAKS provides steps to manage Kamelets. 2.1.7.1. API version The default Kamelet API version used to create and manage resources is v1 .
You can overwrite this version with an environment variable set on the YAKS configuration, for example to use v1alpha1 instead. Overwrite Kamelet API version YAKS_KAMELET_API_VERSION=v1 This sets the Kamelet API version for all operations. 2.1.7.2. Create Kamelets A Kamelet defines a set of properties and specifications that you can set with separate steps in your feature. Each of the following steps sets a specific property on the Kamelet. Once you are done with the Kamelet specification, you are able to create the Kamelet in the current namespace. Firstly, you can specify the media type of the available slots (in, out and error) in the Kamelet. @Given("^Kamelet dataType (in|out|error)(?:=| is )\"{mediaType}\"USD") Given Kamelet dataType in="{mediaType}" The Kamelet can use a title that you set with the following step. @Given("^Kamelet title \"{title}\"USD") Given Kamelet title "{title}" Each template uses an endpoint uri and defines a set of steps that get called when the Kamelet processing takes place. The following step defines a template on the current Kamelet. @Given("^Kamelet templateUSD") Given Kamelet template """ from: uri: timer:tick parameters: period: "#property:period" steps: - set-body: constant: "{{message}}" - to: "kamelet:sink" """ The template uses two properties {{message}} and {{period}} . These placeholders need to be provided by the Kamelet user. The next step defines the property message in detail: @Given("^Kamelet property definition {name}USD") Given Kamelet property definition message | type | string | | required | true | | example | "hello world" | | default | "hello" | The property receives a specification such as type, required and an example. In addition to the example you can set a default value for the property. In addition to using a template on the Kamelet, you can add multiple sources to the Kamelet. @Given("^Kamelet source {name}.{language}USD") Given Kamelet source timer.yaml """ <<YAML>> """ The steps have defined all properties and Kamelet specifications, so now you are ready to create the Kamelet in the current namespace. @Given("^(?:create|new) Kamelet {name}USD") Given create Kamelet {name} The Kamelet requires a unique name . Creating a Kamelet means that a new custom resource of type Kamelet is created. As a variation, you can also set the template when creating the Kamelet. @Given("^(?:create|new) Kamelet {name} with template") Given create Kamelet {name} with template """ <<YAML>> """ This creates the Kamelet in the current namespace. 2.1.7.3. Load Kamelets You can create new Kamelets by giving the complete specification in an external YAML file. The step loads the file content and creates the Kamelet in the current namespace. @Given("^load Kamelet {name}.kamelet.yamlUSD") Given load Kamelet {name}.kamelet.yaml Loads the file {name}.kamelet.yaml as a Kamelet. At the moment, only the kamelet.yaml source file extension is supported. 2.1.7.4. Delete Kamelets @Given("^delete Kamelet {name}USD") Given delete Kamelet {name} Deletes the Kamelet with the given {name} from the current namespace. 2.1.7.5. Verify Kamelet is available @Given("^Kamelet {name} is availableUSDUSD") Given Kamelet {name} is availableUSD Verifies that the Kamelet custom resource is available in the current namespace. 2.1.8. Pipe steps You can bind a Kamelet as a source to a sink. This concept is described with Pipes. YAKS as a framework is able to create and verify Pipes in combination with Kamelets.
Note Pipes are available since API version v1 in Camel K. YAKS also supports KameletBinding resources that represent the v1alpha1 equivalent to Pipes. So in case you need to work with KameletBindings you need to explicitly set the Kamelet API version to v1alpha1 (For example; via environment variable settings YAKS_KAMELET_API_VERSION ). 2.1.8.1. Create Pipes YAKS provides multiple steps that bind a Kamelet source to a sink. The pipe is going to forward all messages processed by the source to the sink. 2.1.8.1.1. Bind to Http URI @Given("^bind Kamelet {kamelet} to uri {uri}USD") Given bind Kamelet {name} to uri {uri} This defines the Pipe with the given Kamelet name as source to the given Http URI as a sink. 2.1.8.1.2. Bind to Kafka topic You can bind a Kamelet source to a Kafka topic sink. All messages will be forwarded to the topic. @Given("^bind Kamelet {kamelet} to Kafka topic {topic}USD") Given bind Kamelet {kamelet} to Kafka topic {topic} 2.1.8.1.3. Bind to Knative channel Channels are part of the eventing in Knative. Similar to topics in Kafka the channels hold messages for subscribers. @Given("^bind Kamelet {kamelet} to Knative channel {channel}USD") Given bind Kamelet {kamelet} to Knative channel {channel} Channels can be backed with different implementations. You can explicitly set the channel type to use in the pipe. @Given("^bind Kamelet {kamelet} to Knative channel {channel} of kind {kind}USD") Given bind Kamelet {kamelet} to Knative channel {channel} of kind {kind} 2.1.8.1.4. Specify source/sink properties The Pipe may need to specify properties for source and sink. These properties are defined in the Kamelet source specifications for instance. You can set properties with values in the following step: @Given("^Pipe source propertiesUSD") Given Pipe source properties | {property} | {value} | The Kamelet source that we have used in the examples above has defined a property message . So you can set the property on the pipe as follows. Given Pipe source properties | message | "Hello world" | The same approach applies to sink properties. @Given("^Pipe sink propertiesUSD") Given Pipe sink properties | {property} | {value} | 2.1.8.1.5. Create the pipe The steps have defined source and sink of the Pipe specification. Now you are ready to create the Pipe in the current namespace. @Given("^(?:create|new) Pipe {name}USD") Given create Pipe {name} The Pipe receives a unique name and uses the previously specified source and sink. Creating a Pipe means that a new custom resource of type Pipe is created in the current namespace. 2.1.8.2. Load Pipes You can create new Pipes by giving the complete specification in an external YAML file. The step loads the file content and creates the Pipe in the current namespace. @Given("^load Pipe {name}.yamlUSD") Given load Pipe {name}.yaml Loads the file {name}.yaml as a Pipe. At the moment YAKS only supports .yaml source files. 2.1.8.3. Delete Pipes @Given("^delete Pipe {name}USD") Given delete Pipe {name} Deletes the Pipe with given {name} from the current namespace. 2.1.8.4. Verify Pipe is available @Given("^Pipe {name} is availableUSDUSD") Given Pipe {name} is availableUSD Verifies that the Pipe custom resource is available in the current namespace. 2.1.8.5. Manage Kamelet and Pipe resources The described steps are able to create Kamelet resources on the current Kubernetes namespace. By default these resources get removed automatically after the test scenario. The auto removal of Kamelet resources can be turned off with the following step. 
@Given("^Disable auto removal of Kamelet resourcesUSD") Given Disable auto removal of Kamelet resources Usually this step is a Background step for all scenarios in a feature file. This way multiple scenarios can work on the very same Kamelet resources and share integrations. There is also a separate step to explicitly enable the auto removal. @Given("^Enable auto removal of Kamelet resourcesUSD") Given Enable auto removal of Kamelet resources By default, all Kamelet resources are automatically removed after each scenario. 2.1.8.5.1. Enable and disable auto removal using environment variable You can enable/disable the auto removal via environment variable settings. The environment variable is called YAKS_CAMELK_AUTO_REMOVE_RESOURCES=true/false and must be set on the yaks-config.yaml test configuration for all tests in the test suite. There is also a system property called yaks.camelk.auto.remove.resources=true/false that you must set in the yaks.properties file.
[ "{ \"id\": 1000, \"name\": \"Pineapple\", \"category\":{ \"id\": \"1\", \"name\":\"tropical\" }, \"nutrition\":{ \"calories\": 50, \"sugar\": 9 }, \"status\": \"AVAILABLE\", \"price\": 1.59, \"tags\": [\"sweet\"] }", "from('platform-http:/fruits') .log('received fruit USD{body}') .unmarshal().json() .removeHeaders(\"*\") .setHeader(\"CamelAwsS3Key\", constant(\"fruit.json\")) .choice() .when().simple('USD{body[nutrition][sugar]} <= 5') .setHeader(\"CamelAwsS3BucketName\", constant(\"low-sugar\")) .when().simple('USD{body[nutrition][sugar]} > 5 && USD{body[nutrition][sugar]} <= 10') .setHeader(\"CamelAwsS3BucketName\", constant(\"medium-sugar\")) .otherwise() .setHeader(\"CamelAwsS3BucketName\", constant(\"high-sugar\")) .end() .marshal().json() .log('sending USD{body}') .to(\"aws2-s3://noop?USDparameters\")", "Feature: Camel K Fruit Store Background: Given URL: http://localhost:8080 Scenario: Create infrastructure # Start AWS S3 container Given Enable service S3 Given start LocalStack container # Create Camel K integration Given Camel K integration property file aws-s3-credentials.properties When load Camel K integration fruit-service.groovy Then Camel K integration fruit-service should print Started route1 (platform-http:///fruits) Scenario: Verify fruit service # Invoke Camel K service Given HTTP request body: yaks:readFile('pineapple.json') And HTTP request header Content-Type=\"application/json\" When send POST /fruits Then receive HTTP 200 OK # Verify uploaded S3 file Given New global Camel context Given load to Camel registry amazonS3Client.groovy Given Camel exchange message header CamelAwsS3Key=\"fruit.json\" Given receive Camel exchange from(\"aws2-s3://medium-sugar?amazonS3Client=#amazonS3Client&deleteAfterRead=true\") with body: yaks:readFile('pineapple.json')", "yaks run fruit-service.feature --local", "INFO | INFO | ------------------------------------------------------------------------ INFO | .__ __ INFO | ____ |__|/ |________ __ __ ______ INFO | _/ ___\\| \\ __\\_ __ \\ | \\/ ___/ INFO | \\ \\___| || | | | \\/ | /\\___ INFO | \\___ >__||__| |__| |____//____ > INFO | \\/ \\/ INFO | INFO | C I T R U S T E S T S 3.4.0 INFO | INFO | ------------------------------------------------------------------------ INFO | Scenario: Create infrastructure # fruit-service.feature:6 Given URL: http://localhost:8080 Given Enable service S3 [...] Scenario: Verify fruit service # fruit-service.feature:20 Given URL: http://localhost:8080 Given HTTP request body: yaks:readFile('pineapple.json') [...] Scenario: Remove infrastructure # fruit-service.feature:31 Given URL: http://localhost:8080 Given delete Camel K integration fruit-service Given stop LocalStack container 3 Scenarios (3 passed) 18 Steps (18 passed) 0m18,051s INFO | ------------------------------------------------------------------------ INFO | INFO | CITRUS TEST RESULTS INFO | INFO | Create infrastructure .......................................... SUCCESS INFO | Verify fruit service ........................................... SUCCESS INFO | Remove infrastructure .......................................... 
SUCCESS INFO | INFO | TOTAL: 3 INFO | FAILED: 0 (0.0%) INFO | SUCCESS: 3 (100.0%) INFO | INFO | ------------------------------------------------------------------------ 3 Scenarios (3 passed) 18 Steps (18 passed) 0m18,051s Test results: Total: 0, Passed: 1, Failed: 0, Errors: 0, Skipped: 0 fruit-service (fruit-service.feature): Passed", "yaks install", "yaks run fruit-service.feature", "yaks ls", "NAME PHASE TOTAL PASSED FAILED SKIPPED ERRORS fruit-service Passed 3 3 0 0 0", "Given create Camel K integration helloworld.groovy \"\"\" from('timer:tick?period=1000') .setBody().constant('Hello world from Camel K!') .to('log:info') \"\"\" Given Camel K integration helloworld is running Then Camel K integration helloworld should print Hello world from Camel K!", "YAKS_CAMELK_API_VERSION=v1alpha1", "Given create Camel K integration {name}.groovy \"\"\" <<Camel DSL>> \"\"\"", "Given create Camel K integration {name}.groovy with configuration: | dependencies | mvn:org.foo:foo:1.0,mvn:org.bar:bar:0.9 | | traits | quarkus.native=true,quarkus.enabled=true,route.enabled=true | | properties | foo.key=value,bar.key=value | | source | <<Camel DSL>> |", "Given load Camel K integration {name}.groovy", "Given delete Camel K integration {name}", "Given Camel K integration {name} is running", "Given Camel K resource polling configuration | maxAttempts | 10 | | delayBetweenAttempts | 1000 |", "Given Camel K integration {name} should print {log-message}", "Given Camel K resource polling configuration | maxAttempts | 10 | | delayBetweenAttempts | 1000 |", "Given Camel K integration {name} should not print {log-message}", "Given Disable auto removal of Camel K resources", "Given Enable auto removal of Camel K resources", "YAKS_KAMELET_API_VERSION=v1", "Given Kamelet dataType in=\"{mediaType}\"", "Given Kamelet title \"{title}\"", "Given Kamelet template \"\"\" from: uri: timer:tick parameters: period: \"#property:period\" steps: - set-body: constant: \"{{message}}\" - to: \"kamelet:sink\" \"\"\"", "Given Kamelet property definition message | type | string | | required | true | | example | \"hello world\" | | default | \"hello\" |", "Given Kamelet source timer.yaml \"\"\" <<YAML>> \"\"\"", "Given create Kamelet {name}", "Given create Kamelet {name} with template \"\"\" <<YAML>> \"\"\"", "Given load Kamelet {name}.kamelet.yaml", "Given delete Kamelet {name}", "Given Kamelet {name} is availableUSD", "Given bind Kamelet {name} to uri {uri}", "Given bind Kamelet {kamelet} to Kafka topic {topic}", "Given bind Kamelet {kamelet} to Knative channel {channel}", "Given bind Kamelet {kamelet} to Knative channel {channel} of kind {kind}", "Given Pipe source properties | {property} | {value} |", "Given Pipe source properties | message | \"Hello world\" |", "Given Pipe sink properties | {property} | {value} |", "Given create Pipe {name}", "Given load Pipe {name}.yaml", "Given delete Pipe {name}", "Given Pipe {name} is availableUSD", "Given Disable auto removal of Kamelet resources", "Given Enable auto removal of Kamelet resources" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/testing_guide_camel_k/testing-camel-k-locally-and-on-cloud
Chapter 1. Migration Toolkit for Virtualization 2.7
Chapter 1. Migration Toolkit for Virtualization 2.7 You can use the Migration Toolkit for Virtualization (MTV) to migrate virtual machines from the following source providers to OpenShift Virtualization destination providers: VMware vSphere versions 6, 7, and 8 Red Hat Virtualization (RHV) OpenStack Open Virtual Appliances (OVAs) that were created by VMware vSphere Remote OpenShift Virtualization clusters The release notes describe technical changes, new features and enhancements, known issues, and resolved issues. 1.1. Technical changes Migration Toolkit for Virtualization (MTV) 2.7 has the following technical changes: Upgraded virt-v2v to RHEL9 for warm migrations MTV previously used virt-v2v from Red Hat Enterprise Linux (RHEL) 8, which does not include bug fixes and features that are available in virt-v2v in RHEL9. In MTV 2.7.0, components are updated to RHEL 9 in order to improve the functionality of warm migration. (MTV-1152) 1.2. New features and enhancements Migration Toolkit for Virtualization (MTV) 2.7 introduces the following features and enhancements: In MTV 2.7.0, warm migration is now based on RHEL 9 inheriting features and bug fixes. In MTV 2.7.10, you can specify the plan.spec.diskBus which will be on all disks during the creation. Possible options are: SCSI SATA VirtIO Note Virtual machines (VMs) that are migrated in migration plans by using the diskBus:scsi option fail to boot after migration as follows: All Windows VMs fail to boot. Some Linux VMs fail to boot. VMs that use the plan.spec.diskBus:sata and plan.spec.diskBus:virtio options successfully boot after migration. In MTV 2.7.11, you can now migrate shared disks from VMware vSphere by using a new parameter named migrateSharedDisks in Plan CRs. The parameter can be set to true or false : When set to true, MTV migrates all shared disks in the CR. When set to false , MTV does not migrate the shared disks. Note migrateSharedDisks applies only to cold migrations. Warm migration of shared disks is not supported. 1.3. Resolved issues Migration Toolkit for Virtualization (MTV) 2.7 has the following resolved issues: 1.3.1. Resolved issues 2.7.11 Main controller container of forklift controller crashes during VMware migrations In earlier releases of MTV, the main controller container of forklift-container crashed during VMware migrations because users created snapshots of the virtual machines (VMs) during migrations. As a result, some VMs migrated, but others failed to migrate. This issue has been resolved in MTV 2.7.11, but users are cautioned not to create snapshots of VMs during migrations. (MTV-2164) 1.3.2. Resolved issues 2.7.10 OVN secondary network not working with Multus default network override In earlier releases of MTV, the OVN secondary network was not working with Multus default network override. This issue caused the importer pod to become stuck, as the importer pod was created when there were multiple networks and the wrong default network override annotation was configured. This issue has been resolved in MTV 2.7.10. (MTV-1645) Retain IP field description misleading to customers In earlier releases of MTV, it was difficult to understand the user interface relating to the Preserve IPs VM setting in the chosen setting mode. This issue has been resolved in MTV 2.7.10. 
(MTV-1743) Persistent TPM is always added to the Windows 2022 VM after conversion In earlier releases of MTV, when migrating a Windows Server 2022 VMware virtual machine (VM) to OpenShift Virtualization, the VMI had the Trusted Platform Module (TPM) module configured, even though the module was not configured in the original VM. This issue has been resolved in MTV 2.7.10. (MTV-1939) 1.3.3. Resolved issues 2.7.9 Forklift controller panic in the inventory In earlier releases of MTV, the forklift-contoller inventory container would panic, and the VM would not be migrated. This issue has been resolved in MTV 2.7.9. (MTV-1610) vCenter sessions keep on increasing during migration plan execution In earlier releases of MTV, when migrating virtual machines (VMs) from VMware vSphere 7 to OpenShift Virtualization, vCenter sessions kept increasing. This issue has been resolved in MTV 2.7.9. (MTV-1929) 1.3.4. Resolved issues 2.7.8 MTV not handling OVA NFS URL if it does not contain : char In earlier releases of MTV, the character : in the NFS URL of OVAs was not handled, which created malformed URLs. This issue has been resolved in MTV 2.7.8. (MTV-1856) Changing the host migration network returns an unexpected error In earlier releases of MTV, when changing the network from VMkernel to Management network , the migration network changed. However, an error message was returned that the request could not be completed due to an incorrect user name or password . This issue has been resolved in MTV 2.7.8. (MTV-1862) Migrating RHEL VMs with LUKS disk fail at DiskTransferV2v stage In earlier releases of MTV, migrating Red Hat Enterprise Linux (RHEL) virtual machines (VMs) with disks encrypted with Linux Unified Key Setup (LUKS) enabled from VMWare fails at the DiskTransferV2v stage. This issue has been resolved in MTV 2.7.8. (MTV-1864) Q35 machine type is hard-coded during import from vSphere In earlier releases of MTV, VMs imported from vSphere always had the Q35 machine type set. However, the machine type is floating , and thus the VM application binary interface (ABI) can change between reboots. This issue has been resolved in MTV 2.7.8. (MTV-1865) memory.requests and memory.guest are set during import from VMware In earlier releases of MTV, MTV imported VMs with the requests.memory field and the memory.guest field set. This was problematic as it prevented memory overcommit, memory hot-plug, and caused unnecessary memory pressure on VMs that could lead to out-of-memory (OOM) errors. This issue has been resolved in MTV 2.7.8. (MTV-1866) virt-v2v failure when converting dual-boot VM In earlier releases of MTV, there was an issue when attempting the migration of a VM with two disks with two different operating systems installed in a dual-boot configuration. This issue has been resolved in MTV 2.7.8. (MTV-1877) 1.3.5. Resolved issues 2.7.7 MTV Controller waiting for snapshot creation In earlier releases of MTV, when performing a warm migration using 200 VMs, the MTV Controller could pause the migration during snapshot creation. This issue has been resolved in MTV 2.7.7. (MTV-1775) Extended delay in time taken for VMs to start migration In earlier releases of MTV, in some situations there was an extended delay before all VMs started to migrate. This issue has been resolved in MTV 2.7.7. (MTV-1774) Warm migration plan with multi-VMs from ESXi host provider fails in cutover phase In earlier releases of MTV, the migration plan with multi-VMs from an ESXi host provider failed in the cutover phase. 
This issue has been resolved in MTV 2.7.7. (MTV-1753) SecureBoot enabled VM in migrated with SecureBoot disabled In earlier releases of MTV, when migrating a virtual machine (VM) with Secure Boot enabled , the VM had Secure Boot disabled after being migrated. This issue has been resolved in MTV 2.7.7. (MTV-1632) VDDK validator fails to launch in environments with quota set In earlier releases of MTV, after creating a migration plan from a VMware provider, the VDDK validation failed due to a LimitRange not being provided to add requests and limits to any container that do not define them. This issue has been resolved in MTV 2.7.7, with MTV setting limits by default to make the migration plan work out of the box. (MTV-1493) 1.3.6. Resolved issues 2.7.6 Warning if preserve static IP is mapped to pod network In earlier releases of MTV, there was no warning message for preserving static IPs while using Pod networking. This issue has been resolved in MTV 2.7.6. (MTV-1503) Schedule the cutover for an archived plan is not allowed In earlier releases of MTV, the option to schedule a cutover for an archived plan was currently available in the UI. This issue has been resolved in MTV 2.7.6 with the cutover action disabled for an archived plan. (MTV-1729) button misplaced in Create new plan wizard In earlier releases of MTV, when creating a new Migration Plan with the Create new plan wizard, after filling in the form, the button was misplaced to the left of the Back option. This issue has been resolved in MTV 2.7.6. (MTV-1732) Static IP address is not preserved for Debian-based VMs that use interfaces In earlier releases of MTV, all Debian-based operating systems could have the network configurations in the /etc/network/interfaces , but information was not fetched from these config files when creating the udev rule to set the interface name. This issue has been resolved in MTV 2.7.6. (MTV-1711) Editing of plan settings is enabled for all plan statuses In earlier releases of MTV, VMS were removed from archived or archiving plans. This issue has been resolved in MTV 2.7.6, and if a plan's status is either archiving or archived, the option to remove VMs for that plan is blocked. (MTV-1713) Warm migration fails to complete In earlier releases of MTV, after the first Disk Transfer step set was completed, the cutover was set. However, during the Image Conversion step, not all data volumes were completed, with some of them being stuck in the import in progress phase and 100% progress . This issue has been resolved in MTV 2.7.6. (MTV-1717) 1.3.7. Resolved issues 2.7.5 XFS filesystem corruption after warm migration of VM from VMware In earlier releases of MTV, virtual machines (VM) were presenting XFS filesystem and other data corruption after the warm migration from VMware to OpenShift Virtualization using MTV. This issue has been resolved in MTV 2.7.5. (MTV-1679) Missing VM network-ID in the inventory In earlier releases of MTV, after creating a migration plan for VMs with a NSX-T network attached from vSphere, the VM network mapping was missing, and also adding network mapping could not list NSX-T networks as source networks. This issue has been resolved in MTV 2.7.5. 
(MTV-1695) and (MTV-1140) Failure to create Windows 2019 VM during Cold Migration In earlier releases of MTV, cold migrating a Windows Server 2019 VM from Red Hat Virtualization (RHV) to a remote cluster returned a firmware.bootloader setting error of admission webhook "virtualmachine-validator.kubevirt.io" denied the request during the VirtualMachineCreation phase. This issue has been resolved in MTV 2.7.5. (MTV-1613) PreferredEfi applied when BIOS already enabled within VirtualMachineInstanceSpec In earlier releases of MTV, PreferredUseEfi was applied when the BIOS was already enabled within the VirtualMachineInstanceSpec . In MTV 2.7.5, PreferredEfi is only applied when a user has not provided their own EFI configuration and the BIOS is not enabled. (CNV-49381) 1.3.8. Resolved issues 2.7.4 XFS filesystem corruption after warm migration of VM from VMware In earlier releases of MTV, in some cases, the destination virtual machine (VM) was observed to have XFS filesystem corruption after being migrated from VMware to OpenShift Virtualization using MTV. This issue has been resolved in MTV 2.7.4. (MTV-1656) Error Did not find CDI importer pod for DataVolume is recorded in the forklift-controller logs during the CopyDisks phase In earlier releases of MTV, the forklift-controller incorrectly logged an error Did not find CDI importer pod for DataVolume during the CopyDisks phase. This issue has been resolved in MTV 2.7.4. (MTV-1627) 1.3.9. Resolved issues 2.7.3 Migration plan does not fail when conversion pod fails In earlier releases of MTV, when running the virt-v2v guest conversion, the migration plan did not fail as expected when the conversion pod failed. This issue has been resolved in MTV 2.7.3. (MTV-1569) Large number of VMs in the inventory can cause the inventory controller to panic In earlier releases of MTV, having a large number of virtual machines (VMs) in the inventory could cause the inventory controller to panic and return a concurrent write to websocket connection warning. This issue was caused by the concurrent write to the WebSocket connection and has been addressed by the addition of a lock, so the Go routine waits before sending the response from the server. This issue has been resolved in MTV 2.7.3. (MTV-1220) VM selection disappears when selecting multiple VMs in the Migration Plan In earlier releases of MTV, the VM selection checkbox disappeared after selecting multiple VMs in the Migration Plan. This issue has been resolved in MTV 2.7.3. (MTV-1546) forklift-controller crashing during OVA plan migration In earlier releases of MTV, the forklift-controller would crash during an OVA plan migration, returning a runtime error: invalid memory address or nil pointer dereference panic. This issue has been resolved in MTV 2.7.3. (MTV-1577) 1.3.10. Resolved issues 2.7.2 VMNetworksNotMapped error occurs after creating a plan from the UI with the source provider set to OpenShift Virtualization In earlier releases of MTV, after creating a plan with an OpenShift Virtualization source provider, the Migration Plan failed with the error The plan is not ready - VMNetworksNotMapped . This issue has been resolved in MTV 2.7.2. 
(MTV-1201) Migration Plan for OpenShift Virtualization to OpenShift Virtualization missing the source namespace causing VMNetworkNotMapped error In earlier releases of MTV, when creating a Migration Plan for an OpenShift Virtualization to OpenShift Virtualization migration using the Plan Creation Form, the network map generated was missing the source namespace, which caused a VMNetworkNotMapped error on the plan. This issue has been resolved in MTV 2.7.2. (MTV-1297) DV, PVC, and PV are not cleaned up and removed if the migration plan is Archived and Deleted In earlier releases of MTV, the DataVolume (DV), PersistentVolumeClaim (PVC), and PersistentVolume (PV) continued to exist after the migration plan was archived and deleted. This issue has been resolved in MTV 2.7.2. (MTV-1477) Other migrations are halted from starting as the scheduler is waiting for the complete VM to get transferred In earlier releases of MTV, when warm migrating a virtual machine (VM) that has several disks, you had to wait for the complete VM to get migrated, and the scheduler was halted until all the disks finished before the migration would be started. This issue has been resolved in MTV 2.7.2. (MTV-1537) Warm migration is not functioning as expected In earlier releases of MTV, warm migration did not function as expected. When running the warm migration with VMs larger than the MaxInFlight disks, the VMs over this number did not start the migration until the cutover. This issue has been resolved in MTV 2.7.2. (MTV-1543) Migration hanging due to error: virt-v2v: error: -i libvirt: expecting a libvirt guest name In earlier releases of MTV, when attempting to migrate a VMware VM with a non-compliant Kubernetes name, the OpenShift console returned a warning that the VM would be renamed. However, after starting the Migration Plan, it hung because the migration pod was in an Error state. This issue has been resolved in MTV 2.7.2. (MTV-1555) VMs are not migrated if they have more disks than MAX_VM_INFLIGHT In earlier releases of MTV, when migrating a VM using warm migration, if there were more disks than MAX_VM_INFLIGHT , the VM was not scheduled and the migration was not started. This issue has been resolved in MTV 2.7.2. (MTV-1573) Migration Plan returns an error even when Changed Block Tracking (CBT) is enabled In earlier releases of MTV, when running a VM in VMware, if the CBT flag was enabled while the VM was running by adding both ctkEnabled=TRUE and scsi0:0.ctkEnabled=TRUE parameters, an error message Danger alert:The plan is not ready - VMMissingChangedBlockTracking was returned, and the migration plan was prevented from working. This issue has been resolved in MTV 2.7.2. (MTV-1576) 1.3.11. Resolved issues 2.7.0 Change . to - in the names of VMs that are migrated In earlier releases of MTV, if the name of the virtual machines (VMs) contained . , this was changed to - when they were migrated. This issue has been resolved in MTV 2.7.0. (MTV-1292) Status condition indicating a failed mapping resource in a plan is not added to the plan In earlier releases of MTV, a status condition indicating a failed mapping resource of a plan was not added to the plan. This issue has been resolved in MTV 2.7.0, with a status condition indicating the failed mapping being added. 
(MTV-1461) ifcfg files with HWaddr cause the NIC name to change In earlier releases of MTV, interface configuration (ifcfg) files with a hardware address (HWaddr) of the Ethernet interface caused the name of the network interface controller (NIC) to change. This issue has been resolved in MTV 2.7.0. (MTV-1463) Import fails with special characters in VMX file In earlier releases of MTV, imports failed when there were special characters in the parameters of the VMX file. This issue has been resolved in MTV 2.7.0. (MTV-1472) Observed invalid memory address or nil pointer dereference panic In earlier releases of MTV, an invalid memory address or nil pointer dereference panic was observed, which was caused by a refactor and could be triggered when there was a problem with the inventory pod. This issue has been resolved in MTV 2.7.0. (MTV-1482) Static IPv4 changed after warm migrating win2022/2019 VMs In earlier releases of MTV, the static Internet Protocol version 4 (IPv4) address was changed after a warm migration of Windows Server 2022 and Windows Server 2019 VMs. This issue has been resolved in MTV 2.7.0. (MTV-1491) Warm migration is missing arguments In earlier releases of MTV, virt-v2v-in-place for the warm migration was missing arguments that were available in virt-v2v for the cold migration. This issue has been resolved in MTV 2.7.0. (MTV-1495) Default gateway settings changed after migrating Windows Server 2022 VMs with preserve static IPs In earlier releases of MTV, the default gateway settings were changed after migrating Windows Server 2022 VMs with the preserve static IPs setting. This issue has been resolved in MTV 2.7.0. (MTV-1497) 1.4. Known issues Migration Toolkit for Virtualization (MTV) 2.7 has the following known issues: Select Migration Network from the endpoint type ESXi displays multiple incorrect networks When you choose Select Migration Network from the endpoint type of ESXi , multiple incorrect networks are displayed. (MTV-1291) VMs with Secure Boot enabled might not be migrated automatically Virtual machines (VMs) with Secure Boot enabled currently might not be migrated automatically. This is because Secure Boot, a security standard developed by members of the PC industry to ensure that a device boots using only software that is trusted by the Original Equipment Manufacturer (OEM), would prevent the VMs from booting on the destination provider. Workaround: The current workaround is to disable Secure Boot on the destination. For more details, see Disabling Secure Boot . (MTV-1548) Windows VMs which are using Measured Boot cannot be migrated Microsoft Windows virtual machines (VMs), which are using the Measured Boot feature, cannot be migrated because Measured Boot is a mechanism to prevent any kind of device changes, by checking each start-up component, including the firmware, all the way to the boot driver. The alternative to migration is to re-create the Windows VM directly on OpenShift Virtualization. Migration of a VM with Secure Boot enabled results in a VM with Secure Boot disabled When migrating a virtual machine (VM) with Secure Boot enabled, the VM has Secure Boot disabled after being migrated. This issue has been resolved in MTV 2.7.7. (MTV-1632) OVN secondary network is not functioning as expected with the Multus default network override The secondary network of Open Virtual Network (OVN) does not function as expected with the Multus default network override. 
(MTV-1645) Network and Storage maps in the UI are not correct when created from the command line When creating Network and Storage maps from the command line, the correct names are not shown in the UI. (MTV-1421) Migration fails with module network-legacy configured in RHEL guests Migration fails if the module configuration file is available in the guest and the dhcp-client package is not installed, returning a dracut module 'network-legacy' will not be installed, because command 'dhclient' could not be found error. (MTV-1615) Harden grub2-mkconfig to avoid overwriting /boot/efi/EFI/redhat/grub.cfg In Red Hat Enterprise Linux (RHEL) 9, there has been a significant change in how GRUB is handled on UEFI systems. Previously, the GRUB configuration file, /boot/efi/EFI/redhat/grub.cfg , was used on UEFI systems. However, in RHEL 9, this file now serves as a stub that dynamically redirects to the GRUB configuration, located at /boot/grub2/grub.cfg . This behavior is expected in RHEL 9.4, but has been resolved in RHEL 9.5. (RHEL-32099) Migration fails for VMs backed by vSAN without VDDK Migrations for virtual machines (VMs) that are backed by VMware vSAN must use a Virtual Disk Development Kit (VDDK) image. Without a VDDK image, these migrations fail. (MTV-2203) Migrated VMs that use the diskBus:scsi option fail to boot VMs that are migrated in migration plans by using the plan.spec.diskBus:scsi option fail to boot after migration as follows: All Windows VMs fail to boot. Some Linux VMs fail to boot. VMs that use the plan.spec.diskBus:sata and plan.spec.diskBus:virtio options successfully boot after migration. (MTV-2199)
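The diskBus known issue above suggests a practical workaround: select a disk bus other than scsi in the affected migration plan. The sketch below is only an illustration of that idea, not a documented procedure; it assumes that the Plan custom resource can be addressed as plan with oc and that diskBus is a plain string field directly under spec, as the issue text implies. <plan_name> and <namespace> are placeholders.
# Hedged sketch: move a migration plan off the problematic scsi disk bus.
# Assumes plan.spec.diskBus is a simple string field, as the known issue above suggests.
oc patch plan <plan_name> -n <namespace> --type=merge -p '{"spec":{"diskBus":"virtio"}}'
# Check the value before starting or restarting the migration.
oc get plan <plan_name> -n <namespace> -o jsonpath='{.spec.diskBus}{"\n"}'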
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.7/html/release_notes/rn-27_release-notes
Chapter 19. Filtering assets by tags
Chapter 19. Filtering assets by tags You can apply tags in the metadata of each asset and then group assets by tags in the Project Explorer. This feature helps you quickly search through assets of a specific category. Procedure In Business Central, go to Menu Design Projects and click the project name. Open the asset editor by clicking the asset name. In the asset editor window, go to Overview Metadata . In the Tags field, enter the name of your new tag and click Add new tag(s) . You can assign multiple tags to an asset by separating tag names with a space. Figure 19.1. Creating tags The assigned tags are displayed as buttons next to the Tags field. Figure 19.2. Tags in metadata view Click the trash icon on the tag button to delete the tag. Figure 19.3. Deleting tags in metadata view Click Save to save your metadata changes. Expand the Project Explorer by clicking on the upper-left corner. Click in the Project Explorer toolbar and select Enable Tag filtering . Figure 19.4. Enable tag filtering This displays a Filter by Tag drop-down menu in the Project Explorer. Figure 19.5. Filter by tag You can sort your assets through this filter to display all assets and service tasks that include the selected metadata tag.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/assets_filtering_proc
7.6. Removing Lost Physical Volumes from a Volume Group
7.6. Removing Lost Physical Volumes from a Volume Group If you lose a physical volume, you can activate the remaining physical volumes in the volume group with the --partial argument of the vgchange command. You can remove all the logical volumes that used that physical volume from the volume group with the --removemissing argument of the vgreduce command. It is recommended that you run the vgreduce command with the --test argument to verify what you will be destroying. Like most LVM operations, the vgreduce command is reversible in a sense if you immediately use the vgcfgrestore command to restore the volume group metadata to its previous state. For example, if you used the --removemissing argument of the vgreduce command without the --test argument and find you have removed logical volumes you wanted to keep, you can still replace the physical volume and use another vgcfgrestore command to return the volume group to its previous state.
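A compact command sequence for the recovery path described above might look like the following sketch; vg00 is a placeholder volume group name, so substitute the name of the affected volume group in your environment.
# Activate the remaining physical volumes even though one physical volume is missing.
vgchange -a y --partial vg00
# Dry run first: report which logical volumes --removemissing would remove.
vgreduce --removemissing --test vg00
# If the dry run output is acceptable, remove the missing physical volume for real.
vgreduce --removemissing vg00
# If logical volumes you wanted to keep were removed, restore the saved metadata immediately.
vgcfgrestore vg00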
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lost_PV_remove_from_VG
Chapter 2. Creating Enterprise Bean Projects
Chapter 2. Creating Enterprise Bean Projects 2.1. Create a Jakarta Enterprise Beans Archive Project Using Red Hat CodeReady Studio This task describes how to create a Jakarta Enterprise Beans project in Red Hat CodeReady Studio. Prerequisites A server and server runtime for JBoss EAP have been configured in Red Hat CodeReady Studio. Note If you set the Target runtime to 7.4 or a later runtime version in Red Hat CodeReady Studio, your project is compatible with the Jakarta EE 8 specification. Create a Jakarta Enterprise Beans Project in Red Hat CodeReady Studio Open the New EJB Project wizard. Navigate to the File menu, select New , then select Project . When the New Project wizard appears, select EJB/EJB Project and click . Figure 2.1. New EJB Project Wizard Enter the following details: Project name: The name of the project that appears in Red Hat CodeReady Studio, and also the default file name for the deployed JAR file. Project location: The directory where the project files will be saved. The default is a directory in the current workspace. Target runtime: This is the server runtime used for the project. This will need to be set to the same JBoss EAP runtime used by the server that you will be deploying to. EJB module version: This is the version of the Jakarta Enterprise Beans specification that your enterprise beans will comply with. Red Hat recommends using 3.2 . Configuration: This allows you to adjust the supported features in your project. Use the default configuration for your selected runtime. Click to continue. The Java project configuration screen allows you to add directories containing Java source files and specify the directory for the output of the build. Leave this configuration unchanged and click . In the EJB Module settings screen, check Generate ejb-jar.xml deployment descriptor if a deployment descriptor is required. The deployment descriptor is optional in Jakarta Enterprise Beans 3.2 and can be added later if required. Click Finish and the project is created and will be displayed in the Project Explorer. Figure 2.2. Newly Created Jakarta Enterprise Beans Project in the Project Explorer To add the project to the server for deployment, right-click on the target server in the Servers tab and choose Add and Remove . In the Add and Remove dialog, select the resource to deploy from the Available column and click the Add button. The resource will be moved to the Configured column. Click Finish to close the dialog. Figure 2.3. Add and Remove Dialog You now have a Jakarta Enterprise Beans project in Red Hat CodeReady Studio that can build and deploy to the specified server. Warning If no enterprise beans are added to the project, then Red Hat CodeReady Studio will display the warning stating An EJB module must contain one or more enterprise beans. This warning will disappear once one or more enterprise beans have been added to the project. 2.2. Create a Jakarta Enterprise Beans Archive Project in Maven This task demonstrates how to create a project using Maven that contains one or more enterprise beans packaged in a JAR file. Prerequisites Maven is already installed. You understand the basic usage of Maven. Create a Jakarta Enterprise Beans Archive Project in Maven Create the Maven project: A Jakarta Enterprise Beans project can be created using Maven's archetype system and the ejb-javaee7 archetype. To do this, run the mvn command with parameters as shown: Maven will prompt you for the groupId , artifactId , version and package for your project. 
Add your enterprise beans: Write your enterprise beans and add them to the project under the src/main/java directory in the appropriate sub-directory for the bean's package. Build the project: To build the project, run the mvn package command in the same directory as the pom.xml file. This will compile the Java classes and package the JAR file. The built JAR file is named -.jar and is placed in the target/ directory. You now have a Maven project that builds and packages a JAR file. This project can contain enterprise beans and the JAR file can be deployed to an application server. 2.3. Create an EAR Project Containing a Jakarta Enterprise Beans Project This task describes how to create a new enterprise archive (EAR) project in Red Hat CodeReady Studio that contains a Jakarta Enterprise Beans project. Prerequisites A server and server runtime for JBoss EAP have been set up. Note If you set the Target runtime to 7.4 or a later runtime version in Red Hat CodeReady Studio, your project is compatible with the Jakarta EE 8 specification. Create an EAR Project Containing a Jakarta Enterprise Beans Project Open the New Java EE EAR Project Wizard. Navigate to the File menu, select New , then select Project . When the New Project wizard appears, select Java EE/Enterprise Application Project and click . Figure 2.4. New EAR Application Project Wizard Enter the following details: Project name: The name of the project that appears in Red Hat CodeReady Studio, and also the default file name for the deployed EAR file. Project location: The directory where the project files will be saved. The default is a directory in the current workspace. Target runtime: This is the server runtime used for the project. This will need to be set to the same JBoss EAP runtime used by the server that you will be deploying to. EAR version: This is the version of the Jakarta EE 8 specification that your project will comply with. Red Hat recommends using Jakarta EE 8. Configuration: This allows you to adjust the supported features in your project. Use the default configuration for your selected runtime. Click to continue. Add a new Jakarta Enterprise Beans module. New modules can be added from the Enterprise Application page of the wizard. To add a new Jakarta Enterprise Beans Project as a module, follow the steps below: Click New Module , uncheck the Create Default Modules checkbox, select the Enterprise Java Bean and click . The New EJB Project wizard appears. The New EJB Project wizard is the same as the wizard used to create new standalone Jakarta Enterprise Beans Projects and is described in Create Jakarta Enterprise Beans Archive Project Using Red Hat CodeReady Studio . The minimum details required to create the project are: Project name Target runtime Jakarta Enterprise Beans module version Configuration All the other steps of the wizard are optional. Click Finish to complete creating the Jakarta Enterprise Beans Project. The newly created Jakarta Enterprise Beans project is listed in the Java EE module dependencies and the checkbox is checked. Optionally, add an application.xml deployment descriptor. Check the Generate application.xml deployment descriptor checkbox if one is required. Click Finish . Two new projects will appear: the Jakarta Enterprise Beans project and the EAR project. Add the build artifact to the server for deployment. Open the Add and Remove dialog by right-clicking the server you want to deploy the built artifact to in the Servers tab and then select Add and Remove . 
Select the EAR resource to deploy from the Available column and click the Add button. The resource will be moved to the Configured column. Click Finish to close the dialog. Figure 2.5. Add and Remove Dialog You now have an Enterprise Application Project with a member Jakarta Enterprise Beans Project. This will build and deploy to the specified server as a single EAR deployment containing a Jakarta Enterprise Beans subdeployment. 2.4. Add a Deployment Descriptor to a Jakarta Enterprise Beans Project A Jakarta Enterprise Beans deployment descriptor can be added to a Jakarta Enterprise Beans project that was created without one. To do this, follow the procedure below. Prerequisites You have a Jakarta Enterprise Beans project in Red Hat CodeReady Studio to which you want to add a Jakarta Enterprise Beans deployment descriptor. Add a Deployment Descriptor to a Jakarta Enterprise Beans Project Open the project in Red Hat CodeReady Studio. Add a deployment descriptor. Right-click on the Deployment Descriptor folder in the project view and select Generate Deployment Descriptor tab. Figure 2.6. Adding a Deployment Descriptor The new file, ejb-jar.xml , is created in ejbModule/META-INF/ . Double-click on the Deployment Descriptor folder in the project view to open this file. 2.5. Runtime deployment information for beans You can add runtime deployment information to your beans for performance monitoring. For details about the available runtime data, see the ejb3 subsystem in the JBoss EAP management model. An application can include the runtime data as annotations in the bean code or in the deployment descriptor. An application can use both options. Additional resources For more information about available runtime data, see the ejb3 subsystem in the JBoss EAP management model . For more information about retrieving runtime data to evaluate performance, see the JBoss EAP Performance Tuning Guide .
[ "mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=ejb-javaee7", "mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=ejb-javaee7 [INFO] Scanning for projects [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building Maven Stub Project (No POM) 1 [INFO] ------------------------------------------------------------------------ [INFO] [INFO] >>> maven-archetype-plugin:2.0:generate (default-cli) @ standalone-pom >>> [INFO] [INFO] <<< maven-archetype-plugin:2.0:generate (default-cli) @ standalone-pom <<< [INFO] [INFO] --- maven-archetype-plugin:2.0:generate (default-cli) @ standalone-pom --- [INFO] Generating project in Interactive mode [INFO] Archetype [org.codehaus.mojo.archetypes:ejb-javaee7:1.5] found in catalog remote Define value for property 'groupId': : com.shinysparkly Define value for property 'artifactId': : payment-arrangements Define value for property 'version': 1.0-SNAPSHOT: : Define value for property 'package': com.shinysparkly: : Confirm properties configuration: groupId: com.company artifactId: payment-arrangements version: 1.0-SNAPSHOT package: com.company.collections Y: : [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 32.440s [INFO] Finished at: Mon Oct 31 10:11:12 EST 2011 [INFO] Final Memory: 7M/81M [INFO] ------------------------------------------------------------------------ [localhost]USD" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/developing_jakarta_enterprise_beans_applications/creating_enterprise_bean_projects
Chapter 1. Architecture overview
Chapter 1. Architecture overview OpenShift Container Platform is a cloud-based Kubernetes container platform. The foundation of OpenShift Container Platform is based on Kubernetes and therefore shares the same technology. To learn more about OpenShift Container Platform and Kubernetes, see product architecture . 1.1. Glossary of common terms for OpenShift Container Platform architecture This glossary defines common terms that are used in the architecture content. These terms help you understand OpenShift Container Platform architecture effectively. access policies A set of roles that dictate how users, applications, and entities within a cluster interact with one another. An access policy increases cluster security. admission plugins Admission plugins enforce security policies, resource limitations, or configuration requirements. authentication To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an OpenShift Container Platform cluster, you must authenticate to the OpenShift Container Platform API. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API. bootstrap A temporary machine that runs minimal Kubernetes and deploys the OpenShift Container Platform control plane. certificate signing requests (CSRs) A resource that requests a denoted signer to sign a certificate. This request might get approved or denied. Cluster Version Operator (CVO) An Operator that checks with the OpenShift Container Platform Update Service to see the valid updates and update paths based on current component versions and information in the graph. compute nodes Nodes that are responsible for executing workloads for cluster users. Compute nodes are also known as worker nodes. configuration drift A situation where the configuration on a node does not match what the machine config specifies. containers Lightweight and executable images that consist of software and all of its dependencies. Because containers virtualize the operating system, you can run containers anywhere, from a data center to a public or private cloud to your local host. container orchestration engine Software that automates the deployment, management, scaling, and networking of containers. container workloads Applications that are packaged and deployed in containers. control groups (cgroups) Partitions sets of processes into groups to manage and limit the resources processes consume. control plane A container orchestration layer that exposes the API and interfaces to define, deploy, and manage the life cycle of containers. Control planes are also known as control plane machines. CRI-O A Kubernetes native container runtime implementation that integrates with the operating system to deliver an efficient Kubernetes experience. deployment A Kubernetes resource object that maintains the life cycle of an application. Dockerfile A text file that contains the user commands to perform on a terminal to assemble the image. hosted control planes An OpenShift Container Platform feature that enables hosting a control plane on the OpenShift Container Platform cluster from its data plane and workers. This model performs the following actions: Optimize infrastructure costs required for the control planes. Improve the cluster creation time. Enable hosting the control plane using the Kubernetes native high level primitives. 
For example, deployments, stateful sets. Allow a strong network segmentation between the control plane and workloads. hybrid cloud deployments Deployments that deliver a consistent platform across bare metal, virtual, private, and public cloud environments. This offers speed, agility, and portability. Ignition A utility that RHCOS uses to manipulate disks during initial configuration. It completes common disk tasks, including partitioning disks, formatting partitions, writing files, and configuring users. installer-provisioned infrastructure The installation program deploys and configures the infrastructure that the cluster runs on. kubelet A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. kubernetes manifest Specifications of a Kubernetes API object in a JSON or YAML format. A configuration file can include deployments, config maps, secrets, daemon sets. Machine Config Daemon (MCD) A daemon that regularly checks the nodes for configuration drift. Machine Config Operator (MCO) An Operator that applies the new configuration to your cluster machines. machine config pools (MCP) A group of machines, such as control plane components or user workloads, that are based on the resources that they handle. metadata Additional information about cluster deployment artifacts. microservices An approach to writing software. Applications can be separated into the smallest components, independent from each other by using microservices. mirror registry A registry that holds the mirror of OpenShift Container Platform images. monolithic applications Applications that are self-contained, built, and packaged as a single piece. namespaces A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. networking Network information of OpenShift Container Platform cluster. node A worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. OpenShift Container Platform Update Service (OSUS) For clusters with internet access, Red Hat Enterprise Linux (RHEL) provides over-the-air updates by using an OpenShift Container Platform update service as a hosted service located behind public APIs. OpenShift CLI ( oc ) A command line tool to run OpenShift Container Platform commands on the terminal. OpenShift Dedicated A managed RHEL OpenShift Container Platform offering on Amazon Web Services (AWS) and Google Cloud Platform (GCP). OpenShift Dedicated focuses on building and scaling applications. OpenShift image registry A registry provided by OpenShift Container Platform to manage images. Operator The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers. OperatorHub A platform that contains various OpenShift Container Platform Operators to install. Operator Lifecycle Manager (OLM) OLM helps you to install, update, and manage the lifecycle of Kubernetes native applications. OLM is an open source toolkit designed to manage Operators in an effective, automated, and scalable way. over-the-air (OTA) updates The OpenShift Container Platform Update Service (OSUS) provides over-the-air updates to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). 
pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. private registry OpenShift Container Platform can use any server implementing the container image registry API as a source of the image which allows the developers to push and pull their private container images. public registry OpenShift Container Platform can use any server implementing the container image registry API as a source of the image which allows the developers to push and pull their public container images. RHEL OpenShift Container Platform Cluster Manager A managed service where you can install, modify, operate, and upgrade your OpenShift Container Platform clusters. RHEL Quay Container Registry A Quay.io container registry that serves most of the container images and Operators to OpenShift Container Platform clusters. replication controllers An asset that indicates how many pod replicas are required to run at a time. role-based access control (RBAC) A key security control to ensure that cluster users and workloads have only access to resources required to execute their roles. route Routes expose a service to allow for network access to pods from users and applications outside the OpenShift Container Platform instance. scaling The increasing or decreasing of resource capacity. service A service exposes a running application on a set of pods. Source-to-Image (S2I) image An image created based on the programming language of the application source code in OpenShift Container Platform to deploy applications. storage OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. Telemetry A component to collect information such as size, health, and status of OpenShift Container Platform. template A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Container Platform. user-provisioned infrastructure You can install OpenShift Container Platform on the infrastructure that you provide. You can use the installation program to generate the assets required to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. web console A user interface (UI) to manage OpenShift Container Platform. worker node Nodes that are responsible for executing workloads for cluster users. Worker nodes are also known as compute nodes. Additional resources For more information on networking, see OpenShift Container Platform networking . For more information on storage, see OpenShift Container Platform storage . For more information on authentication, see OpenShift Container Platform authentication . For more information on Operator Lifecycle Manager (OLM), see OLM . For more information on logging, see OpenShift Container Platform Logging . For more information on over-the-air (OTA) updates, see Updating OpenShift Container Platform clusters . 1.2. About installation and updates As a cluster administrator, you can use the OpenShift Container Platform installation program to install and deploy a cluster by using one of the following methods: Installer-provisioned infrastructure User-provisioned infrastructure 1.3. 
About the control plane The control plane manages the worker nodes and the pods in your cluster. You can configure nodes with the use of machine config pools (MCPs). MCPs are groups of machines, such as control plane components or user workloads, that are based on the resources that they handle. OpenShift Container Platform assigns different roles to hosts. These roles define the function of a machine in a cluster. The cluster contains definitions for the standard control plane and worker role types. You can use Operators to package, deploy, and manage services on the control plane. Operators are important components in OpenShift Container Platform because they provide the following services: Perform health checks Provide ways to watch applications Manage over-the-air updates Ensure applications stay in the specified state 1.4. About containerized applications for developers As a developer, you can use different tools, methods, and formats to develop your containerized application based on your unique requirements, for example: Use various build-tool, base-image, and registry options to build a simple container application. Use supporting components such as OperatorHub and templates to develop your application. Package and deploy your application as an Operator. You can also create a Kubernetes manifest and store it in a Git repository. Kubernetes works on basic units called pods. A pod is a single instance of a running process in your cluster. Pods can contain one or more containers. You can create a service by grouping a set of pods and their access policies. Services provide permanent internal IP addresses and host names for other applications to use as pods are created and destroyed. Kubernetes defines workloads based on the type of your application. 1.5. About Red Hat Enterprise Linux CoreOS (RHCOS) and Ignition As a cluster administrator, you can perform the following Red Hat Enterprise Linux CoreOS (RHCOS) tasks: Learn about the generation of single-purpose container operating system technology . Choose how to configure Red Hat Enterprise Linux CoreOS (RHCOS) Choose how to deploy Red Hat Enterprise Linux CoreOS (RHCOS): Installer-provisioned deployment User-provisioned deployment The OpenShift Container Platform installation program creates the Ignition configuration files that you need to deploy your cluster. Red Hat Enterprise Linux CoreOS (RHCOS) uses Ignition during the initial configuration to perform common disk tasks, such as partitioning, formatting, writing files, and configuring users. During the first boot, Ignition reads its configuration from the installation media or the location that you specify and applies the configuration to the machines. You can learn how Ignition works , the process for a Red Hat Enterprise Linux CoreOS (RHCOS) machine in an OpenShift Container Platform cluster, view Ignition configuration files, and change Ignition configuration after an installation. 1.6. About admission plugins You can use admission plugins to regulate how OpenShift Container Platform functions. After a resource request is authenticated and authorized, admission plugins intercept the resource request to the master API to validate resource requests and to ensure that scaling policies are adhered to. Admission plugins are used to enforce security policies, resource limitations, or configuration requirements.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/architecture/architecture-overview
Red Hat Developer Hub support
Red Hat Developer Hub support If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal . You can use the Red Hat Customer Portal for the following purposes: To search or browse through the Red Hat Knowledgebase of technical support articles about Red Hat products. To create a support case for Red Hat Global Support Services (GSS). For support case creation, select Red Hat Developer Hub as the product and select the appropriate product version.
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/administration_guide_for_red_hat_developer_hub/snip-customer-support-info_admin-rhdh
Chapter 16. Replacing storage devices
Chapter 16. Replacing storage devices 16.1. Replacing operational or failed storage devices on Red Hat OpenStack Platform installer-provisioned infrastructure Use this procedure to replace a storage device in OpenShift Data Foundation that is deployed on Red Hat OpenStack Platform. This procedure helps to create a new Persistent Volume Claim (PVC) on a new volume and remove the old object storage device (OSD). Procedure Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-6d77d6c7c6-m8xj6 needs to be replaced and compute-2 is the OpenShift Container Platform node on which the OSD is scheduled. Note If the OSD to be replaced is healthy, the status of the pod will be Running . Scale down the OSD deployment for the OSD to be replaced. where osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Note If the rook-ceph-osd pod is in terminating state, use the force option to delete the pod. Example output: In case the persistent volume associated with the failed OSD fails, get the details of the failed persistent volumes and delete them using the following commands: Remove the old OSD from the cluster so that a new OSD can be added. Delete any old ocs-osd-removal jobs. Example output: Change to the openshift-storage project. Remove the old OSD from the cluster. You can add comma separated OSD IDs in the command to remove more than one OSD. (For example, FAILED_OSD_IDS=0,1,2). The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in the OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSD devices that are removed from the respective OpenShift Data Foundation nodes. Get PVC name(s) of the replaced OSD(s) from the logs of the ocs-osd-removal-job pod : For example: For each of the nodes identified in step #1, do the following: Create a debug pod and chroot to the host on the storage node. Find the relevant device name based on the PVC names identified in the previous step. Remove the mapped device. Note If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the process that was stuck. Terminate the process using the kill command. Verify that the device name is removed. Delete the ocs-osd-removal job. Example output: Verification steps Verify that there is a new OSD running. Example output: Verify that there is a new PVC created which is in Bound state. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. 
<OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in step, do the following: Create a debug pod and open a chroot environment for the selected host(s). Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s) Log in to OpenShift Web Console and view the storage dashboard. Figure 16.1. OSD status in OpenShift Container Platform storage dashboard after device replacement
[ "oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide", "rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none> rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 24h 10.128.2.24 compute-0 <none> <none> rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 24h 10.130.0.18 compute-1 <none> <none>", "osd_id_to_remove=0 oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0", "deployment.extensions/rook-ceph-osd-0 scaled", "oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}", "No resources found.", "oc delete pod rook-ceph-osd-0-6d77d6c7c6-m8xj6 --force --grace-period=0", "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-6d77d6c7c6-m8xj6\" force deleted", "oc get pv oc delete pv <failed-pv-name>", "oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}", "job.batch \"ocs-osd-removal-0\" deleted", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} -p FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'", "2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"", "oc debug node/<node name> chroot /host", "sh-4.4# dmsetup ls| grep <pvc name> ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)", "cryptsetup luksClose --debug --verbose ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt", "ps -ef | grep crypt", "kill -9 <PID>", "dmsetup ls", "oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}", "job.batch \"ocs-osd-removal-0\" deleted", "oc get -n openshift-storage pods -l app=rook-ceph-osd", "rook-ceph-osd-0-5f7f4747d4-snshw 1/1 Running 0 4m47s rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 1d20h rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 1d20h", "oc get -n openshift-storage pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-b44ebb5e-3c67-4000-998e-304752deb5a7 50Gi RWO ocs-storagecluster-ceph-rbd 6d ocs-deviceset-0-data-0-gwb5l Bound pvc-bea680cd-7278-463d-a4f6-3eb5d3d0defe 512Gi RWO standard 94s ocs-deviceset-1-data-0-w9pjm Bound pvc-01aded83-6ef1-42d1-a32e-6ca0964b96d4 512Gi RWO standard 6d ocs-deviceset-2-data-0-7bxcq Bound pvc-5d07cd6c-23cb-468c-89c1-72d07040e308 512Gi RWO standard 6d", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/_<OSD-pod-name>_", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/<node name> chroot /host", "lsblk" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/replacing_storage_devices
21.2.3. Using TCP
21.2.3. Using TCP The default transport protocol for NFSv4 is TCP; however, the Red Hat Enterprise Linux 4 kernel includes support for NFS over UDP. To use NFS over UDP, include the -o udp option with the mount command when mounting the NFS-exported file system on the client system. There are three ways to configure an NFS file system export. On demand via the command line (client side), automatically via the /etc/fstab file (client side), and automatically via autofs configuration files, such as /etc/auto.master and /etc/auto.misc (server side with NIS). For example, on demand via the command line (client side): When the NFS mount is specified in /etc/fstab (client side): When the NFS mount is specified in an autofs configuration file for a NIS server, available for NIS enabled workstations: Since the default is TCP, if the -o udp option is not specified, the NFS-exported file system is accessed via TCP. The advantages of using TCP include the following: Improved connection durability, thus fewer NFS stale file handles messages. Performance gain on heavily loaded networks because TCP acknowledges every packet, unlike UDP, which only acknowledges completion. TCP has better congestion control than UDP (which has none). On a very congested network, UDP packets are the first packets that are dropped. This means that if NFS is writing data (in 8K chunks), all of that 8K must be retransmitted over UDP. Because of TCP's reliability, only parts of that 8K data are transmitted at a time. Error detection. When a TCP connection breaks (due to the server being unavailable), the client stops sending data and restarts the connection process once the server becomes available. With UDP, since it is connectionless, the client continues to pound the network with data until the server reestablishes a connection. The main disadvantage is that there is a very small performance hit due to the overhead associated with the TCP protocol.
[ "mount -o udp shadowman.example.com:/misc/export /misc/local", "server:/usr/local/pub /pub nfs rsize=8192,wsize=8192,timeo=14,intr,udp", "myproject -rw,soft,intr,rsize=8192,wsize=8192,udp penguin.example.net:/proj52" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/mounting_nfs_file_systems-using_tcp
Chapter 22. Managing IdM certificates using Ansible
Chapter 22. Managing IdM certificates using Ansible You can use the ansible-freeipa ipacert module to request, revoke, and retrieve SSL certificates for Identity Management (IdM) users, hosts and services. You can also restore a certificate that has been put on hold. 22.1. Using Ansible to request SSL certificates for IdM hosts, services and users You can use the ansible-freeipa ipacert module to request SSL certificates for Identity Management (IdM) users, hosts and services. They can then use these certificates to authenticate to IdM. Complete this procedure to request a certificate for an HTTP server from an IdM certificate authority (CA) using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. Your IdM deployment has an integrated CA. Procedure Generate a certificate-signing request (CSR) for your user, host or service. For example, to use the openssl utility to generate a CSR for the HTTP service running on client.idm.example.com, enter: As a result, the CSR is stored in new.csr . Create your Ansible playbook file request-certificate.yml with the following content: Replace the certificate request with the CSR from new.csr . Request the certificate: Additional resources The cert module in ansible-freeipa upstream docs 22.2. Using Ansible to revoke SSL certificates for IdM hosts, services and users You can use the ansible-freeipa ipacert module to revoke SSL certificates used by Identity Management (IdM) users, hosts and services to authenticate to IdM. Complete this procedure to revoke a certificate for an HTTP server using an Ansible playbook. The reason for revoking the certificate is "keyCompromise". Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. You have obtained the serial number of the certificate, for example by entering the openssl x509 -noout -text -in <path_to_certificate> command. In this example, the serial number of the certificate is 123456789. Your IdM deployment has an integrated CA. Procedure Create your Ansible playbook file revoke-certificate.yml with the following content: Revoke the certificate: Additional resources The cert module in ansible-freeipa upstream docs Reason Code in RFC 5280 22.3. Using Ansible to restore SSL certificates for IdM users, hosts, and services You can use the ansible-freeipa ipacert module to restore a revoked SSL certificate previously used by an Identity Management (IdM) user, host or a service to authenticate to IdM. Note You can only restore a certificate that was put on hold. You may have put it on hold because, for example, you were not sure if the private key had been lost. However, now you have recovered the key and as you are certain that no-one has accessed it in the meantime, you want to reinstate the certificate. Complete this procedure to use an Ansible playbook to release a certificate for a service enrolled into IdM from hold. This example describes how to release a certificate for an HTTP service from hold. 
Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. Your IdM deployment has an integrated CA. You have obtained the serial number of the certificate, for example by entering the openssl x509 -noout -text -in path/to/certificate command. In this example, the certificate serial number is 123456789 . Procedure Create your Ansible playbook file restore-certificate.yml with the following content: Run the playbook: Additional resources The cert module in ansible-freeipa upstream docs 22.4. Using Ansible to retrieve SSL certificates for IdM users, hosts, and services You can use the ansible-freeipa ipacert module to retrieve an SSL certificate issued for an Identity Management (IdM) user, host or a service, and store it in a file on the managed node. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You have stored your ipaadmin_password in the secret.yml Ansible vault. You have obtained the serial number of the certificate, for example by entering the openssl x509 -noout -text -in <path_to_certificate> command. In this example, the serial number of the certificate is 123456789, and the file in which you store the retrieved certificate is cert.pem . Procedure Create your Ansible playbook file retrieve-certificate.yml with the following content: Retrieve the certificate: Additional resources The cert module in ansible-freeipa upstream docs
[ "openssl req -new -newkey rsa:2048 -days 365 -nodes -keyout new.key -out new.csr -subj '/CN=client.idm.example.com,O=IDM.EXAMPLE.COM'", "--- - name: Playbook to request a certificate hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Request a certificate for a web server ipacert: ipaadmin_password: \"{{ ipaadmin_password }}\" state: requested csr: | -----BEGIN CERTIFICATE REQUEST----- MIGYMEwCAQAwGTEXMBUGA1UEAwwOZnJlZWlwYSBydWxlcyEwKjAFBgMrZXADIQBs HlqIr4b/XNK+K8QLJKIzfvuNK0buBhLz3LAzY7QDEqAAMAUGAytlcANBAF4oSCbA 5aIPukCidnZJdr491G4LBE+URecYXsPknwYb+V+ONnf5ycZHyaFv+jkUBFGFeDgU SYaXm/gF8cDYjQI= -----END CERTIFICATE REQUEST----- principal: HTTP/client.idm.example.com register: cert", "ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/request-certificate.yml", "--- - name: Playbook to revoke a certificate hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Revoke a certificate for a web server ipacert: ipaadmin_password: \"{{ ipaadmin_password }}\" serial_number: 123456789 revocation_reason: \"keyCompromise\" state: revoked", "ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/revoke-certificate.yml", "--- - name: Playbook to restore a certificate hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Restore a certificate for a web service ipacert: ipaadmin_password: \"{{ ipaadmin_password }}\" serial_number: 123456789 state: released", "ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/restore-certificate.yml", "--- - name: Playbook to retrieve a certificate and store it locally on the managed node hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Retrieve a certificate and save it to file 'cert.pem' ipacert: ipaadmin_password: \"{{ ipaadmin_password }}\" serial_number: 123456789 certificate_out: cert.pem state: retrieved", "ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/retrieve-certificate.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/managing-idm-certificates-using-ansible_using-ansible-to-install-and-manage-idm
Chapter 15. Network [config.openshift.io/v1]
Chapter 15. Network [config.openshift.io/v1] Description Network holds cluster-wide information about Network. The canonical name is cluster . It is used to configure the desired network configuration, such as: IP address pools for services/pod IPs, network plugin, etc. Please view network.spec for an explanation on what applies when configuring this resource. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 15.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration. As a general rule, this SHOULD NOT be read directly. Instead, you should consume the NetworkStatus, as it indicates the currently deployed configuration. Currently, most spec fields are immutable after installation. Please view the individual ones for further details on each. status object status holds observed values from the cluster. They may not be overridden. 15.1.1. .spec Description spec holds user settable values for configuration. As a general rule, this SHOULD NOT be read directly. Instead, you should consume the NetworkStatus, as it indicates the currently deployed configuration. Currently, most spec fields are immutable after installation. Please view the individual ones for further details on each. Type object Property Type Description clusterNetwork array IP address pool to use for pod IPs. This field is immutable after installation. clusterNetwork[] object ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. externalIP object externalIP defines configuration for controllers that affect Service.ExternalIP. If nil, then ExternalIP is not allowed to be set. networkType string NetworkType is the plugin that is to be deployed (e.g. OpenShiftSDN). This should match a value that the cluster-network-operator understands, or else no networking will be installed. Currently supported values are: - OpenShiftSDN This field is immutable after installation. serviceNetwork array (string) IP address pool for services. Currently, we only support a single entry here. This field is immutable after installation. serviceNodePortRange string The port range allowed for Services of type NodePort. If not specified, the default of 30000-32767 will be used. Such Services without a NodePort specified will have one automatically allocated from this range. This parameter can be updated after the cluster is installed. 15.1.2. .spec.clusterNetwork Description IP address pool to use for pod IPs. This field is immutable after installation. Type array 15.1.3. .spec.clusterNetwork[] Description ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. 
Type object Property Type Description cidr string The complete block for pod IPs. hostPrefix integer The size (prefix) of block to allocate to each node. If this field is not used by the plugin, it can be left unset. 15.1.4. .spec.externalIP Description externalIP defines configuration for controllers that affect Service.ExternalIP. If nil, then ExternalIP is not allowed to be set. Type object Property Type Description autoAssignCIDRs array (string) autoAssignCIDRs is a list of CIDRs from which to automatically assign Service.ExternalIP. These are assigned when the service is of type LoadBalancer. In general, this is only useful for bare-metal clusters. In Openshift 3.x, this was misleadingly called "IngressIPs". Automatically assigned External IPs are not affected by any ExternalIPPolicy rules. Currently, only one entry may be provided. policy object policy is a set of restrictions applied to the ExternalIP field. If nil or empty, then ExternalIP is not allowed to be set. 15.1.5. .spec.externalIP.policy Description policy is a set of restrictions applied to the ExternalIP field. If nil or empty, then ExternalIP is not allowed to be set. Type object Property Type Description allowedCIDRs array (string) allowedCIDRs is the list of allowed CIDRs. rejectedCIDRs array (string) rejectedCIDRs is the list of disallowed CIDRs. These take precedence over allowedCIDRs. 15.1.6. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description clusterNetwork array IP address pool to use for pod IPs. clusterNetwork[] object ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. clusterNetworkMTU integer ClusterNetworkMTU is the MTU for inter-pod networking. migration object Migration contains the cluster network migration configuration. networkType string NetworkType is the plugin that is deployed (e.g. OpenShiftSDN). serviceNetwork array (string) IP address pool for services. Currently, we only support a single entry here. 15.1.7. .status.clusterNetwork Description IP address pool to use for pod IPs. Type array 15.1.8. .status.clusterNetwork[] Description ClusterNetworkEntry is a contiguous block of IP addresses from which pod IPs are allocated. Type object Property Type Description cidr string The complete block for pod IPs. hostPrefix integer The size (prefix) of block to allocate to each node. If this field is not used by the plugin, it can be left unset. 15.1.9. .status.migration Description Migration contains the cluster network migration configuration. Type object Property Type Description mtu object MTU contains the MTU migration configuration. networkType string NetworkType is the target plugin that is to be deployed. Currently supported values are: OpenShiftSDN, OVNKubernetes 15.1.10. .status.migration.mtu Description MTU contains the MTU migration configuration. Type object Property Type Description machine object Machine contains MTU migration configuration for the machine's uplink. network object Network contains MTU migration configuration for the default network. 15.1.11. .status.migration.mtu.machine Description Machine contains MTU migration configuration for the machine's uplink. Type object Property Type Description from integer From is the MTU to migrate from. to integer To is the MTU to migrate to. 15.1.12. .status.migration.mtu.network Description Network contains MTU migration configuration for the default network. 
Type object Property Type Description from integer From is the MTU to migrate from. to integer To is the MTU to migrate to. 15.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/networks DELETE : delete collection of Network GET : list objects of kind Network POST : create a Network /apis/config.openshift.io/v1/networks/{name} DELETE : delete a Network GET : read the specified Network PATCH : partially update the specified Network PUT : replace the specified Network 15.2.1. /apis/config.openshift.io/v1/networks Table 15.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Network Table 15.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 15.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Network Table 15.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 15.5. HTTP responses HTTP code Reponse body 200 - OK NetworkList schema 401 - Unauthorized Empty HTTP method POST Description create a Network Table 15.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.7. Body parameters Parameter Type Description body Network schema Table 15.8. HTTP responses HTTP code Reponse body 200 - OK Network schema 201 - Created Network schema 202 - Accepted Network schema 401 - Unauthorized Empty 15.2.2. /apis/config.openshift.io/v1/networks/{name} Table 15.9. Global path parameters Parameter Type Description name string name of the Network Table 15.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Network Table 15.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 15.12. Body parameters Parameter Type Description body DeleteOptions schema Table 15.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Network Table 15.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 15.15. HTTP responses HTTP code Reponse body 200 - OK Network schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Network Table 15.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.17. Body parameters Parameter Type Description body Patch schema Table 15.18. HTTP responses HTTP code Reponse body 200 - OK Network schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Network Table 15.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.20. Body parameters Parameter Type Description body Network schema Table 15.21. HTTP responses HTTP code Response body 200 - OK Network schema 201 - Created Network schema 401 - Unauthorized Empty
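To complement the field-by-field reference above, the following is an illustrative sketch of a cluster Network object rather than output from a real cluster; the CIDR values, host prefix, and node port range are assumptions chosen only to show the shape of the resource:
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
  serviceNodePortRange: "30000-32767"
Because serviceNodePortRange is described above as updatable after installation, changing it is typically done with a merge patch, for example oc patch network.config.openshift.io cluster --type merge -p '{"spec":{"serviceNodePortRange":"30000-32767"}}' ; the range given here is only the documented default.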
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/config_apis/network-config-openshift-io-v1
Chapter 1. Overview
Chapter 1. Overview AMQ OpenWire JMS is a Java Message Service (JMS) 1.1 client for use in messaging applications that send and receive OpenWire messages. Important The AMQ OpenWire JMS client is now deprecated in AMQ 7. It is recommended that users of this client migrate to AMQ JMS or AMQ Core Protocol JMS. AMQ OpenWire JMS is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.10 Release Notes . AMQ OpenWire JMS is based on the JMS implementation from Apache ActiveMQ . For more information about the JMS API, see the JMS API reference and the JMS tutorial . 1.1. Key features JMS 1.1 compatible SSL/TLS for secure communication Automatic reconnect and failover Distributed transactions (XA) Pure-Java implementation 1.2. Supported standards and protocols AMQ OpenWire JMS supports the following industry-recognized standards and network protocols: Version 1.1 of the Java Message Service API. Modern TCP with IPv6 1.3. Supported configurations Refer to Red Hat AMQ 7 Supported Configurations on the Red Hat Customer Portal for current information regarding AMQ OpenWire JMS supported configurations. 1.4. Terms and concepts This section introduces the core API entities and describes how they operate together. Table 1.1. API terms Entity Description ConnectionFactory An entry point for creating connections. Connection A channel for communication between two peers on a network. It contains sessions. Session A context for producing and consuming messages. It contains message producers and consumers. MessageProducer A channel for sending messages to a destination. It has a target destination. MessageConsumer A channel for receiving messages from a destination. It has a source destination. Destination A named location for messages, either a queue or a topic. Queue A stored sequence of messages. Topic A stored sequence of messages for multicast distribution. Message An application-specific piece of information. AMQ OpenWire JMS sends and receives messages . Messages are transferred between connected peers using message producers and consumers . Producers and consumers are established over sessions . Sessions are established over connections . Connections are created by connection factories . A sending peer creates a producer to send messages. The producer has a destination that identifies a target queue or topic at the remote peer. A receiving peer creates a consumer to receive messages. Like the producer, the consumer has a destination that identifies a source queue or topic at the remote peer. A destination is either a queue or a topic . In JMS, queues and topics are client-side representations of named broker entities that hold messages. A queue implements point-to-point semantics. Each message is seen by only one consumer, and the message is removed from the queue after it is read. A topic implements publish-subscribe semantics. Each message is seen by multiple consumers, and the message remains available to other consumers after it is read. See the JMS tutorial for more information. 1.5. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . 
File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in angle brackets and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: $ cd <project-dir>
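To make the relationships in Table 1.1 concrete, the following minimal Java sketch walks the chain from connection factory to message producer using the Apache ActiveMQ JMS classes that this client is based on; the broker URL, queue name, and message text are placeholders, and error handling is omitted for brevity:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class HelloProducer {
    public static void main(String[] args) throws JMSException {
        // ConnectionFactory -> Connection -> Session -> MessageProducer
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination queue = session.createQueue("example.queue"); // point-to-point destination
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("Hello from AMQ OpenWire JMS"));
        connection.close(); // closing the connection also closes its sessions and producers
    }
}
A receiving application mirrors this structure, creating a MessageConsumer for the same destination and calling receive() on it.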
[ "cd <project-dir>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_openwire_jms_client/overview
Appendix A. Understanding the node_prep_inventory.yml file
Appendix A. Understanding the node_prep_inventory.yml file The node_prep_inventory.yml file is an example Ansible inventory file that you can use to prepare a replacement host for your Red Hat Hyperconverged Infrastructure for Virtualization cluster. You can find this file at /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/node_prep_inventory.yml on any hyperconverged host. A.1. Configuration parameters for preparing a replacement node A.1.1. Hosts to configure hc_nodes A list of hyperconverged hosts that uses the back-end FQDN of the host, and the configuration details of those hosts. Configuration that is specific to a host is defined under that host's back-end FQDN. Configuration that is common to all hosts is defined in the vars: section. A.1.2. Multipath devices blacklist_mpath_devices (optional) By default, Red Hat Virtualization Host enables multipath configuration, which provides unique multipath names and worldwide identifiers for all disks, even when disks do not have underlying multipath configuration. Include this section if you do not have multipath configuration so that the multipath device names are not used for listed devices. Disks that are not listed here are assumed to have multipath configuration available, and require the path format /dev/mapper/<WWID> instead of /dev/sdx when defined in subsequent sections of the inventory file. On a server with four devices ( sda , sdb , sdc and sdd ), the following configuration blacklists only two devices. The path format /dev/mapper/<WWID> is expected for devices not in this list. Important Do not list encrypted devices ( luks_* devices) in blacklist_mpath_devices , as they require multipath configuration to work. A.1.3. Deduplication and compression gluster_infra_vdo (optional) Include this section to define a list of devices to use deduplication and compression. These devices require the /dev/mapper/<name> path format when you define them as volume groups in gluster_infra_volume_groups . Each device listed must have the following information: name A short name for the VDO device, for example vdo_sdc . device The device to use, for example, /dev/sdc . logicalsize The logical size of the VDO volume. Set this to ten times the size of the physical disk, for example, if you have a 500 GB disk, set logicalsize: '5000G' . emulate512 If you use devices with a 4 KB block size, set this to on . slabsize If the logical size of the volume is 1000 GB or larger, set this to 32G . If the logical size is smaller than 1000 GB, set this to 2G . blockmapcachesize Set this to 128M . writepolicy Set this to auto . For example: A.1.4. Storage infrastructure gluster_infra_volume_groups (required) This section creates the volume groups that contain the logical volumes. gluster_infra_mount_devices (required) This section creates the logical volumes that form Gluster bricks. gluster_infra_thinpools (optional) This section defines logical thin pools for use by thinly provisioned volumes. Thin pools are not suitable for the engine volume, but can be used for the vmstore and data volume bricks. vgname The name of the volume group that contains this thin pool. thinpoolname A name for the thin pool, for example, gluster_thinpool_sdc . thinpoolsize The sum of the sizes of all logical volumes to be created in this volume group. poolmetadatasize Set to 16G ; this is the recommended size for supported deployments. gluster_infra_cache_vars (optional) This section defines cache logical volumes to improve performance for slow devices. 
A fast cache device is attached to a thin pool, and requires gluster_infra_thinpool to be defined. vgname The name of a volume group with a slow device that requires a fast external cache. cachedisk The paths of the slow and fast devices, separated with a comma, for example, to use a cache device sde with the slow device sdb , specify /dev/sdb,/dev/sde . cachelvname A name for this cache logical volume. cachethinpoolname The thin pool to which the fast cache volume is attached. cachelvsize The size of the cache logical volume. Around 0.01% of this size is used for cache metadata. cachemode The cache mode. Valid values are writethrough and writeback . gluster_infra_thick_lvs (required) The thickly provisioned logical volumes that are used to create bricks. Bricks for the engine volume must be thickly provisioned. vgname The name of the volume group that contains the logical volume. lvname The name of the logical volume. size The size of the logical volume. The engine logical volume requires 100G . gluster_infra_lv_logicalvols (required) The thinly provisioned logical volumes that are used to create bricks. vgname The name of the volume group that contains the logical volume. thinpool The thin pool that contains the logical volume, if this volume is thinly provisioned. lvname The name of the logical volume. size The size of the logical volume. The engine logical volume requires 100G . gluster_infra_disktype (required) Specifies the underlying hardware configuration of the disks. Set this to the value that matches your hardware: RAID6 , RAID5 , or JBOD . gluster_infra_diskcount (required) Specifies the number of data disks in the RAID set. For a JBOD disk type, set this to 1 . gluster_infra_stripe_unit_size (required) The stripe size of the RAID set in megabytes. gluster_features_force_varlogsizecheck (required) Set this to true if you want to verify that your /var/log partition has sufficient free space during the deployment process. It is important to have sufficient space for logs, but it is not required to verify space requirements at deployment time if you plan to monitor space requirements carefully. gluster_set_selinux_labels (required) Ensures that volumes can be accessed when SELinux is enabled. Set this to true if SELinux is enabled on this host. A.1.5. Firewall and network infrastructure gluster_infra_fw_ports (required) A list of ports to open between all nodes, in the format <port>/<protocol> . gluster_infra_fw_permanent (required) Ensures the ports listed in gluster_infra_fw_ports are open after nodes are rebooted. Set this to true for production use cases. gluster_infra_fw_state (required) Enables the firewall. Set this to enabled for production use cases. gluster_infra_fw_zone (required) Specifies the firewall zone to which these gluster_infra_fw_\* parameters are applied. gluster_infra_fw_services (required) A list of services to allow through the firewall. Ensure glusterfs is defined here. A.2. Example node_prep_inventory.yml
[ "hc_nodes: hosts: new-host-backend-fqdn.example.com: [configuration specific to this host] vars: [configuration common to all hosts]", "hc_nodes: hosts: new-host-backend-fqdn.example.com: blacklist_mpath_devices: - sdb - sdc", "hc_nodes: hosts: new-host-backend-fqdn.example.com: gluster_infra_vdo: - { name: 'vdo_sdc', device: '/dev/sdc', logicalsize: '5000G', emulate512: 'off', slabsize: '32G', blockmapcachesize: '128M', writepolicy: 'auto' } - { name: 'vdo_sdd', device: '/dev/sdd', logicalsize: '500G', emulate512: 'off', slabsize: '2G', blockmapcachesize: '128M', writepolicy: 'auto' }", "hc_nodes: hosts: new-host-backend-fqdn.example.com: gluster_infra_volume_groups: - vgname: gluster_vg_sdb pvname: /dev/sdb - vgname: gluster_vg_sdc pvname: /dev/mapper/vdo_sdc", "hc_nodes: hosts: new-host-backend-fqdn.example.com: gluster_infra_mount_devices: - path: /gluster_bricks/engine lvname: gluster_lv_engine vgname: gluster_vg_sdb - path: /gluster_bricks/data lvname: gluster_lv_data vgname: gluster_vg_sdc - path: /gluster_bricks/vmstore lvname: gluster_lv_vmstore vgname: gluster_vg_sdd", "hc_nodes: hosts: new-host-backend-fqdn.example.com: gluster_infra_thinpools: - {vgname: 'gluster_vg_sdc', thinpoolname: 'gluster_thinpool_sdc', thinpoolsize: '500G', poolmetadatasize: '16G'} - {vgname: 'gluster_vg_sdd', thinpoolname: 'gluster_thinpool_sdd', thinpoolsize: '500G', poolmetadatasize: '16G'}", "hc_nodes: hosts: new-host-backend-fqdn.example.com: gluster_infra_cache_vars: - vgname: gluster_vg_sdb cachedisk: /dev/sdb,/dev/sde cachelvname: cachelv_thinpool_sdb cachethinpoolname: gluster_thinpool_sdb cachelvsize: '250G' cachemode: writethrough", "hc_nodes: hosts: new-host-backend-fqdn.example.com: gluster_infra_thick_lvs: - vgname: gluster_vg_sdb lvname: gluster_lv_engine size: 100G", "hc_nodes: hosts: new-host-backend-fqdn.example.com: gluster_infra_lv_logicalvols: - vgname: gluster_vg_sdc thinpool: gluster_thinpool_sdc lvname: gluster_lv_data lvsize: 200G - vgname: gluster_vg_sdd thinpool: gluster_thinpool_sdd lvname: gluster_lv_vmstore lvsize: 200G", "hc_nodes: vars: gluster_infra_disktype: RAID6", "hc_nodes: vars: gluster_infra_diskcount: 10", "hc_nodes: vars: gluster_infra_stripe_unit_size: 256", "hc_nodes: vars: gluster_features_force_varlogsizecheck: false", "hc_nodes: vars: gluster_set_selinux_labels: true", "hc_nodes: vars: gluster_infra_fw_ports: - 2049/tcp - 54321/tcp - 5900-6923/tcp - 16514/tcp - 5666/tcp - 16514/tcp", "hc_nodes: vars: gluster_infra_fw_permanent: true", "hc_nodes: vars: gluster_infra_fw_state: enabled", "hc_nodes: vars: gluster_infra_fw_zone: public", "hc_nodes: vars: gluster_infra_fw_services: - glusterfs", "Section for Host Preparation Phase hc_nodes: hosts: # Host - The node which need to be prepared for replacement new-host-backend-fqdn.example.com : # Blacklist multipath devices which are used for gluster bricks # If you omit blacklist_mpath_devices it means all device will be whitelisted. # If the disks are not blacklisted, and then its taken that multipath configuration # exists in the server and one should provide /dev/mapper/<WWID> instead of /dev/sdx blacklist_mpath_devices: - sdb - sdc # Enable this section gluster_infra_vdo , if dedupe & compression is # required on that storage volume. 
# The variables refers to: # name - VDO volume name to be used # device - Disk name on which VDO volume to created # logicalsize - Logical size of the VDO volume.This value is 10 times # the size of the physical disk # emulate512 - VDO device is made as 4KB block sized storage volume(4KN) # slabsize - VDO slab size. If VDO logical size >= 1000G then # slabsize is 32G else slabsize is 2G # # Following VDO values are as per recommendation and treated as constants: # blockmapcachesize - 128M # writepolicy - auto # # gluster_infra_vdo: # - { name: vdo_sdc , device: /dev/sdc , logicalsize: 5000G , emulate512: off , slabsize: 32G , # blockmapcachesize: 128M , writepolicy: auto } # - { name: vdo_sdd , device: /dev/sdd , logicalsize: 3000G , emulate512: off , slabsize: 32G , # blockmapcachesize: 128M , writepolicy: auto } # When dedupe and compression is enabled on the device, # use pvname for that device as /dev/mapper/<vdo_device_name> # # The variables refers to: # vgname - VG to be created on the disk # pvname - Physical disk (/dev/sdc) or VDO volume (/dev/mapper/vdo_sdc) gluster_infra_volume_groups: - vgname: gluster_vg_sdb pvname: /dev/sdb - vgname: gluster_vg_sdc pvname: /dev/mapper/vdo_sdc - vgname: gluster_vg_sdd pvname: /dev/mapper/vdo_sdd gluster_infra_mount_devices: - path: /gluster_bricks/engine lvname: gluster_lv_engine vgname: gluster_vg_sdb - path: /gluster_bricks/data lvname: gluster_lv_data vgname: gluster_vg_sdc - path: /gluster_bricks/vmstore lvname: gluster_lv_vmstore vgname: gluster_vg_sdd # 'thinpoolsize is the sum of sizes of all LVs to be created on that VG # In the case of VDO enabled, thinpoolsize is 10 times the sum of sizes # of all LVs to be created on that VG. Recommended values for # poolmetadatasize is 16GB and that should be considered exclusive of # thinpoolsize gluster_infra_thinpools: - {vgname: gluster_vg_sdc , thinpoolname: gluster_thinpool_sdc , thinpoolsize: 500G , poolmetadatasize: 16G } - {vgname: gluster_vg_sdd , thinpoolname: gluster_thinpool_sdd , thinpoolsize: 500G , poolmetadatasize: 16G } # Enable the following section if LVM cache is to enabled # Following are the variables: # vgname - VG with the slow HDD device that needs caching # cachedisk - Comma separated value of slow HDD and fast SSD # In this example, /dev/sdb is the slow HDD, /dev/sde is fast SSD # cachelvname - LV cache name # cachethinpoolname - Thinpool to which the fast SSD to be attached # cachelvsize - Size of cache data LV. 
This is the SSD_size - (1/1000) of SSD_size # 1/1000th of SSD space will be used by cache LV meta # cachemode - writethrough or writeback # gluster_infra_cache_vars: # - vgname: gluster_vg_sdb # cachedisk: /dev/sdb,/dev/sde # cachelvname: cachelv_thinpool_sdb # cachethinpoolname: gluster_thinpool_sdb # cachelvsize: 250G # cachemode: writethrough # Only the engine brick needs to be thickly provisioned # Engine brick requires 100GB of disk space gluster_infra_thick_lvs: - vgname: gluster_vg_sdb lvname: gluster_lv_engine size: 100G gluster_infra_lv_logicalvols: - vgname: gluster_vg_sdc thinpool: gluster_thinpool_sdc lvname: gluster_lv_data lvsize: 200G - vgname: gluster_vg_sdd thinpool: gluster_thinpool_sdd lvname: gluster_lv_vmstore lvsize: 200G # Common configurations vars: # In case of IPv6 based deployment \"gluster_features_enable_ipv6\" needs to be enabled,below line needs to be uncommented, like: # gluster_features_enable_ipv6: true # Firewall setup gluster_infra_fw_ports: - 2049/tcp - 54321/tcp - 5900-6923/tcp - 16514/tcp - 5666/tcp - 16514/tcp gluster_infra_fw_permanent: true gluster_infra_fw_state: enabled gluster_infra_fw_zone: public gluster_infra_fw_services: - glusterfs # Allowed values for gluster_infra_disktype - RAID6, RAID5, JBOD gluster_infra_disktype: RAID6 # gluster_infra_diskcount is the number of data disks in the RAID set. # Note for JBOD its 1 gluster_infra_diskcount: 10 gluster_infra_stripe_unit_size: 256 gluster_features_force_varlogsizecheck: false gluster_set_selinux_labels: true" ]
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/understanding-the-node_prep_inventory-yml-file
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_configurations_by_using_ansible_integration/providing-feedback-on-red-hat-documentation_ansible
14.8.6. Resuming a Guest Virtual Machine
14.8.6. Resuming a Guest Virtual Machine Restore a suspended guest virtual machine with virsh by using the resume option: This operation takes effect immediately, and the guest virtual machine parameters are preserved across suspend and resume operations.
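For example, to resume a guest that was previously paused with the virsh suspend command (the domain name guest1 is a placeholder):
virsh resume guest1
The domain ID or UUID can be supplied in place of the name.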
[ "virsh resume {domain-id, domain-name or domain-uuid}" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-Starting_suspending_resuming_saving_and_restoring_a_guest_virtual_machine-Resuming_a_guest_virtual_machine
Chapter 2. Preparing for the Migration
Chapter 2. Preparing for the Migration Before learning about the configuration changes in each feature area, you should ensure that your environment meets the migration requirements and understand how broker instances are configured in AMQ Broker 7. 2.1. Migration Requirements Before migrating to AMQ 7, your environment should meet the following requirements: AMQ 6 requirements You should be running AMQ 6.2.x or later. OpenWire clients should use OpenWire version 10 or later. AMQ 7 requirements You should have a supported operating system and JVM. You can view supported configurations for AMQ 7 at: https://access.redhat.com/articles/2791941 AMQ Broker 7 should be installed. For more information, see Installing AMQ Broker in Getting Started with AMQ Broker . 2.2. Creating a Broker Instance Before migrating to AMQ 7, you should create a AMQ broker instance. You can configure this broker instance as you learn about the configuration differences in AMQ 7 that are described in this guide. When you installed AMQ Broker, the binaries, libraries, and other important files needed to run AMQ Broker were installed. However, in AMQ 7, you must explicitly create a broker instance whenever a new broker is needed. Each broker instance is a separate directory containing its own configuration and runtime data. Note Keeping broker installation and configuration separate means that you can install AMQ Broker just once in a central location and then create as many broker instances as you require. Additionally, keeping installation and configuration separate makes it easier to manage and upgrade your brokers as needed. Prerequisites AMQ Broker 7 must be installed. Procedure Navigate to the location where you want to create the broker instance. Do one of the following to create the broker instance: If... Then... AMQ Broker 7 is installed on the same machine as AMQ 6 Use the artemis create command with the --port-offset parameter to create the new broker instance that will not conflict with your existing AMQ 6 broker. Note AMQ Broker 7 and AMQ 6 both listen for client traffic on the same set of default ports. Therefore, you must offset the default ports on the AMQ Broker broker instance to avoid potential conflicts. This example creates a new broker instance that listens for client traffic on different ports than the AMQ 6 broker: AMQ Broker 7 and AMQ 6 are installed on separate machines Use the artemis create command to create the new broker instance. This example creates a new broker instance and prompts you for any required values: Creating ActiveMQ Artemis instance at: /var/lib/amq7/mybroker --user: is mandatory with this configuration: Please provide the default username: user --password: is mandatory with this configuration: Please provide the default password: password --role: is mandatory with this configuration: Please provide the default role: amq --allow-anonymous Related Information For full details on creating broker instances, see Creating a broker instance in Getting Started with AMQ Broker . 2.3. Understanding the Broker Instance Directory Structure Each AMQ 7 broker instance contains its own directory. You should understand the directory content and where to find the configuration files for the broker instance you created. When you create a broker instance, the following directory structure is created: BROKER_INSTANCE_DIR The location where the broker instance was created. This is a different location than the AMQ Broker installation. 
/bin Shell scripts for starting and stopping the broker instance. /data Contains broker state data, such as the message store. /etc The broker instance's configuration files. These are the files you need to access to configure the broker instance. /lock Contains the cli.lock file. /log Log files for the broker instance. /tmp A utility directory for temporary files. 2.4. How Brokers are Configured in AMQ 7 You should understand how the broker instance you created should be configured and which configuration files you will need to edit. Like AMQ 6, you configure AMQ 7 broker instances by editing plain text and XML files. Changing a broker's configuration involves opening the appropriate configuration file in the broker instance's directory, locating the proper element in the XML hierarchy, and then making the actual change, which typically involves adding or removing XML elements and attributes. Within BROKER_INSTANCE_DIR /etc , there are several configuration files that you can edit: Configuration File Description broker.xml The main configuration file. Similar to activemq.xml in AMQ 6, you use this file to configure most aspects of the broker, such as acceptors for incoming network connections, security settings, message addresses, and so on. bootstrap.xml The file that AMQ Broker uses to start the broker instance. You use it to change the location of the main broker configuration file, configure the web server, and set some security settings. logging.properties You use this file to set logging properties for the broker instance. This file is similar to the org.ops4j.pax.logging.cfg file in AMQ 6. JAAS configuration files ( login.config , users.properties , roles.properties ) You use these files to set up authentication for user access to the broker instance. Migrating to AMQ 7 primarily involves editing the broker.xml file. For more information about the broker.xml structure and default configuration settings, see Understanding the default broker configuration in Configuring AMQ Broker . 2.5. Verifying that Clients Can Connect to the Broker Instance To verify that your existing clients can connect to the broker instance you created, you should start the broker instance and send some test messages. Procedure Start the broker instance by using one of the following commands: To... Use this command... Start the broker in the foreground Start the broker as a service The broker instance starts. By default, an OpenWire connector is started on the broker instance on the same port as your AMQ 6 broker. This should enable your existing clients to connect to the broker instance. If you want to check the status of the broker instance, open the BROKER_INSTANCE_DIR /log/artemis.log file. In your AMQ 6 broker, use the producer command to send some test messages to the AMQ 7 broker instance. This command sends five test messages to an AMQ 7 broker instance hosted on localhost and listening on the default acceptor: JBossA-MQ:karaf@root> producer --brokerUrl tcp://0.0.0.0:61616 --message "Test message" --messageCount 5 If you offset the port numbers when you created the broker instance (using --port-offset ), make sure that you use the correct port number for the broker URL. For example, if you set the port offset to 100, then you would need to set --brokerUrl to tcp://0.0.0.0:61716 . In your AMQ 6 broker, use the consumer command to verify that you can consume the test messages that you sent to the AMQ 7 broker instance.
This command receives the five test messages sent to the AMQ 7 broker instance: JBossA-MQ:karaf@root> consumer --brokerUrl tcp://0.0.0.0:61616 You can also verify that the messages were sent and received by checking the INSTALL_DIR /data/log/amq.log file on the AMQ 6 broker. Stop the broker instance:
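As a point of reference for Section 2.4, the OpenWire traffic used in this verification is handled by an acceptor element in BROKER_INSTANCE_DIR /etc/broker.xml . The following sketch shows roughly what the generated acceptor looks like; the exact contents produced by artemis create can differ, and the port shown assumes the --port-offset 100 example, which shifts the default 61616 to 61716:
<acceptors>
    <!-- OpenWire clients migrated from AMQ 6 connect to this acceptor -->
    <acceptor name="artemis">tcp://0.0.0.0:61716?protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE</acceptor>
</acceptors>
If your clients cannot connect, confirming that OPENWIRE is listed in the protocols parameter of the acceptor is a reasonable first check.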
[ "sudo mkdir /var/lib/amq7 cd /var/lib/amq7", "sudo INSTALL_DIR /bin/artemis create mybroker --port-offset 100 --user admin --password pass --role amq --allow-anonymous true", "sudo INSTALL_DIR /bin/artemis create mybroker", "ls /var/lib/amq7/mybroker bin data etc lock log tmp", "sudo BROKER_INSTANCE_DIR /bin/artemis run", "sudo BROKER_INSTANCE_DIR /bin/artemis-service start", "JBossA-MQ:karaf@root> producer --brokerUrl tcp://0.0.0.0:61616 --message \"Test message\" --messageCount 5", "JBossA-MQ:karaf@root> consumer --brokerUrl tcp://0.0.0.0:61616", "BROKER_INSTANCE_DIR /bin/artemis stop" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/migrating_to_red_hat_amq_7/preparing_for_the_migration
Chapter 1. Introduction to hybrid committed spend
Chapter 1. Introduction to hybrid committed spend This document provides an overview and instructions to begin using the hybrid committed spend service, including prerequisites and instructions for connecting your cloud environments. To access and use hybrid committed spend, your organization must have a hyperscaler drawdown agreement with Red Hat. 1.1. Understanding hybrid committed spend Hybrid committed spend is a service in the Red Hat toolchain that enables your organization to automate the tracking of hyperscaler drawdown spending with select hyperscalers. Hybrid committed spend automatically aggregates and processes data from the required integrations reducing the need for manual reporting. The hybrid committed spend service utilizes Red Hat's cloud spend integration toolchain to gather and process the data from Red Hat and cloud provider integrations after initial configuration. 1.2. Understanding hybrid committed spend data usage and security When you configure hybrid committed spend, the Red Hat cloud spend integration toolchain processes all of your organization's hyperscaler spend for the configured account. The data is then sent to a secure system that is part of Red Hat's financial data processing servers. It will be used to calculate hyperscaler drawdown against your commitment. Data is only collected for the purpose of calculating hyperscaler drawdown and spend tracking. It is not shared internally or externally. Red Hat employs technical and organizational measures designed to protect your data. No development engineer has access to view the customer data directly. Outside of the cloud integration toolchain, data analysis only occurs for the purpose of debugging. 1.3. Accessing the hybrid committed spend service You can access the hybrid committed spend service from Red Hat Hybrid Cloud Console . To access and use hybrid committed spend, your organization must sign a hybrid committed spend agreement with Red Hat. This agreement enables you to share data with Red Hat to calculate drawdown. If you are unable to access hybrid committed spend from Red Hat Hybrid Cloud Console , you must first identify if your organization signed a hybrid committed spend contract. Your account must also have HCS viewer permissions. Contact your Red Hat sales or support representative for more information. Procedure Navigate to Red Hat Hybrid Cloud Console . Click the Services menu. From the left navigation menu, click Spend Management . Click the Hybrid Committed Spend card. 1.4. Configuring hybrid committed spend integrations An integration is a provider account that is connected to hybrid committed spend to be monitored for drawdown. To use hybrid committed spend to monitor your hyperscaler drawdown, you must first connect a data integration to hybrid committed spend. Once an integration is connected to hybrid committed spend it will automatically send your cost and usage data to Red Hat. You can enable an integration using unfiltered cloud provider cost data for a more detailed overview, or limit your account to a minimal amount of filtered data required to see Red Hat related spend for drawdown. Currently, the cloud spend integration toolchain can track drawdown for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. From the Integrations page , you can view, edit, and delete integrations connected to hybrid committed spend. 
For more information about how to add your cloud provider to hybrid committed spend, follow these guides: Integrating Amazon Web Services (AWS) data into hybrid committed spend Integrating Google Cloud data into hybrid committed spend Integrating Microsoft Azure data into hybrid committed spend
null
https://docs.redhat.com/en/documentation/hybrid_committed_spend/1-latest/html/getting_started_with_hybrid_committed_spend/assembly-introduction-to-hcs
11.8. Storage Tasks
11.8. Storage Tasks 11.8.1. Uploading Images to a Data Storage Domain You can upload virtual disk images and ISO images to your data storage domain in the Administration Portal or with the REST API. Note To upload images with the REST API, see IMAGETRANSFERS and IMAGETRANSFER in the REST API Guide . QEMU-compatible virtual disks can be attached to virtual machines. Virtual disk types must be either QCOW2 or raw. Disks created from a QCOW2 virtual disk cannot be shareable, and the QCOW2 virtual disk file must not have a backing file. ISO images can be attached to virtual machines as CDROMs or used to boot virtual machines. Prerequisites The upload function uses HTML 5 APIs, which requires your environment to have the following: Image I/O Proxy ( ovirt-imageio-proxy ), configured with engine-setup . See Configuring the Red Hat Virtualization Manager for details. Certificate authority, imported into the web browser used to access the Administration Portal. To import the certificate authority, browse to https:// engine_address /ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA and enable all the trust settings. Refer to the instructions to install the certificate authority in Firefox , Internet Explorer , or Google Chrome . Browser that supports HTML 5, such as Firefox 35, Internet Explorer 10, Chrome 13, or later. Uploading an Image to a Data Storage Domain Click Storage Disks . Select Start from the Upload menu. Click Choose File and select the image to upload. Fill in the Disk Options fields. See Section 13.6.2, "Explanation of Settings in the New Virtual Disk Window" for descriptions of the relevant fields. Click OK . A progress bar indicates the status of the upload. You can pause, cancel, or resume uploads from the Upload menu. Increasing the Upload Timeout Value If the upload times out and you see the message, Reason: timeout due to transfer inactivity , increase the timeout value: Restart the ovirt-engine service: 11.8.2. Moving Storage Domains to Maintenance Mode A storage domain must be in maintenance mode before it can be detached and removed. This is required to redesignate another data domain as the master data domain. Important You cannot move a storage domain into maintenance mode if a virtual machine has a lease on the storage domain. The virtual machine needs to be shut down, or the lease needs to be removed or moved to a different storage domain first. See the Virtual Machine Management Guide for information about virtual machine leases. Expanding iSCSI domains by adding more LUNs can only be done when the domain is active. Moving storage domains to maintenance mode Shut down all the virtual machines running on the storage domain. Click Storage Domains . Click the storage domain's name to open the details view. Click the Data Center tab. Click Maintenance . Note The Ignore OVF update failure check box allows the storage domain to go into maintenance mode even if the OVF update fails. Click OK . The storage domain is deactivated and has an Inactive status in the results list. You can now edit, detach, remove, or reactivate the inactive storage domains from the data center. Note You can also activate, detach, and place domains into maintenance mode using the Storage tab in the details view of the data center it is associated with. 11.8.3. Editing Storage Domains You can edit storage domain parameters through the Administration Portal.
Depending on the state of the storage domain, either active or inactive, different fields are available for editing. Fields such as Data Center , Domain Function , Storage Type , and Format cannot be changed. Active : When the storage domain is in an active state, the Name , Description , Comment , Warning Low Space Indicator (%) , Critical Space Action Blocker (GB) , Wipe After Delete , and Discard After Delete fields can be edited. The Name field can only be edited while the storage domain is active. All other fields can also be edited while the storage domain is inactive. Inactive : When the storage domain is in maintenance mode or unattached, thus in an inactive state, you can edit all fields except Name , Data Center , Domain Function , Storage Type , and Format . The storage domain must be inactive to edit storage connections, mount options, and other advanced parameters. This is only supported for NFS, POSIX, and Local storage types. Note iSCSI storage connections cannot be edited via the Administration Portal, but can be edited via the REST API. See Updating Storage Connections in the REST API Guide . Editing an Active Storage Domain Click Storage Domains and select a storage domain. Click Manage Domain . Edit the available fields as required. Click OK . Editing an Inactive Storage Domain Click Storage Domains . If the storage domain is active, move it to maintenance mode: Click the storage domain's name to open the details view. Click the Data Center tab. Click Maintenance . Click OK . Click Manage Domain . Edit the storage path and other details as required. The new connection details must be of the same storage type as the original connection. Click OK . Activate the storage domain: Click the storage domain's name to open the details view. Click the Data Center tab. Click Activate . 11.8.4. Updating OVFs By default, OVFs are updated every 60 minutes. However, if you have imported an important virtual machine or made a critical update, you can update OVFs manually. Updating OVFs Click Storage Domains . Select the storage domain and click More Actions ( ), then click Update OVFs . The OVFs are updated and a message appears in Events . 11.8.5. Activating Storage Domains from Maintenance Mode If you have been making changes to a data center's storage, you have to put storage domains into maintenance mode. Activate a storage domain to resume using it. Click Storage Domains . Click an inactive storage domain's name to open the details view. Click the Data Centers tab. Click Activate . Important If you attempt to activate the ISO domain before activating the data domain, an error message displays and the domain is not activated. 11.8.6. Detaching a Storage Domain from a Data Center Detach a storage domain from one data center to migrate it to another data center. Detaching a Storage Domain from the Data Center Click Storage Domains . Click the storage domain's name to open the details view. Click the Data Center tab. Click Maintenance . Click OK to initiate maintenance mode. Click Detach . Click OK to detach the storage domain. The storage domain has been detached from the data center, ready to be attached to another data center. 11.8.7. Attaching a Storage Domain to a Data Center Attach a storage domain to a data center. Attaching a Storage Domain to a Data Center Click Storage Domains . Click the storage domain's name to open the details view. Click the Data Center tab. Click Attach . Select the appropriate data center. Click OK . 
The storage domain is attached to the data center and is automatically activated. 11.8.8. Removing a Storage Domain You have a storage domain in your data center that you want to remove from the virtualized environment. Procedure Click Storage Domains . Move the storage domain to maintenance mode and detach it: Click the storage domain's name to open the details view. Click the Data Center tab. Click Maintenance , then click OK . Click Detach , then click OK . Click Remove . Optionally select the Format Domain, i.e. Storage Content will be lost! check box to erase the content of the domain. Click OK . The storage domain is permanently removed from the environment. 11.8.9. Destroying a Storage Domain A storage domain encountering errors may not be able to be removed through the normal procedure. Destroying a storage domain forcibly removes the storage domain from the virtualized environment. Destroying a Storage Domain Click Storage Domains . Select the storage domain and click More Actions ( ), then click Destroy . Select the Approve operation check box. Click OK . 11.8.10. Creating a Disk Profile Disk profiles define the maximum level of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Disk profiles are created based on storage profiles defined under data centers, and must be manually assigned to individual virtual disks for the profile to take effect. This procedure assumes you have already defined one or more storage quality of service entries under the data center to which the storage domain belongs. Creating a Disk Profile Click Storage Domains . Click the data storage domain's name to open the details view. Click the Disk Profiles tab. Click New . Enter a Name and a Description for the disk profile. Select the quality of service to apply to the disk profile from the QoS list. Click OK . 11.8.11. Removing a Disk Profile Remove an existing disk profile from your Red Hat Virtualization environment. Removing a Disk Profile Click Storage Domains . Click the data storage domain's name to open the details view. Click the Disk Profiles tab. Select the disk profile to remove. Click Remove . Click OK . If the disk profile was assigned to any virtual disks, the disk profile is removed from those virtual disks. 11.8.12. Viewing the Health Status of a Storage Domain Storage domains have an external health status in addition to their regular Status . The external health status is reported by plug-ins or external systems, or set by an administrator, and appears to the left of the storage domain's Name as one of the following icons: OK : No icon Info : Warning : Error : Failure : To view further details about the storage domain's health status, click the storage domain's name to open the details view, and click the Events tab. The storage domain's health status can also be viewed using the REST API. A GET request on a storage domain will include the external_status element, which contains the health status. You can set a storage domain's health status in the REST API via the events collection. For more information, see Adding Events in the REST API Guide . 11.8.13. Setting Discard After Delete for a Storage Domain When the Discard After Delete check box is selected, a blkdiscard command is called on a logical volume when it is removed and the underlying storage is notified that the blocks are free. The storage array can use the freed space and allocate it when requested. Discard After Delete only works on block storage. 
The flag is not available on the Red Hat Virtualization Manager for file storage, for example NFS. Restrictions: Discard After Delete is only available on block storage domains, such as iSCSI or Fibre Channel. The underlying storage must support Discard . Discard After Delete can be enabled either when creating a block storage domain or when editing a block storage domain. See Preparing and Adding Block Storage and Editing Storage Domains . 11.8.14. Enabling 4K support on environments with more than 250 hosts By default, GlusterFS domains and local storage domains support 4K block size on Red Hat Virtualization environments with up to 250 hosts. 4K block size can provide better performance, especially when using large files, and it is also necessary when you use tools that require 4K compatibility, such as VDO. The lockspace area that Sanlock allocates is 1 MB when the maximum number of hosts is the default 250. When you increase the maximum number of hosts while using 4K storage, the lockspace area is larger. For example, when using 2000 hosts, the lockspace area could be as large as 8 MB. You can enable 4K block support on environments with more than 250 hosts by setting the engine configuration parameter MaxNumberOfHostsInStoragePool . Procedure On the Manager machine, enable the required maximum number of hosts: Restart the JBoss Application Server: For example, if you have a cluster with 300 hosts, enter: Verification View the value of the MaxNumberOfHostsInStoragePool parameter on the Manager: 11.8.15. Disabling 4K support By default, GlusterFS domains and local storage domains support 4K block size. 4K block size can provide better performance, especially when using large files, and it is also necessary when you use tools that require 4K compatibility, such as VDO. You can disable 4K block support. Procedure Ensure that 4K block support is enabled. Edit /etc/vdsm/vdsm.conf.d/gluster.conf and set enable_4k_storage to false . For example:
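A minimal sketch of that drop-in file follows; the [gluster] section header is an assumption based on the usual VDSM configuration layout, while the setting itself matches the command listing below:

# /etc/vdsm/vdsm.conf.d/gluster.conf
# Use to disable 4K support if needed.
[gluster]
enable_4k_storage = false

Separately, as noted in "Viewing the Health Status of a Storage Domain" above, the external health status can also be read over the REST API. An illustrative sketch of such a request is shown below; the engine address, credentials, and storage domain ID are placeholders:

curl -k -u admin@internal:<password> \
     -H "Accept: application/xml" \
     https://<engine_address>/ovirt-engine/api/storagedomains/<storage_domain_id>
# Inspect the <external_status> element in the returned XML.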
[ "engine-config -s TransferImageClientInactivityTimeoutInSeconds=6000", "systemctl restart ovirt-engine", "engine-config -s MaxNumberOfHostsInStoragePool= NUMBER_OF_HOSTS", "service jboss-as restart", "engine-config -s MaxNumberOfHostsInStoragePool=300 service jboss-as restart", "engine-config --get=MaxNumberOfHostsInStoragePool MaxNumberOfHostsInStoragePool: 250 version: general", "vdsm-client Host getCapabilities ... { \"GLUSTERFS\" : [ 0, 512, 4096, ] ...", "vi /etc/vdsm/vdsm.conf.d/gluster.conf Use to disable 4k support if needed. enable_4k_storage = false" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-storage_tasks
Chapter 4. Installing a cluster with customizations
Chapter 4. Installing a cluster with customizations Use the following procedures to install an OpenShift Container Platform cluster with customizations using the Agent-based Installer. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall or proxy, you configured it to allow the sites that your cluster requires access to. 4.2. Installing OpenShift Container Platform with the Agent-based Installer The following procedures deploy a single-node OpenShift Container Platform in a disconnected environment. You can use these procedures as a basis and modify according to your requirements. 4.2.1. Downloading the Agent-based Installer Use this procedure to download the Agent-based Installer and the CLI needed for your installation. Procedure Log in to the OpenShift Container Platform web console using your login credentials. Navigate to Datacenter . Click Run Agent-based Installer locally . Select the operating system and architecture for the OpenShift Installer and Command line interface . Click Download Installer to download and extract the install program. Download or copy the pull secret by clicking on Download pull secret or Copy pull secret . Click Download command-line tools and place the openshift-install binary in a directory that is on your PATH . 4.2.2. Verifying the supported architecture for an Agent-based installation Before installing an OpenShift Container Platform cluster using the Agent-based Installer, you can verify the supported architecture on which you can install the cluster. This procedure is optional. Prerequisites You installed the OpenShift CLI ( oc ). You have downloaded the installation program. Procedure Log in to the OpenShift CLI ( oc ). Check your release payload by running the following command: USD ./openshift-install version Example output ./openshift-install 4.16.0 built from commit abc123def456 release image quay.io/openshift-release-dev/ocp-release@sha256:123abc456def789ghi012jkl345mno678pqr901stu234vwx567yz0 release architecture amd64 If you are using the release image with the multi payload, the release architecture displayed in the output of this command is the default architecture. To check the architecture of the payload, run the following command: USD oc adm release info <release_image> -o jsonpath="{ .metadata.metadata}" 1 1 Replace <release_image> with the release image. For example: quay.io/openshift-release-dev/ocp-release@sha256:123abc456def789ghi012jkl345mno678pqr901stu234vwx567yz0 . .Example output when the release image uses the multi payload {"release.openshift.io architecture":"multi"} If you are using the release image with the multi payload, you can install the cluster on different architectures such as arm64 , amd64 , s390x , and ppc64le . Otherwise, you can install the cluster only on the release architecture displayed in the output of the openshift-install version command. 4.2.3. Creating the preferred configuration inputs Use this procedure to create the preferred configuration inputs used to create the agent image. Procedure Install nmstate dependency by running the following command: USD sudo dnf install /usr/bin/nmstatectl -y Place the openshift-install binary in a directory that is on your PATH. 
Create a directory to store the install configuration by running the following command: USD mkdir ~/<directory_name> Note This is the preferred method for the Agent-based installation. Using GitOps ZTP manifests is optional. Create the install-config.yaml file by running the following command: USD cat << EOF > ./<directory_name>/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/16 networkType: OVNKubernetes 3 serviceNetwork: - 172.30.0.0/16 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 EOF 1 Specify the system architecture. Valid values are amd64 , arm64 , ppc64le , and s390x . If you are using the release image with the multi payload, you can install the cluster on different architectures such as arm64 , amd64 , s390x , and ppc64le . Otherwise, you can install the cluster only on the release architecture displayed in the output of the openshift-install version command. For more information, see "Verifying the supported architecture for installing an Agent-based Installer cluster". 2 Required. Specify your cluster name. 3 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 4 Specify your platform. Note For bare metal platforms, host settings made in the platform section of the install-config.yaml file are used by default, unless they are overridden by configurations made in the agent-config.yaml file. 5 Specify your pull secret. 6 Specify your SSH public key. Note If you set the platform to vSphere or baremetal , you can configure IP address endpoints for cluster nodes in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) IPv6 is supported only on bare metal platforms. Example of dual-stack networking networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5 Note When you use a disconnected mirror registry, you must add the certificate file that you created previously for your mirror registry to the additionalTrustBundle field of the install-config.yaml file. Create the agent-config.yaml file by running the following command: USD cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 -hop-address: 192.168.111.2 -hop-interface: eno1 table-id: 254 EOF 1 This IP address is used to determine which node performs the bootstrapping process as well as running the assisted-service component. You must provide the rendezvous IP address when you do not specify at least one host's IP address in the networkConfig parameter. 
If this address is not provided, one IP address is selected from the provided hosts' networkConfig . 2 Optional: Host configuration. The number of hosts defined must not exceed the total number of hosts defined in the install-config.yaml file, which is the sum of the values of the compute.replicas and controlPlane.replicas parameters. 3 Optional: Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods. 4 Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installation program examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. 5 Optional: Configures the network interface of a host in NMState format. Additional resources Configuring regions and zones for a VMware vCenter Verifying the supported architecture for installing an Agent-based installer cluster Configuring the Agent-based Installer to use mirrored images 4.2.4. Creating additional manifest files As an optional task, you can create additional manifests to further configure your cluster beyond the configurations available in the install-config.yaml and agent-config.yaml files. Important Customizations to the cluster made by additional manifests are not validated, are not guaranteed to work, and might result in a nonfunctional cluster. 4.2.4.1. Creating a directory to contain additional manifests If you create additional manifests to configure your Agent-based installation beyond the install-config.yaml and agent-config.yaml files, you must create an openshift subdirectory within your installation directory. All of your additional machine configurations must be located within this subdirectory. Note The most common type of additional manifest you can add is a MachineConfig object. For examples of MachineConfig objects you can add during the Agent-based installation, see "Using MachineConfig objects to configure nodes" in the "Additional resources" section. Procedure On your installation host, create an openshift subdirectory within the installation directory by running the following command: USD mkdir <installation_directory>/openshift Additional resources Using MachineConfig objects to configure nodes 4.2.4.2. Disk partitioning In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. 
With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Prerequisites You have created an openshift subdirectory within your installation directory. Procedure Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml 4.2.5. Using ZTP manifests As an optional task, you can use GitOps Zero Touch Provisioning (ZTP) manifests to configure your installation beyond the options available through the install-config.yaml and agent-config.yaml files. Note GitOps ZTP manifests can be generated with or without configuring the install-config.yaml and agent-config.yaml files beforehand. If you chose to configure the install-config.yaml and agent-config.yaml files, the configurations will be imported to the ZTP cluster manifests when they are generated. Prerequisites You have placed the openshift-install binary in a directory that is on your PATH . Optional: You have created and configured the install-config.yaml and agent-config.yaml files. Procedure Generate ZTP cluster manifests by running the following command: USD openshift-install agent create cluster-manifests --dir <installation_directory> Important If you have created the install-config.yaml and agent-config.yaml files, those files are deleted and replaced by the cluster manifests generated through this command. 
Any configurations made to the install-config.yaml and agent-config.yaml files are imported to the ZTP cluster manifests when you run the openshift-install agent create cluster-manifests command. Navigate to the cluster-manifests directory by running the following command: USD cd <installation_directory>/cluster-manifests Configure the manifest files in the cluster-manifests directory. For sample files, see the "Sample GitOps ZTP custom resources" section. Disconnected clusters: If you did not define mirror configuration in the install-config.yaml file before generating the ZTP manifests, perform the following steps: Navigate to the mirror directory by running the following command: USD cd ../mirror Configure the manifest files in the mirror directory. Additional resources Sample GitOps ZTP custom resources . See Challenges of the network far edge to learn more about GitOps Zero Touch Provisioning (ZTP). 4.2.6. Encrypting the disk As an optional task, you can use this procedure to encrypt your disk or partition while installing OpenShift Container Platform with the Agent-based Installer. Prerequisites You have created and configured the install-config.yaml and agent-config.yaml files, unless you are using ZTP manifests. You have placed the openshift-install binary in a directory that is on your PATH . Procedure Generate ZTP cluster manifests by running the following command: USD openshift-install agent create cluster-manifests --dir <installation_directory> Important If you have created the install-config.yaml and agent-config.yaml files, those files are deleted and replaced by the cluster manifests generated through this command. Any configurations made to the install-config.yaml and agent-config.yaml files are imported to the ZTP cluster manifests when you run the openshift-install agent create cluster-manifests command. Note If you have already generated ZTP manifests, skip this step. Navigate to the cluster-manifests directory by running the following command: USD cd <installation_directory>/cluster-manifests Add the following section to the agent-cluster-install.yaml file: diskEncryption: enableOn: all 1 mode: tang 2 tangServers: "server1": "http://tang-server-1.example.com:7500" 3 1 Specify which nodes to enable disk encryption on. Valid values are none , all , master , and worker . 2 Specify which disk encryption mode to use. Valid values are tpmv2 and tang . 3 Optional: If you are using Tang, specify the Tang servers. Additional resources About disk encryption 4.2.7. Creating and booting the agent image Use this procedure to boot the agent image on your machines. Procedure Create the agent image by running the following command: USD openshift-install --dir <install_directory> agent create image Note Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with a default /etc/multipath.conf configuration. Boot the agent.x86_64.iso , agent.aarch64.iso , or agent.s390x.iso image on the bare metal machines. 4.2.8. Adding IBM Z agents with RHEL KVM Use the following procedure to manually add IBM Z(R) agents with RHEL KVM. Only use this procedure for IBM Z(R) clusters with RHEL KVM. Procedure Boot your RHEL KVM machine. 
To deploy the virtual server, run the virt-install command with the following parameters: USD virt-install --name <vm_name> \ --autostart \ --memory=<memory> \ --cpu host \ --vcpus=<vcpus> \ --cdrom <agent_iso_image> \ 1 --disk pool=default,size=<disk_pool_size> \ --network network:default,mac=<mac_address> \ --graphics none \ --noautoconsole \ --os-variant rhel9.0 \ --wait=-1 1 For the --cdrom parameter, specify the location of the ISO image on the HTTP or HTTPS server. 4.2.9. Verifying that the current installation host can pull release images After you boot the agent image and network services are made available to the host, the agent console application performs a pull check to verify that the current host can retrieve release images. If the primary pull check passes, you can quit the application to continue with the installation. If the pull check fails, the application performs additional checks, as seen in the Additional checks section of the TUI, to help you troubleshoot the problem. A failure for any of the additional checks is not necessarily critical as long as the primary pull check succeeds. If there are host network configuration issues that might cause an installation to fail, you can use the console application to make adjustments to your network configurations. Important If the agent console application detects host network configuration issues, the installation workflow will be halted until the user manually stops the console application and signals the intention to proceed. Procedure Wait for the agent console application to check whether or not the configured release image can be pulled from a registry. If the agent console application states that the installer connectivity checks have passed, wait for the prompt to time out to continue with the installation. Note You can still choose to view or change network configuration settings even if the connectivity checks have passed. However, if you choose to interact with the agent console application rather than letting it time out, you must manually quit the TUI to proceed with the installation. If the agent console application checks have failed, which is indicated by a red icon beside the Release image URL pull check, use the following steps to reconfigure the host's network settings: Read the Check Errors section of the TUI. This section displays error messages specific to the failed checks. Select Configure network to launch the NetworkManager TUI. Select Edit a connection and select the connection you want to reconfigure. Edit the configuration and select OK to save your changes. Select Back to return to the main screen of the NetworkManager TUI. Select Activate a Connection . Select the reconfigured network to deactivate it. Select the reconfigured network again to reactivate it. Select Back and then select Quit to return to the agent console application. Wait at least five seconds for the continuous network checks to restart using the new network configuration. If the Release image URL pull check succeeds and displays a green icon beside the URL, select Quit to exit the agent console application and continue with the installation. 4.2.10. Tracking and verifying installation progress Use the following procedure to track installation progress and to verify a successful installation. Prerequisites You have configured a DNS record for the Kubernetes API server. 
Procedure Optional: To know when the bootstrap host (rendezvous host) reboots, run the following command: USD ./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <install_directory> , specify the path to the directory where the agent ISO was generated. 2 To view different installation details, specify warn , debug , or error instead of info . Example output ................................................................... ................................................................... INFO Bootstrap configMap status is complete INFO cluster bootstrap is complete The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. To track the progress and verify successful installation, run the following command: USD openshift-install --dir <install_directory> agent wait-for install-complete 1 1 For <install_directory> directory, specify the path to the directory where the agent ISO was generated. Example output ................................................................... ................................................................... INFO Cluster is installed INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com Note If you are using the optional method of GitOps ZTP manifests, you can configure IP address endpoints for cluster nodes through the AgentClusterInstall.yaml file in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) IPv6 is supported only on bare metal platforms. Example of dual-stack networking apiVIP: 192.168.11.3 ingressVIP: 192.168.11.4 clusterDeploymentRef: name: mycluster imageSetRef: name: openshift-4.16 networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes Additional resources See Deploying with dual-stack networking . See Configuring the install-config yaml file . See Configuring a three-node cluster to deploy three-node clusters in bare metal environments. See About root device hints . See NMState state examples . 4.3. Sample GitOps ZTP custom resources You can optionally use GitOps Zero Touch Provisioning (ZTP) custom resource (CR) objects to install an OpenShift Container Platform cluster with the Agent-based Installer. You can customize the following GitOps ZTP custom resources to specify more details about your OpenShift Container Platform cluster. The following sample GitOps ZTP custom resources are for a single-node cluster. 
Example agent-cluster-install.yaml file apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: name: test-agent-cluster-install namespace: cluster0 spec: clusterDeploymentRef: name: ostest imageSetRef: name: openshift-4.16 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 provisionRequirements: controlPlaneAgents: 1 workerAgents: 0 sshPublicKey: <ssh_public_key> Example cluster-deployment.yaml file apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: ostest namespace: cluster0 spec: baseDomain: test.metalkube.org clusterInstallRef: group: extensions.hive.openshift.io kind: AgentClusterInstall name: test-agent-cluster-install version: v1beta1 clusterName: ostest controlPlaneConfig: servingCertificates: {} platform: agentBareMetal: agentSelector: matchLabels: bla: aaa pullSecretRef: name: pull-secret Example cluster-image-set.yaml file apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.16 spec: releaseImage: registry.ci.openshift.org/ocp/release:4.16.0-0.nightly-2022-06-06-025509 Example infra-env.yaml file apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: myinfraenv namespace: cluster0 spec: clusterRef: name: ostest namespace: cluster0 cpuArchitecture: aarch64 pullSecretRef: name: pull-secret sshAuthorizedKey: <ssh_public_key> nmStateConfigLabelSelector: matchLabels: cluster0-nmstate-label-name: cluster0-nmstate-label-value Example nmstateconfig.yaml file apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.122.1 routes: config: - destination: 0.0.0.0/0 -hop-address: 192.168.122.1 -hop-interface: eth0 table-id: 254 interfaces: - name: "eth0" macAddress: 52:54:01:aa:aa:a1 Example pull-secret.yaml file apiVersion: v1 kind: Secret type: kubernetes.io/dockerconfigjson metadata: name: pull-secret namespace: cluster0 stringData: .dockerconfigjson: <pull_secret> Additional resources See Challenges of the network far edge to learn more about GitOps Zero Touch Provisioning (ZTP). 4.4. Gathering log data from a failed Agent-based installation Use the following procedure to gather log data about a failed Agent-based installation to provide for a support case. Prerequisites You have configured a DNS record for the Kubernetes API server. Procedure Run the following command and collect the output: USD ./openshift-install --dir <installation_directory> agent wait-for bootstrap-complete --log-level=debug Example error message ... ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded If the output from the command indicates a failure, or if the bootstrap is not progressing, run the following command to connect to the rendezvous host and collect the output: USD ssh core@<node-ip> agent-gather -O >agent-gather.tar.xz Note Red Hat Support can diagnose most issues using the data gathered from the rendezvous host, but if some hosts are not able to register, gathering this data from every host might be helpful. 
If the bootstrap completes and the cluster nodes reboot, run the following command and collect the output: USD ./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug If the output from the command indicates a failure, perform the following steps: Export the kubeconfig file to your environment by running the following command: USD export KUBECONFIG=<install_directory>/auth/kubeconfig Gather information for debugging by running the following command: USD oc adm must-gather Create a compressed file from the must-gather directory that was just created in your working directory by running the following command: USD tar cvaf must-gather.tar.gz <must_gather_directory> Excluding the /auth subdirectory, attach the installation directory used during the deployment to your support case on the Red Hat Customer Portal . Attach all other data gathered from this procedure to your support case.
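Returning to the additional-manifests step in section 4.2.4.1: the following is an illustrative sketch of a MachineConfig object that could be saved in the openshift subdirectory before generating the agent image. The manifest name, target role, file path, and file contents are assumptions for illustration, not values taken from this procedure:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-master-example-sysctl
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/sysctl.d/99-example.conf
          mode: 420
          overwrite: true
          contents:
            # "vm.swappiness = 10", base64-encoded; the tuning value is for illustration only
            source: data:text/plain;charset=utf-8;base64,dm0uc3dhcHBpbmVzcyA9IDEwCg==

As described in section 4.2.4.1, all additional machine configurations must be placed in the <installation_directory>/openshift subdirectory so that they are included when the agent image is created.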
[ "./openshift-install version", "./openshift-install 4.16.0 built from commit abc123def456 release image quay.io/openshift-release-dev/ocp-release@sha256:123abc456def789ghi012jkl345mno678pqr901stu234vwx567yz0 release architecture amd64", "oc adm release info <release_image> -o jsonpath=\"{ .metadata.metadata}\" 1", "{\"release.openshift.io architecture\":\"multi\"}", "sudo dnf install /usr/bin/nmstatectl -y", "mkdir ~/<directory_name>", "cat << EOF > ./<directory_name>/install-config.yaml apiVersion: v1 baseDomain: test.example.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker replicas: 0 controlPlane: architecture: amd64 hyperthreading: Enabled name: master replicas: 1 metadata: name: sno-cluster 2 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/16 networkType: OVNKubernetes 3 serviceNetwork: - 172.30.0.0/16 platform: 4 none: {} pullSecret: '<pull_secret>' 5 sshKey: '<ssh_pub_key>' 6 EOF", "networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5", "cat > agent-config.yaml << EOF apiVersion: v1beta1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: 2 - hostname: master-0 3 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 rootDeviceHints: 4 deviceName: /dev/sdb networkConfig: 5 interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.111.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.2 next-hop-interface: eno1 table-id: 254 EOF", "mkdir <installation_directory>/openshift", "variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install agent create cluster-manifests --dir <installation_directory>", "cd <installation_directory>/cluster-manifests", "cd ../mirror", "openshift-install agent create cluster-manifests --dir <installation_directory>", "cd <installation_directory>/cluster-manifests", "diskEncryption: enableOn: all 1 mode: tang 2 tangServers: \"server1\": \"http://tang-server-1.example.com:7500\" 3", "openshift-install --dir <install_directory> agent create image", "virt-install --name <vm_name> --autostart --memory=<memory> --cpu host --vcpus=<vcpus> --cdrom <agent_iso_image> \\ 1 --disk pool=default,size=<disk_pool_size> --network network:default,mac=<mac_address> --graphics none --noautoconsole --os-variant rhel9.0 --wait=-1", "./openshift-install --dir <install_directory> agent wait-for bootstrap-complete \\ 1 --log-level=info 2", "................................................................ ................................................................ 
INFO Bootstrap configMap status is complete INFO cluster bootstrap is complete", "openshift-install --dir <install_directory> agent wait-for install-complete 1", "................................................................ ................................................................ INFO Cluster is installed INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run INFO export KUBECONFIG=/home/core/installer/auth/kubeconfig INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno-cluster.test.example.com", "apiVIP: 192.168.11.3 ingressVIP: 192.168.11.4 clusterDeploymentRef: name: mycluster imageSetRef: name: openshift-4.16 networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes", "apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: name: test-agent-cluster-install namespace: cluster0 spec: clusterDeploymentRef: name: ostest imageSetRef: name: openshift-4.16 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 provisionRequirements: controlPlaneAgents: 1 workerAgents: 0 sshPublicKey: <ssh_public_key>", "apiVersion: hive.openshift.io/v1 kind: ClusterDeployment metadata: name: ostest namespace: cluster0 spec: baseDomain: test.metalkube.org clusterInstallRef: group: extensions.hive.openshift.io kind: AgentClusterInstall name: test-agent-cluster-install version: v1beta1 clusterName: ostest controlPlaneConfig: servingCertificates: {} platform: agentBareMetal: agentSelector: matchLabels: bla: aaa pullSecretRef: name: pull-secret", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.16 spec: releaseImage: registry.ci.openshift.org/ocp/release:4.16.0-0.nightly-2022-06-06-025509", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: myinfraenv namespace: cluster0 spec: clusterRef: name: ostest namespace: cluster0 cpuArchitecture: aarch64 pullSecretRef: name: pull-secret sshAuthorizedKey: <ssh_public_key> nmStateConfigLabelSelector: matchLabels: cluster0-nmstate-label-name: cluster0-nmstate-label-value", "apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false dns-resolver: config: server: - 192.168.122.1 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 next-hop-interface: eth0 table-id: 254 interfaces: - name: \"eth0\" macAddress: 52:54:01:aa:aa:a1", "apiVersion: v1 kind: Secret type: kubernetes.io/dockerconfigjson metadata: name: pull-secret namespace: cluster0 stringData: .dockerconfigjson: <pull_secret>", "./openshift-install --dir <installation_directory> agent wait-for bootstrap-complete --log-level=debug", "ERROR Bootstrap failed to complete: : bootstrap process timed out: context deadline exceeded", "ssh core@<node-ip> agent-gather -O >agent-gather.tar.xz", "./openshift-install --dir <install_directory> agent wait-for install-complete --log-level=debug", "export KUBECONFIG=<install_directory>/auth/kubeconfig", "oc adm must-gather", "tar cvaf must-gather.tar.gz <must_gather_directory>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_an_on-premise_cluster_with_the_agent-based_installer/installing-with-agent-based-installer
1.4. High-availability Service Management
1.4. High-availability Service Management High-availability service management provides the ability to create and manage high-availability cluster services in a Red Hat cluster. The key component for high-availability service management in a Red Hat cluster, rgmanager , implements cold failover for off-the-shelf applications. In a Red Hat cluster, an application is configured with other cluster resources to form a high-availability cluster service. A high-availability cluster service can fail over from one cluster node to another with no apparent interruption to cluster clients. Cluster-service failover can occur if a cluster node fails or if a cluster system administrator moves the service from one cluster node to another (for example, for a planned outage of a cluster node). To create a high-availability service, you must configure it in the cluster configuration file. A cluster service comprises cluster resources . Cluster resources are building blocks that you create and manage in the cluster configuration file - for example, an IP address, an application initialization script, or a Red Hat GFS shared partition. You can associate a cluster service with a failover domain . A failover domain is a subset of cluster nodes that are eligible to run a particular cluster service (refer to Figure 1.10, "Failover Domains" ). Note Failover domains are not required for operation. A cluster service can run on only one cluster node at a time to maintain data integrity. You can specify failover priority in a failover domain. Specifying failover priority consists of assigning a priority level to each node in a failover domain. The priority level determines the failover order - that is, which node a cluster service fails over to. If you do not specify failover priority, a cluster service can fail over to any node in its failover domain. Also, you can specify whether a cluster service is restricted to run only on nodes of its associated failover domain. (When associated with an unrestricted failover domain, a cluster service can start on any cluster node in the event no member of the failover domain is available.) In Figure 1.10, "Failover Domains" , Failover Domain 1 is configured to restrict failover within that domain; therefore, Cluster Service X can only fail over between Node A and Node B. Failover Domain 2 is also configured to restrict failover within its domain; additionally, it is configured for failover priority. Failover Domain 2 priority is configured with Node C as priority 1, Node B as priority 2, and Node D as priority 3. If Node C fails, Cluster Service Y fails over to Node B . If it cannot fail over to Node B, it tries failing over to Node D. Failover Domain 3 is configured with no priority and no restrictions. If the node that Cluster Service Z is running on fails, Cluster Service Z tries failing over to one of the nodes in Failover Domain 3. However, if none of those nodes is available, Cluster Service Z can fail over to any node in the cluster. Figure 1.10. Failover Domains Figure 1.11, "Web Server Cluster Service Example" shows an example of a high-availability cluster service that is a web server named "content-webserver". It is running on cluster node B and is in a failover domain that consists of nodes A, B, and D. In addition, the failover domain is configured with a failover priority to fail over to node D before node A and to restrict failover to nodes only in that failover domain.
The cluster service comprises these cluster resources: IP address resource - IP address 10.10.10.201. An application resource named "httpd-content" - a web server application init script /etc/init.d/httpd (specifying httpd ). A file system resource - Red Hat GFS named "gfs-content-webserver". Figure 1.11. Web Server Cluster Service Example Clients access the cluster service through the IP address 10.10.10.201, enabling interaction with the web server application, httpd-content. The httpd-content application uses the gfs-content-webserver file system. If node B were to fail, the content-webserver cluster service would fail over to node D. If node D were not available or also failed, the service would fail over to node A. Failover would occur with no apparent interruption to the cluster clients. The cluster service would be accessible from another cluster node via the same IP address as it was before failover.
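To make the web server example concrete, the following is a rough sketch of how the content-webserver service and its failover domain might appear in the cluster configuration file. The element and attribute layout follows the usual rgmanager conventions, but the node names, mount point, device path, and priority values are illustrative assumptions rather than details taken from this overview:

<rm>
  <failoverdomains>
    <!-- Ordered, restricted domain containing nodes A, B, and D;
         lower priority numbers are preferred -->
    <failoverdomain name="webserver-domain" ordered="1" restricted="1">
      <failoverdomainnode name="node-b" priority="1"/>
      <failoverdomainnode name="node-d" priority="2"/>
      <failoverdomainnode name="node-a" priority="3"/>
    </failoverdomain>
  </failoverdomains>
  <service name="content-webserver" domain="webserver-domain" autostart="1">
    <ip address="10.10.10.201" monitor_link="1"/>
    <script name="httpd-content" file="/etc/init.d/httpd"/>
    <clusterfs name="gfs-content-webserver" mountpoint="/var/www/html"
               device="/dev/vg_cluster/lv_content" fstype="gfs"/>
  </service>
</rm>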
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s1-service-management-overview-CSO
7.2. Configure Bonding Using the Text User Interface, nmtui
7.2. Configure Bonding Using the Text User Interface, nmtui The text user interface tool nmtui can be used to configure bonding in a terminal window. Issue the following command to start the tool: The text user interface appears. Any invalid command prints a usage message. To navigate, use the arrow keys or press Tab to step forwards and press Shift + Tab to step back through the options. Press Enter to select an option. The Space bar toggles the status of a check box. From the starting menu, select Edit a connection . Select Add ; the New Connection screen opens. Figure 7.1. The NetworkManager Text User Interface Add a Bond Connection menu Select Bond and then Create ; the Edit connection screen for the bond will open. Figure 7.2. The NetworkManager Text User Interface Configuring a Bond Connection menu At this point, port interfaces need to be added to the bond. To add them, select Add ; the New Connection screen opens. Once the type of connection has been chosen, select the Create button. Figure 7.3. The NetworkManager Text User Interface Configuring a New Bond Slave Connection menu The port's Edit Connection display appears; enter the required port's device name or MAC address in the Device section. If required, enter a clone MAC address to be used as the bond's MAC address by selecting Show to the right of the Ethernet label. Select the OK button to save the port. Note If the device is specified without a MAC address, the Device section will be automatically populated once the Edit Connection window is reloaded, but only if it successfully finds the device. Figure 7.4. The NetworkManager Text User Interface Configuring a Bond Slave Connection menu The name of the bond port appears in the Slaves section. Repeat the above steps to add further port connections. Review and confirm the settings before selecting the OK button. Figure 7.5. The NetworkManager Text User Interface Completed Bond See Section 7.8.1.1, "Configuring the Bond Tab" for definitions of the bond terms. See Section 3.2, "Configuring IP Networking with nmtui" for information on installing nmtui .
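For comparison, a bond with two ports can also be created non-interactively with nmcli . The following is an illustrative sketch rather than part of the nmtui procedure; the connection names, interface names ( em1 , em2 ), and bonding mode are placeholder assumptions:

# Create the bond connection profile
nmcli con add type bond con-name bond0 ifname bond0 mode active-backup
# Add two port (slave) connections bound to the bond
nmcli con add type bond-slave con-name bond0-port1 ifname em1 master bond0
nmcli con add type bond-slave con-name bond0-port2 ifname em2 master bond0
# Activate the ports and then the bond itself
nmcli con up bond0-port1
nmcli con up bond0-port2
nmcli con up bond0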
[ "~]USD nmtui" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configure_bonding_using_the_text_user_interface_nmtui
Chapter 9. Creating flavors for launching instances
Chapter 9. Creating flavors for launching instances An instance flavor is a resource template that specifies the virtual hardware profile for the instance. Cloud users must specify a flavor when they launch an instance. A flavor can specify the quantity of the following resources the Compute service must allocate to an instance: The number of vCPUs. The RAM, in MB. The root disk, in GB. The virtual storage, including secondary ephemeral storage and swap disk. You can specify who can use flavors by making the flavor public to all projects, or private to specific projects or domains. Flavors can use metadata, also referred to as "extra specs", to specify instance hardware support and quotas. The flavor metadata influences the instance placement, resource usage limits, and performance. For a complete list of available metadata properties, see Flavor metadata . You can also use the flavor metadata keys to find a suitable host aggregate to host the instance, by matching the extra_specs metadata set on the host aggregate. To schedule an instance on a host aggregate, you must scope the flavor metadata by prefixing the extra_specs key with the aggregate_instance_extra_specs: namespace. For more information, see Creating and managing host aggregates . A Red Hat OpenStack Platform (RHOSP) deployment includes the following set of default public flavors that your cloud users can use. Table 9.1. Default Flavors Name vCPUs RAM Root Disk Size m1.nano 1 128 MB 1 GB m1.micro 1 192 MB 1 GB Note Behavior set using flavor properties overrides behavior set using images. When a cloud user launches an instance, the properties of the flavor they specify override the properties of the image they specify. 9.1. Creating a flavor You can create and manage specialized flavors for specific functionality or behaviors, for example: Change default memory and capacity to suit the underlying hardware needs. Add metadata to force a specific I/O rate for the instance or to match a host aggregate. Procedure Create a flavor that specifies the basic resources to make available to an instance: Replace <size_mb> with the size of RAM to allocate to an instance created with this flavor. Replace <size_gb> with the size of root disk to allocate to an instance created with this flavor. Replace <no_vcpus> with the number of vCPUs to reserve for an instance created with this flavor. Optional: Specify the --private and --project options to make the flavor accessible only by a particular project or group of users. Replace <project_id> with the ID of the project that can use this flavor to create instances. If you do not specify the accessibility, the flavor defaults to public, which means that it is available to all projects. Note You cannot make a public flavor private after it has been created. Replace <flavor_name> with a unique name for your flavor. For more information about flavor arguments, see Flavor arguments . Optional: To specify flavor metadata, set the required properties by using key-value pairs: Replace <key> with the metadata key of the property you want to allocate to an instance that is created with this flavor. For a list of available metadata keys, see Flavor metadata . Replace <value> with the value of the metadata key you want to allocate to an instance that is created with this flavor. Replace <flavor_name> with the name of your flavor. For example, an instance that is launched by using the following flavor has two CPU sockets, each with two CPUs (an illustrative command sketch is shown after the metadata tables below): 9.2.
Flavor arguments The openstack flavor create command has one positional argument, <flavor_name> , to specify the name of your new flavor. The following table details the optional arguments that you can specify as required when you create a new flavor. Table 9.2. Optional flavor arguments Optional argument Description --id Unique ID for the flavor. The default value, auto , generates a UUID4 value. You can use this argument to manually specify an integer or UUID4 value. --ram (Mandatory) Size of memory to make available to the instance, in MB. Default: 256 MB --disk (Mandatory) Amount of disk space to use for the root (/) partition, in GB. The root disk is an ephemeral disk that the base image is copied into. When an instance boots from a persistent volume, the root disk is not used. Note Creation of an instance with a flavor that has --disk set to 0 requires that the instance boots from volume. Default: 0 GB --ephemeral Amount of disk space to use for the ephemeral disks, in GB. Defaults to 0 GB, which means that no secondary ephemeral disk is created. Ephemeral disks offer machine local disk storage linked to the lifecycle of the instance. Ephemeral disks are not included in any snapshots. This disk is destroyed and all data is lost when the instance is deleted. Default: 0 GB --swap Swap disk size in MB. Do not specify swap in a flavor if the Compute service back end storage is not local storage. Default: 0 GB --vcpus (Mandatory) Number of virtual CPUs for the instance. Default: 1 --public The flavor is available to all projects. By default, a flavor is public and available to all projects. --private The flavor is only available to the projects specified by using the --project option. If you create a private flavor but add no projects to it then the flavor is only available to the cloud administrator. --property Metadata, or "extra specs", specified by using key-value pairs in the following format: --property <key=value> Repeat this option to set multiple properties. --project Specifies the project that can use the private flavor. You must use this argument with the --private option. If you do not specify any projects, the flavor is visible only to the admin user. Repeat this option to allow access to multiple projects. --project-domain Specifies the project domain that can use the private flavor. You must use this argument with the --private option. Repeat this option to allow access to multiple project domains. --description Description of the flavor. Limited to 65535 characters in length. You can use only printable characters. 9.3. Flavor metadata Use the --property option to specify flavor metadata when you create a flavor. Flavor metadata is also referred to as extra specs . Flavor metadata determines instance hardware support and quotas, which influence instance placement, instance limits, and performance. Instance resource usage Use the property keys in the following table to configure limits on CPU, memory and disk I/O usage by instances. Note The extra specs for limiting instance CPU resource usage are host-specific tunable properties that are passed directly to libvirt, which then passes the limits onto the host OS. Therefore, the supported instance CPU resource limits configurations are dependent on the underlying host OS. For more information on how to configure instance CPU resource usage for the Compute nodes in your RHOSP deployment, see Understanding cgroups in the RHEL 9 documentation, and CPU Tuning in the Libvirt documentation. Table 9.3. 
Flavor metadata for resource usage Key Description quota:cpu_shares Specifies the proportional weighted share of CPU time for the domain. Defaults to the OS provided defaults. The Compute scheduler weighs this value relative to the setting of this property on other instances in the same domain. For example, an instance that is configured with quota:cpu_shares=2048 is allocated double the CPU time as an instance that is configured with quota:cpu_shares=1024 . quota:cpu_period Specifies the period of time within which to enforce the cpu_quota , in microseconds. Within the cpu_period , each vCPU cannot consume more than cpu_quota of runtime. Set to a value in the range 1000 - 1000000. Set to 0 to disable. quota:cpu_quota Specifies the maximum allowed bandwidth for the vCPU in each cpu_period , in microseconds: Set to a value in the range 1000 - 18446744073709551. Set to 0 to disable. Set to a negative value to allow infinite bandwidth. You can use cpu_quota and cpu_period to ensure that all vCPUs run at the same speed. For example, you can use the following flavor to launch an instance that can consume a maximum of only 50% CPU of a physical CPU computing capability: Instance disk tuning Use the property keys in the following table to tune the instance disk performance. Note The Compute service applies the following quality of service settings to storage that the Compute service has provisioned, such as ephemeral storage. To tune the performance of Block Storage (cinder) volumes, you must also configure and associate a Quality of Service (QoS) specification for the volume type. For more information, see Block Storage service (cinder) Quality of Service specifications in the Storage Guide . Table 9.4. Flavor metadata for disk tuning Key Description quota:disk_read_bytes_sec Specifies the maximum disk reads available to an instance, in bytes per second. quota:disk_read_iops_sec Specifies the maximum disk reads available to an instance, in IOPS. quota:disk_write_bytes_sec Specifies the maximum disk writes available to an instance, in bytes per second. quota:disk_write_iops_sec Specifies the maximum disk writes available to an instance, in IOPS. quota:disk_total_bytes_sec Specifies the maximum I/O operations available to an instance, in bytes per second. quota:disk_total_iops_sec Specifies the maximum I/O operations available to an instance, in IOPS. Instance network traffic bandwidth Use the property keys in the following table to configure bandwidth limits on the instance network traffic by configuring the VIF I/O options. Note The quota :vif_* properties are deprecated. Instead, you should use the Networking (neutron) service Quality of Service (QoS) policies. For more information about QoS policies, see Configuring Quality of Service (QoS) policies in the Networking Guide . The quota:vif_* properties are only supported when you use the ML2/OVS mechanism driver with NeutronOVSFirewallDriver set to iptables_hybrid . Table 9.5. Flavor metadata for bandwidth limits Key Description quota:vif_inbound_average (Deprecated) Specifies the required average bit rate on the traffic incoming to the instance, in kbps. quota:vif_inbound_burst (Deprecated) Specifies the maximum amount of incoming traffic that can be burst at peak speed, in KB. quota:vif_inbound_peak (Deprecated) Specifies the maximum rate at which the instance can receive incoming traffic, in kbps. quota:vif_outbound_average (Deprecated) Specifies the required average bit rate on the traffic outgoing from the instance, in kbps. 
quota:vif_outbound_burst (Deprecated) Specifies the maximum amount of outgoing traffic that can be burst at peak speed, in KB. quota:vif_outbound_peak (Deprecated) Specifies the maximum rate at which the instance can send outgoing traffic, in kbps. Hardware video RAM Use the property key in the following table to configure limits on the instance RAM to use for video devices. Table 9.6. Flavor metadata for video devices Key Description hw_video:ram_max_mb Specifies the maximum RAM to use for video devices, in MB. Use with the hw_video_ram image property. hw_video_ram must be less than or equal to hw_video:ram_max_mb . Watchdog behavior Use the property key in the following table to enable the virtual hardware watchdog device on the instance. Table 9.7. Flavor metadata for watchdog behavior Key Description hw:watchdog_action Specify to enable the virtual hardware watchdog device and set its behavior. Watchdog devices perform the configured action if the instance hangs or fails. The watchdog uses the i6300esb device, which emulates a PCI Intel 6300ESB. If hw:watchdog_action is not specified, the watchdog is disabled. Set to one of the following valid values: disabled : (Default) The device is not attached. reset : Force instance reset. poweroff : Force instance shut down. pause : Pause the instance. none : Enable the watchdog, but do nothing if the instance hangs or fails. Note Watchdog behavior that you set by using the properties of a specific image override behavior that you set by using flavors. Random number generator (RNG) Use the property keys in the following table to enable the RNG device on the instance. Table 9.8. Flavor metadata for RNG Key Description hw_rng:allowed Set to False to disable the RNG device that is added to the instance through its image properties. Default: True hw_rng:rate_bytes Specifies the maximum number of bytes that the instance can read from the entropy of the host, per period. hw_rng:rate_period Specifies the duration of the read period in milliseconds. Virtual Performance Monitoring Unit (vPMU) Use the property key in the following table to enable the vPMU for the instance. Table 9.9. Flavor metadata for vPMU Key Description hw:pmu Set to True to enable a vPMU for the instance. Tools such as perf use the vPMU on the instance to provide more accurate information to profile and monitor instance performance. For realtime workloads, the emulation of a vPMU can introduce additional latency which might be undesirable. If the telemetry it provides is not required, set hw:pmu=False . Instance CPU topology Use the property keys in the following table to define the topology of the processors in the instance. Table 9.10. Flavor metadata for CPU topology Key Description hw:cpu_sockets Specifies the preferred number of sockets for the instance. Default: the number of vCPUs requested hw:cpu_cores Specifies the preferred number of cores per socket for the instance. Default: 1 hw:cpu_threads Specifies the preferred number of threads per core for the instance. Default: 1 hw:cpu_max_sockets Specifies the maximum number of sockets that users can select for their instances by using image properties. Example: hw:cpu_max_sockets=2 hw:cpu_max_cores Specifies the maximum number of cores per socket that users can select for their instances by using image properties. hw:cpu_max_threads Specifies the maximum number of threads per core that users can select for their instances by using image properties. 
Serial ports Use the property key in the following table to configure the number of serial ports per instance. Table 9.11. Flavor metadata for serial ports Key Description hw:serial_port_count Maximum serial ports per instance. CPU pinning policy By default, instance virtual CPUs (vCPUs) are sockets with one core and one thread. You can use properties to create flavors that pin the vCPUs of instances to the physical CPU cores (pCPUs) of the host. You can also configure the behavior of hardware CPU threads in a simultaneous multithreading (SMT) architecture where one or more cores have thread siblings. Use the property keys in the following table to define the CPU pinning policy of the instance. Table 9.12. Flavor metadata for CPU pinning Key Description hw:cpu_policy Specifies the CPU policy to use. Set to one of the following valid values: shared : (Default) The instance vCPUs float across host pCPUs. dedicated : Pin the instance vCPUs to a set of host pCPUs. This creates an instance CPU topology that matches the topology of the CPUs to which the instance is pinned. This option implies an overcommit ratio of 1.0. hw:cpu_thread_policy Specifies the CPU thread policy to use when hw:cpu_policy=dedicated . Set to one of the following valid values: prefer : (Default) The host might or might not have an SMT architecture. If an SMT architecture is present, the Compute scheduler gives preference to thread siblings. isolate : The host must not have an SMT architecture or must emulate a non-SMT architecture. This policy ensures that the Compute scheduler places the instance on a host without SMT by requesting hosts that do not report the HW_CPU_HYPERTHREADING trait. It is also possible to request this trait explicitly by using the following property: If the host does not have an SMT architecture, the Compute service places each vCPU on a different core as expected. If the host does have an SMT architecture, then the behaviour is determined by the configuration of the [workarounds]/disable_fallback_pcpu_query parameter: True : The host with an SMT architecture is not used and scheduling fails. False : The Compute service places each vCPU on a different physical core. The Compute service does not place vCPUs from other instances on the same core. All but one thread sibling on each used core is therefore guaranteed to be unusable. require : The host must have an SMT architecture. This policy ensures that the Compute scheduler places the instance on a host with SMT by requesting hosts that report the HW_CPU_HYPERTHREADING trait. It is also possible to request this trait explicitly by using the following property: The Compute service allocates each vCPU on thread siblings. If the host does not have an SMT architecture, then it is not used. If the host has an SMT architecture, but not enough cores with free thread siblings are available, then scheduling fails. Instance PCI NUMA affinity policy Use the property key in the following table to create flavors that specify the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces. Table 9.13. Flavor metadata for PCI NUMA affinity policy Key Description hw:pci_numa_affinity_policy Specifies the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces. Set to one of the following valid values: required : The Compute service creates an instance that requests a PCI device only when at least one of the NUMA nodes of the instance has affinity with the PCI device. This option provides the best performance. 
preferred : The Compute service attempts a best effort selection of PCI devices based on NUMA affinity. If this is not possible, then the Compute service schedules the instance on a NUMA node that has no affinity with the PCI device. legacy : (Default) The Compute service creates instances that request a PCI device in one of the following cases: The PCI device has affinity with at least one of the NUMA nodes. The PCI devices do not provide information about their NUMA affinities. Instance NUMA topology You can use properties to create flavors that define the host NUMA placement for the instance vCPU threads, and the allocation of instance vCPUs and memory from the host NUMA nodes. Defining a NUMA topology for the instance improves the performance of the instance OS for flavors whose memory and vCPU allocations are larger than the size of NUMA nodes in the Compute hosts. The Compute scheduler uses these properties to determine a suitable host for the instance. For example, a cloud user launches an instance by using the following flavor: The Compute scheduler searches for a host that has two NUMA nodes, one with 3GB of RAM and the ability to run six CPUs, and the other with 1GB of RAM and two CPUS. If a host has a single NUMA node with capability to run eight CPUs and 4GB of RAM, the Compute scheduler does not consider it a valid match. Note NUMA topologies defined by a flavor cannot be overridden by NUMA topologies defined by the image. The Compute service raises an ImageNUMATopologyForbidden error if the image NUMA topology conflicts with the flavor NUMA topology. Caution You cannot use this feature to constrain instances to specific host CPUs or NUMA nodes. Use this feature only after you complete extensive testing and performance measurements. You can use the hw:pci_numa_affinity_policy property instead. Use the property keys in the following table to define the instance NUMA topology. Table 9.14. Flavor metadata for NUMA topology Key Description hw:numa_nodes Specifies the number of host NUMA nodes to restrict execution of instance vCPU threads to. If not specified, the vCPU threads can run on any number of the available host NUMA nodes. hw:numa_cpus.N A comma-separated list of instance vCPUs to map to instance NUMA node N. If this key is not specified, vCPUs are evenly divided among available NUMA nodes. N starts from 0. Use *.N values with caution, and only if you have at least two NUMA nodes. This property is valid only if you have set hw:numa_nodes , and is required only if the NUMA nodes of the instance have an asymmetrical allocation of CPUs and RAM, which is important for some NFV workloads. hw:numa_mem.N The number of MB of instance memory to map to instance NUMA node N. If this key is not specified, memory is evenly divided among available NUMA nodes. N starts from 0. Use *.N values with caution, and only if you have at least two NUMA nodes. This property is valid only if you have set hw:numa_nodes , and is required only if the NUMA nodes of the instance have an asymmetrical allocation of CPUs and RAM, which is important for some NFV workloads. Warning If the combined values of hw:numa_cpus.N or hw:numa_mem.N are greater than the available number of CPUs or memory respectively, the Compute service raises an exception. CPU real-time policy Use the property keys in the following table to define the real-time policy of the processors in the instance. 
Note Although most of your instance vCPUs can run with a real-time policy, you must mark at least one vCPU as non-real-time to use for both non-real-time guest processes and emulator overhead processes. To use this extra spec, you must enable pinned CPUs. Table 9.15. Flavor metadata for CPU real-time policy Key Description hw:cpu_realtime Set to yes to create a flavor that assigns a real-time policy to the instance vCPUs. Default: no hw:cpu_realtime_mask Specifies the vCPUs to not assign a real-time policy to. You must prepend the mask value with a caret symbol (^). The following example indicates that all vCPUs except vCPUs 0 and 1 have a real-time policy: Note If the hw_cpu_realtime_mask property is set on the image then it takes precedence over the hw:cpu_realtime_mask property set on the flavor. Emulator threads policy You can assign a pCPU to an instance to use for emulator threads. Emulator threads are emulator processes that are not directly related to the instance. A dedicated emulator thread pCPU is required for real-time workloads. To use the emulator threads policy, you must enable pinned CPUs by setting the following property: Use the property key in the following table to define the emulator threads policy of the instance. Table 9.16. Flavor metadata for the emulator threads policy Key Description hw:emulator_threads_policy Specifies the emulator threads policy to use for instances. Set to one of the following valid values: share : The emulator thread floats across the pCPUs defined in the NovaComputeCpuSharedSet heat parameter. If NovaComputeCpuSharedSet is not configured, then the emulator thread floats across the pinned CPUs that are associated with the instance. isolate : Reserves an additional dedicated pCPU per instance for the emulator thread. Use this policy with caution, as it is prohibitively resource intensive. unset: (Default) The emulator thread policy is not enabled, and the emulator thread floats across the pinned CPUs associated with the instance. Instance memory page size Use the property keys in the following table to create an instance with an explicit memory page size. Table 9.17. Flavor metadata for memory page size Key Description hw:mem_page_size Specifies the size of large pages to use to back the instances. Use of this option creates an implicit NUMA topology of 1 NUMA node unless otherwise specified by hw:numa_nodes . Set to one of the following valid values: large : Selects a page size larger than the smallest page size supported on the host, which can be 2 MB or 1 GB on x86_64 systems. small : Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages). any : Selects the largest available huge page size, as determined by the libvirt driver. <pagesize> : (String) Sets an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB , 2MB , 2048 , 1GB . unset: (Default) Large pages are not used to back instances and no implicit NUMA topology is generated. PCI passthrough Use the property key in the following table to attach a physical PCI device, such as a graphics card or a network device, to an instance. For more information about using PCI passthrough, see Configuring PCI passthrough . Table 9.18. 
Flavor metadata for PCI passthrough Key Description pci_passthrough:alias Specifies the PCI device to assign to an instance by using the following format: Replace <alias> with the alias that corresponds to a particular PCI device class. Replace <count> with the number of PCI devices of type <alias> to assign to the instance. Hypervisor signature Use the property key in the following table to hide the hypervisor signature from the instance. Table 9.19. Flavor metadata for hiding hypervisor signature Key Description hide_hypervisor_id Set to True to hide the hypervisor signature from the instance, to allow all drivers to load and work on the instance. UEFI Secure Boot Use the property key in the following table to create an instance that is protected with UEFI Secure Boot. Note Instances with UEFI Secure Boot must support UEFI and the GUID Partition Table (GPT) standard, and include an EFI system partition. Table 9.20. Flavor metadata for UEFI Secure Boot Key Description os:secure_boot Set to required to enable Secure Boot for instances launched with this flavor. Disabled by default. Instance resource traits Each resource provider has a set of traits. Traits are the qualitative aspects of a resource provider, for example, the type of storage disk, or the Intel CPU instruction set extension. An instance can specify which of these traits it requires. The traits that you can specify are defined in the os-traits library. Example traits include the following: COMPUTE_TRUSTED_CERTS COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG COMPUTE_IMAGE_TYPE_RAW HW_CPU_X86_AVX HW_CPU_X86_AVX512VL HW_CPU_X86_AVX512CD For details about how to use the os-traits library, see https://docs.openstack.org/os-traits/latest/user/index.html . Use the property key in the following table to define the resource traits of the instance. Table 9.21. Flavor metadata for resource traits Key Description trait:<trait_name> Specifies Compute node traits. Set the trait to one of the following valid values: required : The Compute node selected to host the instance must have the trait. forbidden : The Compute node selected to host the instance must not have the trait. Example: Instance bare-metal resource class Use the property key in the following table to request a bare-metal resource class for an instance. Table 9.22. Flavor metadata for bare-metal resource class Key Description resources:<resource_class_name> Use this property to specify standard bare-metal resource classes to override the values of, or to specify custom bare-metal resource classes that the instance requires. The standard resource classes that you can override are VCPU , MEMORY_MB and DISK_GB . To prevent the Compute scheduler from using the bare-metal flavor properties for scheduling instance, set the value of the standard resource classes to 0 . The name of custom resource classes must start with CUSTOM_ . To determine the name of a custom resource class that corresponds to a resource class of a Bare Metal service node, convert the resource class to uppercase, replace all punctuation with an underscore, and prefix with CUSTOM_. For example, to schedule instances on a node that has --resource-class baremetal.SMALL , create the following flavor:
[ "(overcloud)USD openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_vcpus> [--private --project <project_id>] <flavor_name>", "(overcloud)USD openstack flavor set --property <key=value> --property <key=value> ... <flavor_name>", "(overcloud)USD openstack flavor set --property hw:cpu_sockets=2 --property hw:cpu_cores=2 processor_topology_flavor", "openstack flavor set cpu_limits_flavor --property quota:cpu_quota=10000 --property quota:cpu_period=20000", "--property trait:HW_CPU_HYPERTHREADING=forbidden", "--property trait:HW_CPU_HYPERTHREADING=required", "openstack flavor set numa_top_flavor --property hw:numa_nodes=2 --property hw:numa_cpus.0=0,1,2,3,4,5 --property hw:numa_cpus.1=6,7 --property hw:numa_mem.0=3072 --property hw:numa_mem.1=1024", "openstack flavor set <flavor> --property hw:cpu_realtime=\"yes\" --property hw:cpu_realtime_mask=^0-1", "--property hw:cpu_policy=dedicated", "<alias>:<count>", "openstack flavor set --property trait:HW_CPU_X86_AVX512BW=required avx512-flavor", "openstack flavor set --property resources:CUSTOM_BAREMETAL_SMALL=1 --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 compute-small" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuring_the_compute_service_for_instance_creation/assembly_creating-flavors-for-launching-instances_instance-flavors
Chapter 2. Installing the MTA extension for Visual Studio Code
Chapter 2. Installing the MTA extension for Visual Studio Code You can install the MTA extension for Visual Studio Code (VS Code). Prerequisites The following are the prerequisites for the Migration Toolkit for Applications (MTA) installation: Java Development Kit (JDK) is installed. MTA supports the following JDKs: OpenJDK 11 OpenJDK 17 Oracle JDK 11 Oracle JDK 17 Eclipse Temurin JDK 11 Eclipse Temurin JDK 17 8 GB RAM macOS installation: the value of maxproc must be 2048 or greater. Procedure Set the environment variable JAVA_HOME : $ export JAVA_HOME=jdk11 In VS Code, click the Extensions icon on the Activity bar to open the Extensions view. Enter Migration Toolkit for Applications in the Search field. Select the Migration Toolkit for Applications extension and click Install . The MTA extension icon is displayed on the Activity bar.
[ "export JAVA_HOME=jdk11" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/visual_studio_code_extension_guide/installing-vs-code-extension_vsc-extension-guide
Chapter 51. MongoDB Sink
Chapter 51. MongoDB Sink Send documents to MongoDB. This Kamelet expects a JSON as body. Properties you can set as headers: db-upsert / ce-dbupsert : if the database should create the element if it does not exist. Boolean value. 51.1. Configuration Options The following table summarizes the configuration options available for the mongodb-sink Kamelet: Property Name Description Type Default Example collection * MongoDB Collection Sets the name of the MongoDB collection to bind to this endpoint. string database * MongoDB Database Sets the name of the MongoDB database to target. string hosts * MongoDB Hosts Comma separated list of MongoDB Host Addresses in host:port format. string createCollection Collection Create collection during initialisation if it doesn't exist. boolean false password MongoDB Password User password for accessing MongoDB. string username MongoDB Username Username for accessing MongoDB. string writeConcern Write Concern Configure the level of acknowledgment requested from MongoDB for write operations, possible values are ACKNOWLEDGED, W1, W2, W3, UNACKNOWLEDGED, JOURNALED, MAJORITY. string Note Fields marked with an asterisk (*) are mandatory. 51.2. Dependencies At runtime, the mongodb-sink Kamelet relies upon the presence of the following dependencies: camel:kamelet camel:mongodb camel:jackson 51.3. Usage This section describes how you can use the mongodb-sink . 51.3.1. Knative Sink You can use the mongodb-sink Kamelet as a Knative sink by binding it to a Knative object. mongodb-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-sink properties: collection: "The MongoDB Collection" database: "The MongoDB Database" hosts: "The MongoDB Hosts" 51.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 51.3.1.2. Procedure for using the cluster CLI Save the mongodb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f mongodb-sink-binding.yaml 51.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel mongodb-sink -p "sink.collection=The MongoDB Collection" -p "sink.database=The MongoDB Database" -p "sink.hosts=The MongoDB Hosts" This command creates the KameletBinding in the current namespace on the cluster. 51.3.2. Kafka Sink You can use the mongodb-sink Kamelet as a Kafka sink by binding it to a Kafka topic. mongodb-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-sink properties: collection: "The MongoDB Collection" database: "The MongoDB Database" hosts: "The MongoDB Hosts" 51.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 51.3.2.2. 
Procedure for using the cluster CLI Save the mongodb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f mongodb-sink-binding.yaml 51.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mongodb-sink -p "sink.collection=The MongoDB Collection" -p "sink.database=The MongoDB Database" -p "sink.hosts=The MongoDB Hosts" This command creates the KameletBinding in the current namespace on the cluster. 51.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/mongodb-sink.kamelet.yaml
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-sink properties: collection: \"The MongoDB Collection\" database: \"The MongoDB Database\" hosts: \"The MongoDB Hosts\"", "apply -f mongodb-sink-binding.yaml", "kamel bind channel:mychannel mongodb-sink -p \"sink.collection=The MongoDB Collection\" -p \"sink.database=The MongoDB Database\" -p \"sink.hosts=The MongoDB Hosts\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-sink properties: collection: \"The MongoDB Collection\" database: \"The MongoDB Database\" hosts: \"The MongoDB Hosts\"", "apply -f mongodb-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mongodb-sink -p \"sink.collection=The MongoDB Collection\" -p \"sink.database=The MongoDB Database\" -p \"sink.hosts=The MongoDB Hosts\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/mongodb-sink
C.5. Glock Holders
C.5. Glock Holders Table C.5, "Glock holder flags" shows the meanings of the different glock holder flags. Table C.5. Glock holder flags Flag Name Meaning a Async Do not wait for glock result (will poll for result later) A Any Any compatible lock mode is acceptable c No cache When unlocked, demote DLM lock immediately e No expire Ignore subsequent lock cancel requests E Exact Must have exact lock mode F First Set when holder is the first to be granted for this lock H Holder Indicates that requested lock is granted p Priority Enqueue holder at the head of the queue t Try A "try" lock T Try 1CB A "try" lock that sends a callback W Wait Set while waiting for request to complete The most important holder flags are H (holder) and W (wait) as mentioned earlier, since they are set on granted lock requests and queued lock requests respectively. The ordering of the holders in the list is important. If there are any granted holders, they will always be at the head of the queue, followed by any queued holders. If there are no granted holders, then the first holder in the list will be the one that triggers the state change. Since demote requests are always considered higher priority than requests from the file system, that might not always directly result in a change to the state requested. The glock subsystem supports two kinds of "try" lock. These are useful both because they allow the taking of locks out of the normal order (with suitable back-off and retry) and because they can be used to help avoid resources in use by other nodes. The normal t (try) lock is just what its name indicates; it is a "try" lock that does not do anything special. The T (try 1CB) lock, on the other hand, is identical to the t lock except that the DLM will send a single callback to current incompatible lock holders. One use of the T (try 1CB) lock is with the iopen locks, which are used to arbitrate among the nodes when an inode's i_nlink count is zero, and determine which of the nodes will be responsible for deallocating the inode. The iopen glock is normally held in the shared state, but when the i_nlink count becomes zero and ->delete_inode() is called, it will request an exclusive lock with T (try 1CB) set. It will continue to deallocate the inode if the lock is granted. If the lock is not granted, the node(s) that were preventing the grant of the lock mark their glock(s) with the D (demote) flag, which is checked at ->drop_inode() time in order to ensure that the deallocation is not forgotten. This means that inodes that have zero link count but are still open will be deallocated by the node on which the final close() occurs. Also, at the same time as the inode's link count is decremented to zero, the inode is marked as being in the special state of having zero link count but still in use in the resource group bitmap. This functions like the ext3 file system's orphan list in that it allows any subsequent reader of the bitmap to know that there is potentially space that might be reclaimed, and to attempt to reclaim it.
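To relate these flags to a live system, note that the glock state, including its holders, is exported through debugfs. The commands below are a sketch that assumes debugfs is mounted at /sys/kernel/debug and that the file system belongs to cluster mycluster with file system name myfs (the directory is named <clustername>:<fsname>); in the dump, holder lines begin with H: and show these flags in their f: field:
# mount -t debugfs none /sys/kernel/debug      # only if debugfs is not already mounted
# less /sys/kernel/debug/gfs2/mycluster:myfs/glocks
# grep -c " H:" /sys/kernel/debug/gfs2/mycluster:myfs/glocks     # rough count of holder lines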
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/ap-glock-holders-gfs2
5.2. Physical Volume Administration
5.2. Physical Volume Administration This section describes the commands that perform the various aspects of physical volume administration. 5.2.1. Creating Physical Volumes The following subsections describe the commands used for creating physical volumes. 5.2.1.1. Setting the Partition Type If you are using a whole disk device for your physical volume, the disk must have no partition table. For DOS disk partitions, the partition id should be set to 0x8e using the fdisk or cfdisk command or an equivalent. For whole disk devices, only the partition table must be erased, which will effectively destroy all data on that disk. You can remove an existing partition table by zeroing the first sector with the following command: 5.2.1.2. Initializing Physical Volumes Use the pvcreate command to initialize a block device to be used as a physical volume. Initialization is analogous to formatting a file system. The following command initializes /dev/sdd , /dev/sde , and /dev/sdf as LVM physical volumes for later use as part of LVM logical volumes. To initialize partitions rather than whole disks, run the pvcreate command on the partition. The following example initializes the partition /dev/hdb1 as an LVM physical volume for later use as part of an LVM logical volume. 5.2.1.3. Scanning for Block Devices You can scan for block devices that may be used as physical volumes with the lvmdiskscan command, as shown in the following example.
[ "dd if=/dev/zero of= PhysicalVolume bs=512 count=1", "pvcreate /dev/sdd /dev/sde /dev/sdf", "pvcreate /dev/hdb1", "lvmdiskscan /dev/ram0 [ 16.00 MB] /dev/sda [ 17.15 GB] /dev/root [ 13.69 GB] /dev/ram [ 16.00 MB] /dev/sda1 [ 17.14 GB] LVM physical volume /dev/VolGroup00/LogVol01 [ 512.00 MB] /dev/ram2 [ 16.00 MB] /dev/new_vg/lvol0 [ 52.00 MB] /dev/ram3 [ 16.00 MB] /dev/pkl_new_vg/sparkie_lv [ 7.14 GB] /dev/ram4 [ 16.00 MB] /dev/ram5 [ 16.00 MB] /dev/ram6 [ 16.00 MB] /dev/ram7 [ 16.00 MB] /dev/ram8 [ 16.00 MB] /dev/ram9 [ 16.00 MB] /dev/ram10 [ 16.00 MB] /dev/ram11 [ 16.00 MB] /dev/ram12 [ 16.00 MB] /dev/ram13 [ 16.00 MB] /dev/ram14 [ 16.00 MB] /dev/ram15 [ 16.00 MB] /dev/sdb [ 17.15 GB] /dev/sdb1 [ 17.14 GB] LVM physical volume /dev/sdc [ 17.15 GB] /dev/sdc1 [ 17.14 GB] LVM physical volume /dev/sdd [ 17.15 GB] /dev/sdd1 [ 17.14 GB] LVM physical volume 7 disks 17 partitions 0 LVM physical volume whole disks 4 LVM physical volumes" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/physvol_admin
10.6. Using Multiple Importers
10.6. Using Multiple Importers When you define the metadata import type for a model, you can also define a comma-separated list of importers. By doing so, you will ensure that all of the repository instances defined by import types are consulted in the order in which they have been defined. Here is an example: <vdb name="{vdb-name}" version="1"> <model name="{model-name}" type="PHYSICAL"> <source name="AccountsDB" translator-name="oracle" connection-jndi-name="java:/oracleDS"/> <metadata type="NATIVE,DDL"> **DDL Here** </metadata> </model> </vdb> In this model, the NATIVE importer is used first, then the DDL importer is used to add additional metadata to the NATIVE-imported metadata.
[ "<vdb name=\"{vdb-name}\" version=\"1\"> <model name=\"{model-name}\" type=\"PHYSICAL\"> <source name=\"AccountsDB\" translator-name=\"oracle\" connection-jndi-name=\"java:/oracleDS\"/> <metadata type=\"NATIVE,DDL\"> **DDL Here** </metadata> </model> </vdb>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/using_multiple_importers
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/5.2_release_notes/making-open-source-more-inclusive
Chapter 11. Communicating among containers
Chapter 11. Communicating among containers Learn about establishing communication between containers, applications, and host systems leveraging port mapping, DNS resolution, or orchestrating communication within pods. 11.1. The network modes and layers There are several different network modes in Podman: bridge - creates another network on the default bridge network container:<id> - uses the same network as the container with <id> id host - uses the host network stack network-id - uses a user-defined network created by the podman network create command private - creates a new network for the container slirp4nets - creates a user network stack with slirp4netns, the default option for rootless containers none - create a network namespace for the container but do not configure network interfaces for it. The container has no network connectivity. ns:<path> - path to a network namespace to join Note The host mode gives the container full access to local system services such as D-bus, a system for interprocess communication (IPC), and is therefore considered insecure. 11.2. Inspecting a network settings of a container Use the podman inspect command with the --format option to display individual items from the podman inspect output. Prerequisites The container-tools module is installed. Procedure Display the IP address of a container: Display all networks to which container is connected: Display port mappings: Additional resources podman-inspect man page on your system 11.3. Communicating between a container and an application You can communicate between a container and an application. An application ports are in either listening or open state. These ports are automatically exposed to the container network, therefore, you can reach those containers using these networks. By default, the web server listens on port 80. Using this procedure, the myubi container communicates with the web-container application. Prerequisites The container-tools module is installed. Procedure Start the container named web-container : List all containers: Inspect the container and display the IP address: Run the myubi container and verify that web server is running: 11.4. Communicating between a container and a host By default, the podman network is a bridge network. It means that a network device is bridging a container network to your host network. Prerequisites The container-tools module is installed. The web-container is running. For more information, see section Communicating between a container and an application . Procedure Verify that the bridge is configured: Display the host network configuration: You can see that the web-container has an IP of the cni-podman0 network and the network is bridged to the host. Inspect the web-container and display its IP address: Access the web-container directly from the host: Additional resources podman-network man page on your system 11.5. Communicating between containers using port mapping The most convenient way to communicate between two containers is to use published ports. Ports can be published in two ways: automatically or manually. Prerequisites The container-tools module is installed. Procedure Run the unpublished container: Run the automatically published container: Run the manually published container and publish container port 80: List all containers: You can see that: Container web1 has no published ports and can be reached only by container network or a bridge. 
Container web2 has automatically mapped ports 43595 and 42423 to publish the application ports 8080 and 8443, respectively. Note The automatic port mapping is possible because the registry.access.redhat.com/8/httpd-24 image has the EXPOSE 8080 and EXPOSE 8443 commands in the Containerfile . Container web3 has a manually published port. The host port 8888 is mapped to the container port 8080. Display the IP addresses of web1 and web3 containers: Reach web1 container using <IP>:<port> notation: Reach web2 container using localhost:<port> notation: Reach web3 container using <IP>:<port> notation: 11.6. Communicating between containers using DNS When a DNS plugin is enabled, use a container name to address containers. Prerequisites The container-tools module is installed. A network with the enabled DNS plugin has been created using the podman network create command. Procedure Run a receiver container attached to the mynet network: Run a sender container and reach the receiver container by its name: Exit using the CTRL+C . You can see that the sender container can ping the receiver container using its name. 11.7. Communicating between two containers in a pod All containers in the same pod share the IP addresses, MAC addresses and port mappings. You can communicate between containers in the same pod using localhost:port notation. Prerequisites The container-tools module is installed. Procedure Create a pod named web-pod : Run the web container named web-container in the pod: List all pods and containers associated with them: Run the container in the web-pod based on the docker.io/library/fedora image: You can see that the container can reach the web-container . 11.8. Communicating in a pod You must publish the ports for the container in a pod when a pod is created. Prerequisites The container-tools module is installed. Procedure Create a pod named web-pod : List all pods: Run the web container named web-container inside the web-pod : List containers Verify that the web-container can be reached: 11.9. Attaching a pod to the container network Attach containers in pod to the network during the pod creation. Prerequisites The container-tools module is installed. Procedure Create a network named pod-net : Create a pod web-pod : Run a container named web-container inside the web-pod : Optional: Display the pods the containers are associated with: Verification Show all networks connected to the container: 11.10. Setting HTTP Proxy variables for Podman To pull images behind a proxy server, you must set HTTP Proxy variables for Podman. Podman reads the environment variable HTTP_PROXY to ascertain the HTTP Proxy information. HTTP proxy information can be configured as an environment variable or under /etc/profile.d . Procedure Set proxy variables for Podman. For example: Unauthenticated proxy: Authenticated proxy:
[ "podman inspect --format='{{.NetworkSettings.IPAddress}}' <containerName>", "podman inspect --format='{{.NetworkSettings.Networks}}' <containerName>", "podman inspect --format='{{.NetworkSettings.Ports}}' <containerName>", "podman run -dt --name=web-container docker.io/library/httpd", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES b8c057333513 docker.io/library/httpd:latest httpd-foreground 4 seconds ago Up 5 seconds ago web-container", "podman inspect --format='{{.NetworkSettings.IPAddress}}' web-container 10.88.0.2", "podman run -it --name=myubi ubi8/ubi curl 10.88.0.2:80 <html><body><h1>It works!</h1></body></html>", "podman network inspect podman | grep bridge \"bridge\": \"cni-podman0\", \"type\": \"bridge\"", "ip addr show cni-podman0 6: cni-podman0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 62:af:a1:0a:ca:2e brd ff:ff:ff:ff:ff:ff inet 10.88.0.1/16 brd 10.88.255.255 scope global cni-podman0 valid_lft forever preferred_lft forever inet6 fe80::60af:a1ff:fe0a:ca2e/64 scope link valid_lft forever preferred_lft forever", "podman inspect --format='{{.NetworkSettings.IPAddress}}' web-container 10.88.0.2", "curl 10.88.0.2:80 <html><body><h1>It works!</h1></body></html>", "podman run -dt --name=web1 ubi8/httpd-24", "podman run -dt --name=web2 -P ubi8/httpd-24", "podman run -dt --name=web3 -p 8888:8080 ubi8/httpd-24", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES db23e8dabc74 registry.access.redhat.com/ubi8/httpd-24:latest /usr/bin/run-http... 23 seconds ago Up 23 seconds 8080/tcp, 8443/tcp web1 1824b8f0a64b registry.access.redhat.com/ubi8/httpd-24:latest /usr/bin/run-http... 18 seconds ago Up 18 seconds 0.0.0.0:33127->8080/tcp, 0.0.0.0:37679->8443/tcp web2 39de784d917a registry.access.redhat.com/ubi8/httpd-24:latest /usr/bin/run-http... 
5 seconds ago Up 5 seconds 0.0.0.0:8888->8080/tcp, 8443/tcp web3", "podman inspect --format='{{.NetworkSettings.IPAddress}}' web1 podman inspect --format='{{.NetworkSettings.IPAddress}}' web3", "10.88.0.2:8080 <title>Test Page for the HTTP Server on Red Hat Enterprise Linux</title>", "curl localhost:43595 <title>Test Page for the HTTP Server on Red Hat Enterprise Linux</title>", "curl 10.88.0.4:8080 <title>Test Page for the HTTP Server on Red Hat Enterprise Linux</title>", "podman run -d --net mynet --name receiver ubi8 sleep 3000", "podman run -it --rm --net mynet --name sender alpine ping receiver PING rcv01 (10.89.0.2): 56 data bytes 64 bytes from 10.89.0.2: seq=0 ttl=42 time=0.041 ms 64 bytes from 10.89.0.2: seq=1 ttl=42 time=0.125 ms 64 bytes from 10.89.0.2: seq=2 ttl=42 time=0.109 ms", "podman pod create --name=web-pod", "podman container run -d --pod web-pod --name=web-container docker.io/library/httpd", "podman ps --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME 58653cf0cf09 k8s.gcr.io/pause:3.5 4 minutes ago Up 3 minutes ago 4e61a300c194-infra 4e61a300c194 web-pod b3f4255afdb3 docker.io/library/httpd:latest httpd-foreground 3 minutes ago Up 3 minutes ago web-container 4e61a300c194 web-pod", "podman container run -it --rm --pod web-pod docker.io/library/fedora curl localhost <html><body><h1>It works!</h1></body></html>", "podman pod create --name=web-pod-publish -p 80:80", "podman pod ls POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS 26fe5de43ab3 publish-pod Created 5 seconds ago 7de09076d2b3 1", "podman container run -d --pod web-pod-publish --name=web-container docker.io/library/httpd", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7de09076d2b3 k8s.gcr.io/pause:3.5 About a minute ago Up 23 seconds ago 0.0.0.0:80->80/tcp 26fe5de43ab3-infra 088befb90e59 docker.io/library/httpd httpd-foreground 23 seconds ago Up 23 seconds ago 0.0.0.0:80->80/tcp web-container", "curl localhost:80 <html><body><h1>It works!</h1></body></html>", "podman network create pod-net /etc/cni/net.d/pod-net.conflist", "podman pod create --net pod-net --name web-pod", "podman run -d --pod webt-pod --name=web-container docker.io/library/httpd", "podman ps -p CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME b7d6871d018c registry.access.redhat.com/ubi8/pause:latest 9 minutes ago Up 6 minutes ago a8e7360326ba-infra a8e7360326ba web-pod 645835585e24 docker.io/library/httpd:latest httpd-foreground 6 minutes ago Up 6 minutes ago web-container a8e7360326ba web-pod", "podman ps --format=\"{{.Networks}}\" pod-net", "cat /etc/profile.d/unauthenticated_http_proxy.sh export HTTP_PROXY=http://192.168.0.1:3128 export HTTPS_PROXY=http://192.168.0.1:3128 export NO_PROXY=example.com,172.5.0.0/16", "cat /etc/profile.d/authenticated_http_proxy.sh export HTTP_PROXY=http://USERNAME:[email protected]:3128 export HTTPS_PROXY=http://USERNAME:[email protected]:3128 export NO_PROXY=example.com,172.5.0.0/16" ]
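Note The DNS example in this chapter assumes that the mynet network already exists. As a sketch, the network used by the receiver and sender containers could be created and checked as follows; the network name is taken from the example, and on RHEL the dnsname plugin is typically enabled for networks created this way, which is what allows the sender container to resolve the receiver container by name:
$ podman network create mynet
$ podman network ls
$ podman network inspect mynet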
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/building_running_and_managing_containers/assembly_communicating-among-containers_building-running-and-managing-containers
Chapter 13. Change requests in Business Central
Chapter 13. Change requests in Business Central If you have more than one branch in a Business Central project and you make a change in a branch that you want to merge to another branch, you can create a change request. Any user with permission to view the target branch, usually the master branch, can see the change request. 13.1. Creating change requests You can create a change request in a Business Central project after you have made a change in your project, for example after you have added or deleted an attribute in an asset. Prerequisites You have more than one branch of a Business Central project. You made a change in one branch that you want to merge to another branch. Procedure In Business Central, go to Menu Design Projects and select the space and project that contains the change that you want to merge. On the project page, select the branch that contains the change. Figure 13.1. Select a branch menu Do one of the following tasks to submit the change request: Click in the upper-right corner of the screen and select Submit Change Request . Click the Change Requests tab and then click Submit Change Request . The Submit Change Request window appears. Enter a summary and a description, select the target branch, and click Submit . The target branch is the branch where the change will be merged. After you click Submit , the change request window appears. 13.2. Working with change requests You can view change requests for any branch that you have access to. You must have administrator permissions to accept a change request. Prerequisites You have more than one branch of a Business Central project. Procedure In Business Central, go to Menu Design Projects and select a space and project. On the project page, verify that you are on the correct branch. Click the Change Requests tab. A list of pending change requests appears. To filter change requests, select Open , Closed , or All to the left of the Search box. To search for specific change requests, enter an ID or text in the Search box and click the magnifying glass. To view the change request details, click the summary link. The change request window has two tabs: Review the Overview tab for general information about the change request. Click the Changed Files tab and expand a file to review the proposed changes. Click a button in the top right corner. Click Squash and Merge to squash all commits into a single commit and merge the commit to the target branch. Click Merge to merge the changes into the target branch. Click Reject to reject the changes and leave the target branch unchanged. Click Close to close the change request without rejecting or accepting it. Note that only the user who submitted the change request can close it. Click Cancel to return to the project window without making any changes.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/change-requests-con_managing-projects
Part I. Adding a Single Linux System to an Active Directory Domain
Part I. Adding a Single Linux System to an Active Directory Domain This part describes how the System Security Services Daemon ( SSSD ) works with an Active Directory ( AD ) domain, how to use the realmd system to achieve direct domain integration, and finally, how to use Samba for AD integration.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/adding-linux-to-ad
10.3. Updating CA-KRA Connector Information After Cloning
10.3. Updating CA-KRA Connector Information After Cloning As covered in Section 2.7.9, "Custom Configuration and Clones" , configuration information is not updated in clone instances if it is made after the clone is created. Likewise, changes made to a clone are not copied back to the master instance. If a new KRA is installed or cloned after a clone CA is created, then the clone CA does not have the new KRA connector information in its configuration. This means that the clone CA is not able to send any archival requests to the KRA. Whenever a new KRA is created or cloned, copy its connector information into all of the cloned CAs in the deployment. To do this, use the pki ca-kraconnector-add command. If it is required to do this manually, follow these steps: On the master clone machine, open the master CA's CS.cfg file, and copy all of the ca.connector.KRA.* lines for the new KRA connector. Stop the clone CA instance. For example: Open the clone CA's CS.cfg file. Copy in the connector information for the new KRA instance or clone. Start the clone CA.
[ "vim /var/lib/pki/ instance_name /ca/conf/CS.cfg", "pki-server stop instance_name", "vim /var/lib/pki/ instance_name /ca/conf/CS.cfg", "ca.connector.KRA.enable=true ca.connector.KRA.host=server-kra.example.com ca.connector.KRA.local=false ca.connector.KRA.nickName=subsystemCert cert-pki-ca ca.connector.KRA.port=10444 ca.connector.KRA.timeout=30 ca.connector.KRA.transportCert=MIIDbD...ZR0Y2zA== ca.connector.KRA.uri=/kra/agent/kra/connector", "pki-server start instance_name" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/clone-kra-cxn
Chapter 158. KafkaRebalanceStatus schema reference
Chapter 158. KafkaRebalanceStatus schema reference Used in: KafkaRebalance The KafkaRebalanceStatus schema has the following properties: conditions (Condition array) - List of status conditions. observedGeneration (integer) - The generation of the CRD that was last reconciled by the operator. sessionId (string) - The session identifier for requests to Cruise Control pertaining to this KafkaRebalance resource. This is used by the Kafka Rebalance operator to track the status of ongoing rebalancing operations. optimizationResult (map) - A JSON object describing the optimization result.
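These fields can be read directly from a running KafkaRebalance resource. The following commands are a sketch that assumes a resource named my-rebalance in the current namespace; the field paths follow the properties listed above:
oc get kafkarebalance my-rebalance -o jsonpath='{.status.sessionId}{"\n"}'
oc get kafkarebalance my-rebalance -o jsonpath='{.status.conditions[*].type}{"\n"}'
The first command prints the Cruise Control session identifier, and the second prints the types of the status conditions, for example whether an optimization proposal is ready.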
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaRebalanceStatus-reference
Chapter 4. Timestamp Functions
Chapter 4. Timestamp Functions Each timestamp function returns a value to indicate when a function is executed. These returned values can then be used to indicate when an event occurred, provide an ordering for events, or compute the amount of time elapsed between two time stamps.
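For example, two of these returned values can be subtracted to measure elapsed time. The following sketch assumes the gettimeofday_us() timestamp function described in this chapter and the stap command-line tool; it times the command passed with the -c option:
# cat elapsed.stp
global start
probe begin { start = gettimeofday_us() }
probe end   { printf("elapsed: %d us\n", gettimeofday_us() - start) }
# stap -c "sleep 1" elapsed.stp
The end probe fires when the target command exits, so this example prints roughly one million microseconds.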
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/timestamp_stp
8.11. amtu
8.11. amtu 8.11.1. RHBA-2014:0639 - amtu bug fix update Updated amtu package that fixes three bugs is now available for Red Hat Enterprise Linux 6. The Abstract Machine Test Utility (AMTU) is an administrative utility to verify that the underlying protection mechanisms of the system hardware are being enforced correctly. Bug Fixes BZ# 689823 Previously, Abstract Machine Test Utility (AMTU) did not handle the name of the interface correctly under certain circumstances. As a consequence, AMTU failed to obtain a list of network interfaces to test. With this update, the interface hardware type and carriers are obtained from the /sys/class/net/ directory. Now, only an Ethernet and a token ring can be used, and a carrier must be present. As a result, AMTU handles the new network interface names as expected. BZ# 723049 Prior to this update, AMTU ran network tests on interfaces configured with a static IP that did not have an existing connection, causing those tests to fail. With this update, AMTU no longer runs tests on interfaces that are not up. BZ# 1098076 Previously, the name of the network interface was restricted to 4 characters on 32-bit systems and 8 characters on 64-bit system due to using the sizeof() operator instead of the strlen() function. As a consequence, AMTU did not correctly display the full network interface name in certain portions of the output. A patch has been applied to address this bug, and AMTU now always displays the full network interface name as expected. Users of amtu are advised to upgrade to the updated package, which fixes these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/amtu
Chapter 2. Deploy OpenShift Data Foundation using local storage devices
Chapter 2. Deploy OpenShift Data Foundation using local storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. Follow this deployment method to use local storage to back persistent volumes for your OpenShift Container Platform applications. Use this section to deploy OpenShift Data Foundation on IBM Z infrastructure where OpenShift Container Platform is already installed. 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. Each node should include one disk and requires 3 disks (PVs). However, one PV remains eventually unused by default. This is an expected behavior. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.14 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . 
Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.3. Finding available storage devices (optional) This step is optional and can be skipped because the disks are automatically discovered during storage cluster creation. Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage='' before creating Persistent Volumes (PV) for IBM Z. Procedure List and verify the name of the worker nodes with the OpenShift Data Foundation label. Example output: Log in to each worker node that is used for OpenShift Data Foundation resources and find the unique by-id device name for each available raw block device. Example output: In this example, for bmworker01 , the available local device is sdb . Identify the unique ID for each of the devices selected in Step 2. In the above example, the ID for the local device sdb is: Repeat the above step to identify the device ID for all the other nodes that have the storage devices to be used by OpenShift Data Foundation. See this Knowledge Base article for more details. 2.4. Enabling DASD devices If you are using DASD devices, you must enable them before creating an OpenShift Data Foundation cluster on IBM Z. Once the DASD devices are available to z/VM guests, complete the following steps from the compute or infrastructure node on which an OpenShift Data Foundation storage node is being installed. Procedure To enable the DASD device, run the following command: 1 For <device_bus_id>, specify the device bus ID of the DASD device. For example, 0.0.b100 . To verify the status of the DASD device, you can use the lsdasd and lsblk commands. To low-level format the device and specify the disk name, run the following command: 1 For <device_name>, specify the disk name. For example, dasdb . Important Quick-formatting of Extent Space Efficient (ESE) DASDs is not supported on OpenShift Data Foundation. If you are using ESE DASDs, make sure to disable quick-formatting with the --mode=full parameter. To auto-create one partition using the whole disk, run the following command: 1 For <device_name>, enter the disk name you specified in the previous step. For example, dasdb . Once these steps are completed, the device is available during OpenShift Data Foundation deployment as /dev/dasdb1 . Important During LocalVolumeSet creation, make sure to select only the Part option as device type. Additional resources For details on the commands, see Commands for Linux on IBM Z in IBM documentation. 2.5. Creating OpenShift Data Foundation cluster on IBM Z Use this procedure to create an OpenShift Data Foundation cluster on IBM Z. Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have at least three worker nodes with the same storage type and size attached to each node (for example, 200 GB) to use local storage devices on IBM Z or IBM(R) LinuxONE.
Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select the Create a new StorageClass using the local storage devices for Backing storage type option. Select Full Deployment for the Deployment type option. Click Next . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes is spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see the knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling is enabled at the time of deployment and cannot be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVME . Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device type from the dropdown list. By default, the Disk and Part options are included in the Device Type field. Note For a multi-path device, select only the Mpath option from the drop-down. For a DASD-based cluster, ensure that only the Part option is included in the Device Type and remove the Disk option. Disk Size Set a minimum device size of 100 GB and, optionally, the maximum size of the devices to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click Next . A pop-up to confirm the creation of the LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. You can check the box to select Taint nodes. Click Next . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Choose one or both of the following Encryption level options: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volumes (block only) using an encryption-enabled storage class.
Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. Enter the Vault Service Name , the host Address of the Vault server ( https://<hostname or ip> ), the Port number, and the Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide CA Certificate , Client Certificate and Client Private Key . Click Save . Select Default (SDN) because Multus is not yet supported on OpenShift Data Foundation on IBM Z. Click Next . In the Data Protection page, if you are configuring a Regional-DR solution for OpenShift Data Foundation, select the Prepare cluster for disaster recovery (Regional-DR only) checkbox; otherwise, click Next . In the Review and create page: Review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark next to it. To verify whether flexible scaling is enabled on your storage cluster, perform the following steps: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the key flexibleScaling in the spec section and failureDomain in the status section. If flexibleScaling is true and failureDomain is set to host, the flexible scaling feature is enabled. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide.
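As a complement to the console-based flexible scaling check described above, the following is a minimal sketch that reads the same fields from the StorageCluster custom resource with the Kubernetes Python client. The library, an available kubeconfig, and the idea of scripting this check are assumptions for illustration; the group, version, namespace, and resource name follow the values shown in the procedure above.

# Hedged sketch: read flexibleScaling and failureDomain from the StorageCluster CR.
# Assumes the `kubernetes` Python client is installed and a kubeconfig with
# sufficient access is available.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()
sc = api.get_namespaced_custom_object(
    group="ocs.openshift.io",
    version="v1",
    namespace="openshift-storage",
    plural="storageclusters",
    name="ocs-storagecluster",
)
print("flexibleScaling:", sc.get("spec", {}).get("flexibleScaling"))
print("failureDomain  :", sc.get("status", {}).get("failureDomain"))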
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "oc get nodes -l=cluster.ocs.openshift.io/openshift-storage=", "NAME STATUS ROLES AGE VERSION bmworker01 Ready worker 6h45m v1.16.2 bmworker02 Ready worker 6h45m v1.16.2 bmworker03 Ready worker 6h45m v1.16.2", "oc debug node/<node name>", "oc debug node/bmworker01 Starting pod/bmworker01-debug To use host binaries, run `chroot /host` Pod IP: 10.0.135.71 If you don't see a command prompt, try pressing enter. sh-4.2# chroot /host sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 500G 0 loop sda 8:0 0 120G 0 disk |-sda1 8:1 0 384M 0 part /boot `-sda4 8:4 0 119.6G 0 part `-coreos-luks-root-nocrypt 253:0 0 119.6G 0 dm /sysroot sdb 8:16 0 500G 0 disk", "sh-4.4#ls -l /dev/disk/by-id/ | grep sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-360050763808104bc2800000000000259 -> ../../sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-SIBM_2145_00e020412f0aXX00 -> ../../sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-0x60050763808104bc2800000000000259 -> ../../sdb", "scsi-0x60050763808104bc2800000000000259", "sudo chzdev -e <device_bus_id> 1", "sudo dasdfmt /dev/<device_name> -b 4096 -p --mode=full 1", "sudo fdasd -a /dev/<device_name> 1", "spec: flexibleScaling: true [...] status: failureDomain: host" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_ibm_z/deploy-using-local-storage-devices-ibmz
Chapter 1. Documentation moved
Chapter 1. Documentation moved The OpenShift sandboxed containers user guide and release notes have moved to a new location .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/openshift_sandboxed_containers/sandboxed-containers-moved
Chapter 2. Ceph Object Gateway and the S3 API
Chapter 2. Ceph Object Gateway and the S3 API As a developer, you can use a RESTful application programming interface (API) that is compatible with the Amazon S3 data access model. You can manage the buckets and objects stored in a Red Hat Ceph Storage cluster through the Ceph Object Gateway. 2.1. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 2.2. S3 limitations Important The following limitations should be used with caution. There are implications related to your hardware selections, so you should always discuss these requirements with your Red Hat account team. Maximum object size when using Amazon S3: Individual Amazon S3 objects can range in size from a minimum of 0B to a maximum of 5TB. The largest object that can be uploaded in a single PUT is 5GB. For objects larger than 100MB, you should consider using the Multipart Upload capability. Maximum metadata size when using Amazon S3: There is no defined limit on the total size of user metadata that can be applied to an object, but a single HTTP request is limited to 16,000 bytes. The amount of data overhead the Red Hat Ceph Storage cluster produces to store S3 objects and metadata: The estimate here is 200-300 bytes plus the length of the object name. Versioned objects consume additional space proportional to the number of versions. Also, transient overhead is produced during multi-part upload and other transactional updates, but these overheads are recovered during garbage collection. Additional Resources See the Red Hat Ceph Storage Developer Guide for details on the unsupported header fields . 2.3. Accessing the Ceph Object Gateway with the S3 API As a developer, you must configure access to the Ceph Object Gateway and the Secure Token Service (STS) before you can start using the Amazon S3 API. 2.3.1. Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. A RESTful client. 2.3.2. S3 authentication Requests to the Ceph Object Gateway can be either authenticated or unauthenticated. Ceph Object Gateway assumes unauthenticated requests are sent by an anonymous user. Ceph Object Gateway supports canned ACLs. For most use cases, clients use existing open source libraries like the Amazon SDK's AmazonS3Client for Java, and Python Boto. With open source libraries you simply pass in the access key and secret key and the library builds the request header and authentication signature for you. However, you can also create and sign requests yourself. Authenticating a request requires including an access key and a base64-encoded hash-based Message Authentication Code (HMAC) in the request before it is sent to the Ceph Object Gateway server. Ceph Object Gateway uses an S3-compatible authentication approach. Example In the above example, replace ACCESS_KEY with the value for the access key ID followed by a colon ( : ). Replace HASH_OF_HEADER_AND_SECRET with a hash of a canonicalized header string and the secret corresponding to the access key ID. Generate hash of header string and secret To generate the hash of the header string and secret: Get the value of the header string. Normalize the request header string into canonical form. Generate an HMAC using a SHA-1 hashing algorithm. Encode the HMAC result as base64. Normalize header To normalize the header into canonical form: Get all content- headers. Remove all content- headers except for content-type and content-md5 . Ensure the content- header names are lowercase. Sort the content- headers lexicographically. Ensure you have a Date header AND ensure the specified date uses GMT and not an offset. Get all headers beginning with x-amz- . Ensure that the x-amz- headers are all lowercase. Sort the x-amz- headers lexicographically. Combine multiple instances of the same field name into a single field and separate the field values with a comma. Replace white space and line breaks in header values with a single space. Remove white space before and after colons. Append a new line after each header. Merge the headers back into the request header. Replace the HASH_OF_HEADER_AND_SECRET with the base64-encoded HMAC string. A minimal scripted version of this signing flow is sketched below.
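The following is a minimal sketch of the signing flow described above for a simple GET request, written against Python's standard library only. The bucket name, object key, and the ACCESS_KEY and SECRET_KEY placeholders are illustrative assumptions, not values defined in this guide.

# Hedged sketch: compute the S3-compatible Authorization header for a simple GET.
# ACCESS_KEY and SECRET_KEY are placeholders for the radosgw user's credentials.
import base64
import hmac
from hashlib import sha1
from email.utils import formatdate

access_key = "ACCESS_KEY"
secret_key = "SECRET_KEY"
date = formatdate(usegmt=True)  # the Date header must use GMT

# Canonical string: HTTP verb, content-md5, content-type, date, canonicalized resource.
string_to_sign = "GET\n\n\n{}\n/my-new-bucket1/hello.txt".format(date)
digest = hmac.new(secret_key.encode("utf-8"),
                  string_to_sign.encode("utf-8"), sha1).digest()
signature = base64.b64encode(digest).decode("utf-8")

headers = {
    "Date": date,
    "Authorization": "AWS {}:{}".format(access_key, signature),
}
print(headers)

A library such as Boto performs the same canonicalization and signing automatically, which is why passing only the access key and secret key is usually sufficient.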
Additional Resources For additional details, consult the Signing and Authenticating REST Requests section of the Amazon Simple Storage Service documentation. 2.3.3. S3 server-side encryption The Ceph Object Gateway supports server-side encryption of uploaded objects for the S3 application programming interface (API). Server-side encryption means that the S3 client sends data over HTTP in its unencrypted form, and the Ceph Object Gateway stores that data in the Red Hat Ceph Storage cluster in encrypted form. Note Red Hat does NOT support S3 object encryption of Static Large Object (SLO) or Dynamic Large Object (DLO). Important To use encryption, client requests MUST send requests over an SSL connection. Red Hat does not support S3 encryption from a client unless the Ceph Object Gateway uses SSL. However, for testing purposes, administrators may disable SSL during testing by setting the rgw_crypt_require_ssl configuration setting to false at runtime, setting it to false in the Ceph configuration file and restarting the gateway instance, or setting it to false in the Ansible configuration files and replaying the Ansible playbooks for the Ceph Object Gateway. In a production environment, it might not be possible to send encrypted requests over SSL. In such a case, send requests using HTTP with server-side encryption. For information about how to configure HTTP with server-side encryption, see the Additional Resources section below. There are two options for the management of encryption keys: Customer-provided Keys When using customer-provided keys, the S3 client passes an encryption key along with each request to read or write encrypted data. It is the customer's responsibility to manage those keys. Customers must remember which key the Ceph Object Gateway used to encrypt each object. Ceph Object Gateway implements the customer-provided key behavior in the S3 API according to the Amazon SSE-C specification. Since the customer handles the key management and the S3 client passes keys to the Ceph Object Gateway, the Ceph Object Gateway requires no special configuration to support this encryption mode. A minimal client-side sketch of this mode follows at the end of this section. Key Management Service When using a key management service, the secure key management service stores the keys and the Ceph Object Gateway retrieves them on demand to serve requests to encrypt or decrypt data. Ceph Object Gateway implements the key management service behavior in the S3 API according to the Amazon SSE-KMS specification. Important Currently, the only tested key management implementations are HashiCorp Vault, and OpenStack Barbican. However, OpenStack Barbican is a Technology Preview and is not supported for use in production systems. Additional Resources Amazon SSE-C Amazon SSE-KMS Configuring server-side encryption The HashiCorp Vault
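The sketch below shows the customer-provided key (SSE-C) mode from the client side using boto3. The endpoint URL, bucket name, object key, and credential placeholders are assumptions for illustration; the key is generated locally and must be retained by the client in order to read the object back.

# Hedged sketch: upload and read back an object with a customer-provided key (SSE-C).
# boto3 base64-encodes the key and adds the key MD5 header automatically.
import os
import boto3

customer_key = os.urandom(32)  # 256-bit key managed entirely by the client

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com:443",  # SSE-C requires an SSL endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
s3.put_object(
    Bucket="my-new-bucket1",
    Key="hello.txt",
    Body=b"Hello World!",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
obj = s3.get_object(
    Bucket="my-new-bucket1",
    Key="hello.txt",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
print(obj["Body"].read())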
2.3.4. S3 access control lists Ceph Object Gateway supports S3-compatible Access Control Lists (ACL) functionality. An ACL is a list of access grants that specify which operations a user can perform on a bucket or on an object. Each grant has a different meaning when applied to a bucket versus applied to an object: Table 2.1. User Operations Permission Bucket Object READ Grantee can list the objects in the bucket. Grantee can read the object. WRITE Grantee can write or delete objects in the bucket. N/A READ_ACP Grantee can read bucket ACL. Grantee can read the object ACL. WRITE_ACP Grantee can write bucket ACL. Grantee can write to the object ACL. FULL_CONTROL Grantee has full permissions for objects in the bucket. Grantee can read or write to the object ACL. 2.3.5. Preparing access to the Ceph Object Gateway using S3 Complete the following prerequisites on the Ceph Object Gateway node before attempting to access the gateway server. Warning DO NOT modify the Ceph configuration file to use port 80 ; let Civetweb use the default Ansible-configured port of 8080 . Prerequisites Installation of the Ceph Object Gateway software. Root-level access to the Ceph Object Gateway node. Procedure As root , open port 8080 on the firewall: Add a wildcard to the DNS server that you are using for the gateway as mentioned in the Object Gateway Configuration and Administration Guide . You can also set up the gateway node for local DNS caching. To do so, execute the following steps: As root , install and set up dnsmasq : Replace IP_OF_GATEWAY_NODE and FQDN_OF_GATEWAY_NODE with the IP address and FQDN of the gateway node. As root , stop NetworkManager: As root , set the gateway server's IP as the nameserver: Replace IP_OF_GATEWAY_NODE and FQDN_OF_GATEWAY_NODE with the IP address and FQDN of the gateway node. Verify subdomain requests: Replace FQDN_OF_GATEWAY_NODE with the FQDN of the gateway node. Warning Setting up the gateway server for local DNS caching is for testing purposes only. You will not be able to access an outside network after doing this. It is strongly recommended to use a proper DNS server for the Red Hat Ceph Storage cluster and gateway node. Create the radosgw user for S3 access carefully as mentioned in the Object Gateway Configuration and Administration Guide and copy the generated access_key and secret_key . You will need these keys for S3 access and subsequent bucket management tasks. 2.3.6. Accessing the Ceph Object Gateway using Ruby AWS S3 You can use the Ruby programming language along with the aws-s3 gem for S3 access. Execute the steps mentioned below on the node used for accessing the Ceph Object Gateway server with Ruby AWS::S3 . Prerequisites User-level access to Ceph Object Gateway. Root-level access to the node accessing the Ceph Object Gateway. Internet access. Procedure Install the ruby package: Note The above command will install ruby and its essential dependencies like rubygems and ruby-libs . If the command does not install all the dependencies, install them separately. Install the aws-s3 Ruby package: Create a project directory: Create the connection file: Paste the following contents into the conn.rb file: Syntax Replace FQDN_OF_GATEWAY_NODE with the FQDN of the Ceph Object Gateway node. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that were generated when you created the radosgw user for S3 access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . Example Save the file and exit the editor.
Make the file executable: Run the file: If you have provided the values correctly in the file, the output of the command will be 0 . Create a new file for creating a bucket: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the output of the command is true it would mean that bucket my-new-bucket1 was created successfully. Create a new file for listing owned buckets: Paste the following content into the file: Save the file and exit the editor. Make the file executable: Run the file: The output should look something like this: Create a new file for creating an object: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: This will create a file hello.txt with the string Hello World! . Create a new file for listing a bucket's content: Paste the following content into the file: Save the file and exit the editor. Make the file executable. Run the file: The output will look something like this: Create a new file for deleting an empty bucket: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Note Edit the create_bucket.rb file to create empty buckets, for example: my-new-bucket4 , my-new-bucket5 . , edit the above mentioned del_empty_bucket.rb file accordingly before trying to delete empty buckets. Create a new file for deleting non-empty buckets: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Create a new file for deleting an object: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: This will delete the object hello.txt . 2.3.7. Accessing the Ceph Object Gateway using Ruby AWS SDK You can use the Ruby programming language along with aws-sdk gem for S3 access. Execute the steps mentioned below on the node used for accessing the Ceph Object Gateway server with Ruby AWS::SDK . Prerequisites User-level access to Ceph Object Gateway. Root-level access to the node accessing the Ceph Object Gateway. Internet access. Procedure Install the ruby package: Note The above command will install ruby and it's essential dependencies like rubygems and ruby-libs . If somehow the command does not install all the dependencies, install them separately. Install the aws-sdk Ruby package: Create a project directory: Create the connection file: Paste the following contents into the conn.rb file: Syntax Replace FQDN_OF_GATEWAY_NODE with the FQDN of the Ceph Object Gateway node. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that was generated when you created the radosgw user for S3 access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . Example Save the file and exit the editor. Make the file executable: Run the file: If you have provided the values correctly in the file, the output of the command will be 0 . Create a new file for creating a bucket: Paste the following contents into the file: Syntax Save the file and exit the editor. Make the file executable: Run the file: If the output of the command is true , this means that bucket my-new-bucket2 was created successfully. 
Create a new file for listing owned buckets: Paste the following content into the file: Save the file and exit the editor. Make the file executable: Run the file: The output should look something like this: Create a new file for creating an object: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: This will create a file hello.txt with the string Hello World! . Create a new file for listing a bucket's content: Paste the following content into the file: Save the file and exit the editor. Make the file executable. Run the file: The output will look something like this: Create a new file for deleting an empty bucket: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Note Edit the create_bucket.rb file to create empty buckets, for example: my-new-bucket6 , my-new-bucket7 . , edit the above mentioned del_empty_bucket.rb file accordingly before trying to delete empty buckets. Create a new file for deleting a non-empty bucket: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Create a new file for deleting an object: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: This will delete the object hello.txt . 2.3.8. Accessing the Ceph Object Gateway using PHP You can use PHP scripts for S3 access. This procedure provides some example PHP scripts to do various tasks, such as deleting a bucket or an object. Important The examples given below are tested against php v5.4.16 and aws-sdk v2.8.24 . DO NOT use the latest version of aws-sdk for php as it requires php >= 5.5+ . php 5.5 is not available in the default repositories of RHEL 7 . If you want to use php 5.5 , you will have to enable epel and other third party repositories. Also, the configuration options for php 5.5 and latest version of aws-sdk are different. Prerequisites Root-level access to a development workstation. Internet access. Procedure Install the php package: Download the zip archive of aws-sdk for PHP and extract it. Create a project directory: Copy the extracted aws directory to the project directory. For example: Create the connection file: Paste the following contents in the conn.php file: Syntax Replace FQDN_OF_GATEWAY_NODE with the FQDN of the gateway node. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that was generated when creating the radosgw user for S3 access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . Replace PATH_TO_AWS with the absolute path to the extracted aws directory that you copied to the php project directory. Save the file and exit the editor. Run the file: If you have provided the values correctly in the file, the output of the command will be 0 . Create a new file for creating a bucket: Paste the following contents into the new file: Syntax Save the file and exit the editor. Run the file: Create a new file for listing owned buckets: Paste the following content into the file: Syntax Save the file and exit the editor. 
Run the file: The output should look similar to this: Create an object by first creating a source file named hello.txt : Create a new php file: Paste the following contents into the file: Syntax Save the file and exit the editor. Run the file: This will create the object hello.txt in bucket my-new-bucket3 . Create a new file for listing a bucket's content: Paste the following content into the file: Syntax Save the file and exit the editor. Run the file: The output will look similar to this: Create a new file for deleting an empty bucket: Paste the following contents into the file: Syntax Save the file and exit the editor. Run the file: If the bucket is successfully deleted, the command will return 0 as output. Note Edit the create_bucket.php file to create empty buckets, for example: my-new-bucket4 , my-new-bucket5 . , edit the above mentioned del_empty_bucket.php file accordingly before trying to delete empty buckets. Important Deleting a non-empty bucket is currently not supported in PHP 2 and newer versions of aws-sdk . Create a new file for deleting an object: Paste the following contents into the file: Syntax Save the file and exit the editor. Run the file: This will delete the object hello.txt . 2.3.9. Accessing the Ceph Object Gateway using AWS CLI You can use the AWS CLI for S3 access. This procedure provides steps for installing AWS CLI and some example commands to perform various tasks, such as deleting an object from an MFA-Delete enabled bucket. Prerequisites User-level access to Ceph Object Gateway. Root-level access to a development workstation. A multi-factor authentication (MFA) TOTP token was created using radosgw-admin mfa create Procedure Install the awscli package: Configure awscli to access Ceph Object Storage using AWS CLI: Syntax Replace MY_PROFILE_NAME with the name you want to use to identify this profile. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that was generated when creating the radosgw user for S3 access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . Example Create an alias to point to the FQDN of your Ceph Object Gateway node: Syntax Replace FQDN_OF_GATEWAY_NODE with the FQDN of the Ceph Object Gateway node. Example Create a new bucket: Syntax Replace MY_PROFILE_NAME with the name you created to use this profile. Replace BUCKET_NAME with a name for your new bucket. Example List owned buckets: Syntax Replace MY_PROFILE_NAME with the name you created to use this profile. Example Configure a bucket for MFA-Delete: Syntax Replace MY_PROFILE_NAME with the name you created to use this profile. Replace BUCKET_NAME with the name of your new bucket. Replace TOTP_SERIAL with the string the represents the ID for the TOTP token and replace TOTP_PIN with the current pin displayed on your MFA authentication device. The TOTP_SERIAL is the string that was specified when you created the radosgw user for S3. See the Creating a new multi-factor authentication TOTP token section of the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for more details on creating a MFA TOTP token. See the Creating a seed for multi-factor authentication using oathtool section in the Red Hat Ceph Storage Developer Guide for details on creating a MFA seed with oathtool. Example View MFA-Delete status of the bucket versioning state: Syntax Replace MY_PROFILE_NAME with the name you created to use this profile. Replace BUCKET_NAME with the name of your new bucket. 
Example Add an object to the MFA-Delete enabled bucket: Syntax Replace MY_PROFILE_NAME with the name you created to use this profile. Replace BUCKET_NAME with the name of your new bucket. Replace OBJECT_KEY with the name that will uniquely identify the object in a bucket. Replace LOCAL_FILE with the name of the local file to upload. Example List the object versions for a specific object: Syntax Replace MY_PROFILE_NAME with the name you created to use this profile. Replace BUCKET_NAME with the name of your new bucket. Replace OBJECT_KEY with the name that was specified to uniquely identify the object in a bucket. Example Delete an object in an MFA-Delete enabled bucket: Syntax Replace MY_PROFILE_NAME with the name you created to use this profile. Replace BUCKET_NAME with the name of your bucket that contains the object to delete. Replace OBJECT_KEY with the name that uniquely identifies the object in a bucket. Replace VERSION_ID with the VersionID of the specific version of the object you want to delete. Replace TOTP_SERIAL with the string that represents the ID for the TOTP token and TOTP_PIN with the current pin displayed on your MFA authentication device. Example If the MFA token is not included, the request fails with the error shown below. Example List object versions to verify that the object was deleted from the MFA-Delete enabled bucket: Syntax Replace MY_PROFILE_NAME with the name you created to use this profile. Replace BUCKET_NAME with the name of your bucket. Replace OBJECT_KEY with the name that uniquely identifies the object in a bucket. Example 2.3.10. Creating a seed for multi-factor authentication using the oathtool command To set up multi-factor authentication (MFA), you must create a secret seed for use by the time-based one time password (TOTP) generator and the back-end MFA system. You can use oathtool to generate the hexadecimal seed and optionally qrencode to create a QR code to import the token into your MFA device. Prerequisites A Linux system. Access to the command line shell. root or sudo access to the Linux system. Procedure Install the oathtool package: Install the qrencode package: Generate a 30 character seed from the urandom Linux device file and store it in the shell variable SEED : Example Print the seed by running echo on the SEED variable: Example Feed the SEED into the oathtool command: Syntax Example Note The base32 secret is needed to add a token to the authenticator application on your MFA device. You can either use the QR code to import the token into the authenticator application or use the base32 secret to add it manually. Optional: Create a QR code image file to add the token to the authenticator: Syntax Replace TOTP_SERIAL with the string that represents the ID for the (TOTP) token and BASE32_SECRET with the Base32 secret generated by oathtool. Example Scan the generated QR code image file to add the token to the authenticator application on your MFA device. Create the multi-factor authentication TOTP token for the user using the radosgw-admin command. Additional Resources See the Creating a new multi-factor authentication TOTP token section of the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for more details on creating an MFA TOTP token.
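As an alternative to the oathtool and qrencode commands above, the following sketch produces a base32 seed and an otpauth:// provisioning URI with the third-party pyotp library. The library, the user name, and the issuer string are assumptions for illustration; the seed still has to be registered with radosgw-admin as described in the guide referenced above.

# Hedged sketch: generate a base32 TOTP seed and a provisioning URI with pyotp.
import pyotp

secret = pyotp.random_base32()           # seed to register with the MFA back end
totp = pyotp.TOTP(secret)

print("base32 secret :", secret)
print("current PIN   :", totp.now())     # same value an authenticator app would show
print("otpauth URI   :", totp.provisioning_uri(name="gwuser", issuer_name="ceph-rgw"))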
2.3.11. Secure Token Service The Amazon Web Services' Secure Token Service (STS) returns a set of temporary security credentials for authenticating users. The Ceph Object Gateway implements a subset of the STS application programming interfaces (APIs) to provide temporary credentials for identity and access management (IAM). Using these temporary credentials authenticates S3 calls by utilizing the STS engine in the Ceph Object Gateway. You can restrict temporary credentials even further by using an IAM policy, which is a parameter passed to the STS APIs. Additional Resources Amazon Web Services Secure Token Service welcome page . See the Configuring and using STS Lite with Keystone section of the Red Hat Ceph Storage Developer Guide for details on STS Lite and Keystone. See the Working around the limitations of using STS Lite with Keystone section of the Red Hat Ceph Storage Developer Guide for details on the limitations of STS Lite and Keystone. 2.3.11.1. The Secure Token Service application programming interfaces The Ceph Object Gateway implements the following Secure Token Service (STS) application programming interfaces (APIs): AssumeRole This API returns a set of temporary credentials for cross-account access. The temporary credentials are governed by both the permission policies attached to the role and any policy passed in with the AssumeRole API call. The RoleArn and the RoleSessionName request parameters are required, but the other request parameters are optional. RoleArn Description The role to assume for the Amazon Resource Name (ARN) with a length of 20 to 2048 characters. Type String Required Yes RoleSessionName Description Identifies the role session name to assume. The role session name can uniquely identify a session when different principals or different reasons assume a role. This parameter's value has a length of 2 to 64 characters. The equal sign (=), comma (,), period (.), at sign (@), and hyphen (-) characters are allowed, but spaces are not. Type String Required Yes Policy Description An identity and access management (IAM) policy in JSON format for use in an inline session. This parameter's value has a length of 1 to 2048 characters. Type String Required No DurationSeconds Description The duration of the session in seconds, with a minimum value of 900 seconds to a maximum value of 43200 seconds. The default value is 3600 seconds. Type Integer Required No ExternalId Description When assuming a role for another account, provide the unique external identifier if available. This parameter's value has a length of 2 to 1224 characters. Type String Required No SerialNumber Description A user's identification number from their associated multi-factor authentication (MFA) device. The parameter's value can be the serial number of a hardware device or a virtual device, with a length of 9 to 256 characters. Type String Required No TokenCode Description The value generated from the multi-factor authentication (MFA) device, if the trust policy requires MFA. If an MFA device is required, and if this parameter's value is empty or expired, then the AssumeRole call returns an "access denied" error message. This parameter's value has a fixed length of 6 characters. Type String Required No A minimal client call to the AssumeRole API is sketched below.
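The sketch below calls AssumeRole with boto3 against the Ceph Object Gateway STS endpoint and then uses the returned temporary credentials for an S3 call. The endpoint URL, region placeholder, role ARN, and credential names are illustrative assumptions; the role (for example, the S3Access role created later in this chapter) and the assuming user must already exist.

# Hedged sketch: obtain temporary credentials with AssumeRole and use them for S3.
import boto3

sts = boto3.client(
    "sts",
    endpoint_url="https://rgw.example.com:443",
    region_name="us-east-1",                     # placeholder; any configured region
    aws_access_key_id="ASSUMING_USER_ACCESS_KEY",
    aws_secret_access_key="ASSUMING_USER_SECRET_KEY",
)
resp = sts.assume_role(
    RoleArn="arn:aws:iam:::role/S3Access",
    RoleSessionName="demo-session",
    DurationSeconds=3600,
)
creds = resp["Credentials"]

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com:443",
    region_name="us-east-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])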
AssumeRoleWithWebIdentity This API returns a set of temporary credentials for users who have been authenticated by an application, such as an OpenID Connect or OAuth 2.0 Identity Provider. The RoleArn and the RoleSessionName request parameters are required, but the other request parameters are optional. RoleArn Description The role to assume for the Amazon Resource Name (ARN) with a length of 20 to 2048 characters. Type String Required Yes RoleSessionName Description Identifies the role session name to assume. The role session name can uniquely identify a session when different principals or different reasons assume a role. This parameter's value has a length of 2 to 64 characters. The equal sign (=), comma (,), period (.), at sign (@), and hyphen (-) characters are allowed, but spaces are not. Type String Required Yes Policy Description An identity and access management (IAM) policy in JSON format for use in an inline session. This parameter's value has a length of 1 to 2048 characters. Type String Required No DurationSeconds Description The duration of the session in seconds, with a minimum value of 900 seconds to a maximum value of 43200 seconds. The default value is 3600 seconds. Type Integer Required No ProviderId Description The fully qualified host component of the domain name from the identity provider. This parameter's value is only valid for OAuth 2.0 access tokens, with a length of 4 to 2048 characters. Type String Required No WebIdentityToken Description The OpenID Connect identity token or OAuth 2.0 access token provided from an identity provider. This parameter's value has a length of 4 to 2048 characters. Type String Required No Additional Resources See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details. Amazon Web Services Security Token Service, the AssumeRole action. Amazon Web Services Security Token Service, the AssumeRoleWithWebIdentity action. 2.3.11.2. Configuring the Secure Token Service Configure the Secure Token Service (STS) for use with the Ceph Object Gateway using Ceph Ansible. Note The S3 and STS APIs co-exist in the same namespace, and both can be accessed from the same endpoint in the Ceph Object Gateway. Prerequisites A Ceph Ansible administration node. A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. Procedure Open the group_vars/rgws.yml file for editing. Add the following lines: Replace STS_KEY with the key used to encrypt the session token. Save the changes to the group_vars/rgws.yml file. Rerun the appropriate Ceph Ansible playbook: Bare-metal deployments: Container deployments: Additional Resources See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs. 2.3.11.3. Creating a user for an OpenID Connect provider To establish trust between the Ceph Object Gateway and the OpenID Connect provider, create a user entity and a role trust policy. Prerequisites User-level access to the Ceph Object Gateway node. Procedure Create a new Ceph user: Syntax Example Configure the Ceph user capabilities: Syntax Example Add a condition to the role trust policy using the Secure Token Service (STS) API: Syntax Important The app_id in the syntax example above must match the AUD_FIELD field of the incoming token. Additional Resources See the Obtaining the Root CA Thumbprint for an OpenID Connect Identity Provider article on Amazon's website. See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs. See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details. 2.3.11.4. Obtaining a thumbprint of an OpenID Connect provider Use this procedure to get the OpenID Connect provider's (IDP) configuration document and certificate thumbprint. Prerequisites Installation of the openssl and curl packages.
Procedure Get the configuration document from the IDP's URL: Syntax Example Get the IDP certificate: Syntax Example Copy the result of the "x5c" response from the command and paste it into the certificate.crt file. Include -----BEGIN CERTIFICATE----- at the beginning and -----END CERTIFICATE----- at the end. Get the certificate thumbprint: Syntax Example Remove all the colons from the SHA1 fingerprint and use this as the input for creating the IDP entity in the IAM request. Additional Resources See the Obtaining the Root CA Thumbprint for an OpenID Connect Identity Provider article on Amazon's website. See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs. See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details. 2.3.11.5. Configuring and using STS Lite with Keystone (Technology Preview) The Amazon Secure Token Service (STS) and S3 APIs co-exist in the same namespace. The STS options can be configured in conjunction with the Keystone options. Note Both S3 and STS APIs can be accessed using the same endpoint in Ceph Object Gateway. Prerequisites Red Hat Ceph Storage 3.2 or higher. A running Ceph Object Gateway. Installation of the Boto Python module, version 3 or higher. Procedure Open and edit the group_vars/rgws.yml file with the following options: Replace STS_KEY with the key used to encrypt the session token. Rerun the appropriate Ceph Ansible playbook: Bare-metal deployments: Container deployments: Generate the EC2 credentials: Example Use the generated credentials to get back a set of temporary security credentials using the GetSessionToken API. Example The temporary credentials obtained can then be used to make S3 calls: Example Create a new S3Access role and configure a policy. Assign a user with administrative CAPS: Syntax Example Create the S3Access role: Syntax Example Attach a permission policy to the S3Access role: Syntax Example Now another user can assume the role of the gwadmin user. For example, the gwuser user can assume the permissions of the gwadmin user. Make a note of the assuming user's access_key and secret_key values. Example Use the AssumeRole API call, providing the access_key and secret_key values from the assuming user: Example Important The AssumeRole API requires the S3Access role. Additional Resources See the Test S3 Access section in the Red Hat Ceph Storage Object Gateway Guide for more information on installing the Boto Python module. See the Create a User section in the Red Hat Ceph Storage Object Gateway Guide for more information. 2.3.11.6. Working around the limitations of using STS Lite with Keystone (Technology Preview) A limitation with Keystone is that it does not support STS requests. Another limitation is that the payload hash is not included with the request. To work around these two limitations, the Boto authentication code must be modified. Prerequisites A running Red Hat Ceph Storage cluster, version 3.2 or higher. A running Ceph Object Gateway. Installation of the Boto Python module, version 3 or higher. Procedure Open and edit Boto's auth.py file. Add the following four lines to the code block:
class SigV4Auth(BaseSigner):
    """
    Sign a request with Signature V4.
    """
    REQUIRES_REGION = True

    def __init__(self, credentials, service_name, region_name):
        self.credentials = credentials
        # We initialize these value here so the unit tests can have
        # valid values.
        # But these will get overriden in ``add_auth`` later for real requests.
        self._region_name = region_name
        if service_name == 'sts':
            self._service_name = 's3'
        else:
            self._service_name = service_name
Add the following two lines to the code block:
def _modify_request_before_signing(self, request):
    if 'Authorization' in request.headers:
        del request.headers['Authorization']
    self._set_necessary_date_headers(request)
    if self.credentials.token:
        if 'X-Amz-Security-Token' in request.headers:
            del request.headers['X-Amz-Security-Token']
        request.headers['X-Amz-Security-Token'] = self.credentials.token
    if not request.context.get('payload_signing_enabled', True):
        if 'X-Amz-Content-SHA256' in request.headers:
            del request.headers['X-Amz-Content-SHA256']
        request.headers['X-Amz-Content-SHA256'] = UNSIGNED_PAYLOAD
    else:
        request.headers['X-Amz-Content-SHA256'] = self.payload(request)
Additional Resources See the Test S3 Access section in the Red Hat Ceph Storage Object Gateway Guide for more information on installing the Boto Python module. 2.3.12. Session tags for Attribute-based access control (ABAC) in STS Session tags are key-value pairs that can be passed while federating a user. They are passed as aws:PrincipalTag in the session or temporary credentials that are returned by the secure token service (STS). These principal tags consist of session tags that come in as part of the web token and tags that are attached to the role being assumed. Note Currently, the session tags are only supported as part of the web token passed to AssumeRoleWithWebIdentity . The tags always have to be specified in the following namespace: https://aws.amazon.com/tags . Important The trust policy must have sts:TagSession permission if the web token passed in by the federated user contains session tags. Otherwise, the AssumeRoleWithWebIdentity action fails. Example of the trust policy with sts:TagSession : Properties The following are the properties of session tags: Session tags can be multi-valued. Note Multi-valued session tags are not supported in Amazon Web Services (AWS). Keycloak can be set up as an OpenID Connect Identity Provider (IDP) with a maximum of 50 session tags. The maximum size of a key allowed is 128 characters. The maximum size of a value allowed is 256 characters. The tag or the value cannot start with aws: . Additional Resources See the Secure Token Service section in the Red Hat Ceph Storage Developer Guide for more information about the secure token service. 2.3.12.1. Tag keys The following are the tag keys that can be used in the role trust policy or the role permission policy. aws:RequestTag Description Compares the key-value pair passed in the request with the key-value pair in the role's trust policy. In the case of AssumeRoleWithWebIdentity , session tags can be used as aws:RequestTag in the role trust policy. Those session tags are passed by Keycloak in the web token. As a result, a federated user can assume a role. aws:PrincipalTag Description Compares the key-value pair attached to the principal with the key-value pair in the policy. In the case of AssumeRoleWithWebIdentity , session tags appear as principal tags in the temporary credentials once a user is authenticated. Those session tags are passed by Keycloak in the web token. They can be used as aws:PrincipalTag in the role permission policy. iam:ResourceTag Description Compares the key-value pair attached to the resource with the key-value pair in the policy.
In the case of AssumeRoleWithWebIdentity , tags attached to the role are compared with those in the trust policy to allow a user to assume a role. Note The Ceph Object Gateway now supports RESTful APIs for tagging, listing tags, and untagging actions on a role. aws:TagKeys Description Compares tags in the request with the tags in the policy. In the case of AssumeRoleWithWebIdentity , tags are used to check the tag keys in a role trust policy or permission policy before a user is allowed to assume a role. s3:ResourceTag Description Compares tags present on the S3 resource, that is a bucket or object, with the tags in the role's permission policy. It can be used for authorizing an S3 operation in the Ceph Object Gateway. However, this is not allowed in AWS. It is a key used to refer to tags that have been attached to an object or a bucket. Tags can be attached to an object or a bucket using the RESTful APIs available for the same. 2.3.12.2. S3 resource tags The following list shows which S3 resource tag type is supported for authorizing a particular operation. Tag type: Object tags Operations GetObject , GetObjectTags , DeleteObjectTags , DeleteObject , PutACLs , InitMultipart , AbortMultipart , ListMultipart , GetAttrs , PutObjectRetention , GetObjectRetention , PutObjectLegalHold , GetObjectLegalHold Tag type: Bucket tags Operations PutObjectTags , GetBucketTags , PutBucketTags , DeleteBucketTags , GetBucketReplication , DeleteBucketReplication , GetBucketVersioning , SetBucketVersioning , GetBucketWebsite , SetBucketWebsite , DeleteBucketWebsite , StatBucket , ListBucket , GetBucketLogging , GetBucketLocation , DeleteBucket , GetLC , PutLC , DeleteLC , GetCORS , PutCORS , GetRequestPayment , SetRequestPayment , PutBucketPolicy , GetBucketPolicy , DeleteBucketPolicy , PutBucketObjectLock , GetBucketObjectLock , GetBucketPolicyStatus , PutBucketPublicAccessBlock , GetBucketPublicAccessBlock , DeleteBucketPublicAccessBlock Tag type: Bucket tags for bucket ACLs, Object tags for object ACLs Operations GetACLs , PutACLs Tag type: Object tags of source object, Bucket tags of destination bucket Operations PutObject , CopyObject 2.4. S3 bucket operations As a developer, you can perform bucket operations with the Amazon S3 application programming interface (API) through the Ceph Object Gateway. The following table lists the Amazon S3 functional operations for buckets, along with the function's support status. Table 2.2. Bucket operations Feature Status Notes List Buckets Supported Create a Bucket Supported Different set of canned ACLs. Bucket Lifecycle Partially Supported Expiration , NoncurrentVersionExpiration and AbortIncompleteMultipartUpload supported. Put Bucket Lifecycle Partially Supported Expiration , NoncurrentVersionExpiration and AbortIncompleteMultipartUpload supported. Delete Bucket Lifecycle Supported Get Bucket Objects Supported Bucket Location Supported Get Bucket Version Supported Put Bucket Version Supported Delete Bucket Supported Get Bucket ACLs Supported Different set of canned ACLs Put Bucket ACLs Supported Different set of canned ACLs Get Bucket cors Supported Put Bucket cors Supported Delete Bucket cors Supported List Bucket Object Versions Supported Head Bucket Supported List Bucket Multipart Uploads Supported Bucket Policies Partially Supported Get a Bucket Request Payment Supported Put a Bucket Request Payment Supported Multi-tenant Bucket Operations Supported 2.4.1. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client.
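To complement the support table above, the following sketch issues a handful of the listed bucket operations with boto3. The endpoint URL, credentials, and bucket name are illustrative placeholders and not values defined in this guide.

# Hedged sketch: exercise a few supported bucket operations against the gateway.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="my-new-bucket1")                      # Create a Bucket
print([b["Name"] for b in s3.list_buckets()["Buckets"]])       # List Buckets
s3.put_bucket_versioning(                                      # Put Bucket Version
    Bucket="my-new-bucket1",
    VersioningConfiguration={"Status": "Enabled"},
)
print(s3.get_bucket_versioning(Bucket="my-new-bucket1"))       # Get Bucket Version
print(s3.get_bucket_location(Bucket="my-new-bucket1"))         # Bucket Location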
2.4.2. S3 create bucket notifications Create bucket notifications at the bucket level. The notification configuration has the Red Hat Ceph Storage Object Gateway S3 events, ObjectCreated and ObjectRemoved . These events need to be published, and a destination must be defined to send the bucket notifications to. Bucket notifications are S3 operations. To create a bucket notification for s3:objectCreate and s3:objectRemove events, use PUT: Example Important Red Hat supports ObjectCreate events, such as put , post , multipartUpload , and copy . Red Hat also supports ObjectRemove events, such as object_delete and s3_multi_object_delete . Request Entities NotificationConfiguration Description List of TopicConfiguration entities. Type Container Required Yes TopicConfiguration Description Id , Topic and list of Event entities. Type Container Required Yes id Description Name of the notification. Type String Required Yes Topic Description Topic Amazon Resource Name (ARN) Note The topic must be created beforehand. Type String Required Yes Event Description List of supported events. Multiple event entities can be used. If omitted, all events are handled. Type String Required No Filter Description S3Key , S3Metadata and S3Tags entities. Type Container Required No S3Key Description A list of FilterRule entities, for filtering based on the object key. At most, 3 entities may be in the list, for example Name would be prefix , suffix or regex . All filter rules in the list must match for the filter to match. Type Container Required No S3Metadata Description A list of FilterRule entities, for filtering based on object metadata. All filter rules in the list must match the metadata defined on the object. However, the object still matches if it has other metadata entries not listed in the filter. Type Container Required No S3Tags Description A list of FilterRule entities, for filtering based on object tags. All filter rules in the list must match the tags defined on the object. However, the object still matches if it has other tags not listed in the filter. Type Container Required No S3Key.FilterRule Description Name and Value entities. Name is prefix , suffix or regex . The Value holds the key prefix, key suffix or a regular expression for matching the key, accordingly. Type Container Required Yes S3Metadata.FilterRule Description Name and Value entities. Name is the name of the metadata attribute, for example x-amz-meta-xxx . The value is the expected value for this attribute. Type Container Required Yes S3Tags.FilterRule Description Name and Value entities. Name is the tag key, and the value is the tag value. Type Container Required Yes HTTP response 400 Status Code MalformedXML Description The XML is not well-formed. 400 Status Code InvalidArgument Description Missing Id or missing or invalid topic ARN or invalid event. 404 Status Code NoSuchBucket Description The bucket does not exist. 404 Status Code NoSuchKey Description The topic does not exist.
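As an illustration of the create-notification request described above, the sketch below uses boto3. The endpoint, credentials, bucket name, notification Id, and topic ARN are placeholders; as noted above, the topic itself must already exist on the gateway.

# Hedged sketch: configure ObjectCreated/ObjectRemoved notifications on a bucket.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
s3.put_bucket_notification_configuration(
    Bucket="my-new-bucket1",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "Id": "mynotif1",
                "TopicArn": "arn:aws:sns:default::mytopic",   # pre-created topic
                "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"],
            }
        ]
    },
)
# Read the configuration back; boto3 exposes the corresponding get call as well.
print(s3.get_bucket_notification_configuration(Bucket="my-new-bucket1"))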
2.4.3. S3 get bucket notifications Get a specific notification or list all the notifications configured on a bucket. Syntax Example Example Response Note The notification subresource returns the bucket notification configuration or an empty NotificationConfiguration element. The caller must be the bucket owner. Request Entities notification-id Description Name of the notification. All notifications are listed if the ID is not provided. Type String NotificationConfiguration Description List of TopicConfiguration entities. Type Container Required Yes TopicConfiguration Description Id , Topic and list of Event entities. Type Container Required Yes id Description Name of the notification. Type String Required Yes Topic Description Topic Amazon Resource Name (ARN) Note The topic must be created beforehand. Type String Required Yes Event Description Handled event. Multiple event entities may exist. Type String Required Yes Filter Description The filters for the specified configuration. Type Container Required No HTTP response 404 Status Code NoSuchBucket Description The bucket does not exist. 404 Status Code NoSuchKey Description The notification does not exist if it has been provided. 2.4.4. S3 delete bucket notifications Delete a specific or all notifications from a bucket. Note Notification deletion is an extension to the S3 notification API. Any defined notifications on a bucket are deleted when the bucket is deleted. Deleting an unknown notification, for example a double delete, is not considered an error. To delete a specific or all notifications use DELETE: Syntax Example Request Entities notification-id Description Name of the notification. All notifications on the bucket are deleted if the notification ID is not provided. Type String HTTP response 404 Status Code NoSuchBucket Description The bucket does not exist. 2.4.5. Accessing bucket host names There are two different modes of accessing the buckets. The first, and preferred, method identifies the bucket as the top-level directory in the URI. Example The second method identifies the bucket via a virtual bucket host name. Example Tip Red Hat prefers the first method, because the second method requires expensive domain certification and DNS wildcards. 2.4.6. S3 list buckets GET / returns a list of buckets created by the user making the request. GET / only returns buckets created by an authenticated user. You cannot make an anonymous request. Syntax Table 2.3. Response Entities Name Type Description Buckets Container Container for list of buckets. Bucket Container Container for bucket information. Name String Bucket name. CreationDate Date UTC time when the bucket was created. ListAllMyBucketsResult Container A container for the result. Owner Container A container for the bucket owner's ID and DisplayName . ID String The bucket owner's ID. DisplayName String The bucket owner's display name. 2.4.7. S3 return a list of bucket objects Returns a list of bucket objects. Syntax Table 2.4. Parameters Name Type Description prefix String Only returns objects that contain the specified prefix. delimiter String The delimiter between the prefix and the rest of the object name. marker String A beginning index for the list of objects returned. max-keys Integer The maximum number of keys to return. Default is 1000. Table 2.5. HTTP Response HTTP Status Status Code Description 200 OK Buckets retrieved GET / BUCKET returns a container for buckets with the following fields: Table 2.6. Bucket Response Entities Name Type Description ListBucketResult Entity The container for the list of objects. Name String The name of the bucket whose contents will be returned. Prefix String A prefix for the object keys. Marker String A beginning index for the list of objects returned. MaxKeys Integer The maximum number of keys returned. Delimiter String If set, objects with the same prefix will appear in the CommonPrefixes list. IsTruncated Boolean If true , only a subset of the bucket's contents were returned.
CommonPrefixes Container If multiple objects contain the same prefix, they will appear in this list. The ListBucketResult contains objects, where each object is within a Contents container. Table 2.7. Object Response Entities Name Type Description Contents Object A container for the object. Key String The object's key. LastModified Date The object's last-modified date/time. ETag String An MD5 hash of the object (entity tag). Size Integer The object's size. StorageClass String Should always return STANDARD . 2.4.8. S3 create a new bucket Creates a new bucket. To create a bucket, you must have a user ID and a valid AWS Access Key ID to authenticate requests. You cannot create buckets as an anonymous user. Constraints In general, bucket names should follow domain name constraints. Bucket names must be unique. Bucket names must begin and end with a lowercase letter. Bucket names can contain a dash (-). Syntax Table 2.8. Parameters Name Description Valid Values Required x-amz-acl Canned ACLs. private , public-read , public-read-write , authenticated-read No HTTP Response If the bucket name is unique, within constraints, and unused, the operation will succeed. If a bucket with the same name already exists and the user is the bucket owner, the operation will succeed. If the bucket name is already in use, the operation will fail. HTTP Status Status Code Description 409 BucketAlreadyExists Bucket already exists under a different user's ownership. 2.4.9. S3 delete a bucket Deletes a bucket. You can reuse bucket names following a successful bucket removal. Syntax Table 2.9. HTTP Response HTTP Status Status Code Description 204 No Content Bucket removed. 2.4.10. S3 bucket lifecycle You can use a bucket lifecycle configuration to manage your objects so they are stored effectively throughout their lifetime. The S3 API in the Ceph Object Gateway supports a subset of the AWS bucket lifecycle actions: Expiration : This defines the lifespan of objects within a bucket. It takes the number of days the object should live or an expiration date, at which point Ceph Object Gateway will delete the object. If versioning is not enabled on the bucket, Ceph Object Gateway will delete the object permanently. If versioning is enabled on the bucket, Ceph Object Gateway will create a delete marker for the current version, and then delete the current version. NoncurrentVersionExpiration : This defines the lifespan of non-current object versions within a bucket. To use this feature, the bucket must have versioning enabled. It takes the number of days a non-current object should live, at which point Ceph Object Gateway will delete the non-current object. AbortIncompleteMultipartUpload : This defines the number of days an incomplete multipart upload should live before it is aborted. The lifecycle configuration contains one or more rules using the <Rule> element. Example A lifecycle rule can apply to all or a subset of objects in a bucket based on the <Filter> element that you specify in the lifecycle rule. You can specify a filter in several ways: Key prefixes Object tags Both key prefix and one or more object tags Key prefixes You can apply a lifecycle rule to a subset of objects based on the key name prefix.
For example, specifying <keypre/> would apply to objects that begin with keypre/ : You can also apply different lifecycle rules to objects with different key prefixes: Object tags You can apply a lifecycle rule to only objects with a specific tag using the <Key> and <Value> elements: Both prefix and one or more tags In a lifecycle rule, you can specify a filter based on both the key prefix and one or more tags. They must be wrapped in the <And> element. A filter can have only one prefix, and zero or more tags: Additional Resources See the Red Hat Ceph Storage Developer Guide for details on getting a bucket lifecycle . See the Red Hat Ceph Storage Developer Guide for details on creating a bucket lifecycle . See the Red Hat Ceph Storage Developer Guide for details on deleting a bucket lifecycle . 2.4.11. S3 GET bucket lifecycle To get a bucket lifecycle, use GET and specify a destination bucket. Syntax Request Headers See the Common Request Headers for more information. Response The response contains the bucket lifecycle and its elements. 2.4.12. S3 create or replace a bucket lifecycle To create or replace a bucket lifecycle, use PUT and specify a destination bucket and a lifecycle configuration. The Ceph Object Gateway only supports a subset of the S3 lifecycle functionality. A scripted example that applies, retrieves, and deletes a lifecycle configuration is provided at the end of this chapter. Syntax Table 2.10. Request Headers Name Description Valid Values Required content-md5 A base64 encoded MD5 hash of the message. A string. No defaults or constraints. No Additional Resources See the Red Hat Ceph Storage Developer Guide for details on common Amazon S3 request headers . See the Red Hat Ceph Storage Developer Guide for details on Amazon S3 bucket lifecycles . 2.4.13. S3 delete a bucket lifecycle To delete a bucket lifecycle, use DELETE and specify a destination bucket. Syntax Request Headers The request does not contain any special elements. Response The response returns common response status. Additional Resources See Appendix A for Amazon S3 common request headers. See Appendix B for Amazon S3 common response status codes. 2.4.14. S3 get bucket location Retrieves the bucket's zone group. The user needs to be the bucket owner to call this. A bucket can be constrained to a zone group by providing LocationConstraint during a PUT request. Add the location subresource to the bucket resource as shown below. Syntax Table 2.11. Response Entities Name Type Description LocationConstraint String The zone group where the bucket resides; an empty string for the default zone group. 2.4.15. S3 get bucket versioning Retrieves the versioning state of a bucket. The user needs to be the bucket owner to call this. Add the versioning subresource to the bucket resource as shown below. Syntax 2.4.16. S3 put the bucket versioning This subresource sets the versioning state of an existing bucket. The user needs to be the bucket owner to set the versioning state. If the versioning state has never been set on a bucket, then it has no versioning state. Doing a GET versioning request does not return a versioning state value. Setting the bucket versioning state: Enabled : Enables versioning for the objects in the bucket. All objects added to the bucket receive a unique version ID. Suspended : Disables versioning for the objects in the bucket. All objects added to the bucket receive the version ID null. Syntax Table 2.12. Bucket Request Entities Name Type Description VersioningConfiguration container A container for the request. Status String Sets the versioning state of the bucket. Valid Values: Suspended/Enabled 2.4.17.
S3 get bucket access control lists Retrieves the bucket access control list. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the acl subresource to the bucket request as shown below. Syntax Table 2.13. Response Entities Name Type Description AccessControlPolicy Container A container for the response. AccessControlList Container A container for the ACL information. Owner Container A container for the bucket owner's ID and DisplayName . ID String The bucket owner's ID. DisplayName String The bucket owner's display name. Grant Container A container for Grantee and Permission . Grantee Container A container for the DisplayName and ID of the user receiving a grant of permission. Permission String The permission given to the Grantee on the bucket. 2.4.18. S3 put bucket Access Control Lists Sets an access control list on an existing bucket. The user needs to be the bucket owner or to have been granted WRITE_ACP permission on the bucket. Add the acl subresource to the bucket request as shown below. Syntax Table 2.14. Request Entities Name Type Description AccessControlPolicy Container A container for the request. AccessControlList Container A container for the ACL information. Owner Container A container for the bucket owner's ID and DisplayName . ID String The bucket owner's ID. DisplayName String The bucket owner's display name. Grant Container A container for Grantee and Permission . Grantee Container A container for the DisplayName and ID of the user receiving a grant of permission. Permission String The permission given to the Grantee on the bucket. 2.4.19. S3 get bucket cors Retrieves the cors configuration information set for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the cors subresource to the bucket request as shown below. Syntax 2.4.20. S3 put bucket cors Sets the cors configuration for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the cors subresource to the bucket request as shown below. Syntax 2.4.21. S3 delete a bucket cors Deletes the cors configuration information set for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the cors subresource to the bucket request as shown below. Syntax 2.4.22. S3 list bucket object versions Returns a list of metadata about all the versions of objects within a bucket. Requires READ access to the bucket. Add the versions subresource to the bucket request as shown below. Syntax You can specify parameters for GET / BUCKET ?versions , but none of them are required. Table 2.15. Parameters Name Type Description prefix String Returns objects whose keys contain the specified prefix. delimiter String The delimiter between the prefix and the rest of the object name. key-marker String The beginning marker for the list of uploads. max-keys Integer The maximum number of in-progress uploads. The default is 1000. version-id-marker String Specifies the object version to begin the list. Table 2.16. Response Entities Name Type Description KeyMarker String The key marker specified by the key-marker request parameter (if any). NextKeyMarker String The key marker to use in a subsequent request if IsTruncated is true . NextUploadIdMarker String The upload ID marker to use in a subsequent request if IsTruncated is true . IsTruncated Boolean If true , only a subset of the bucket's upload contents were returned.
Size Integer The size of the uploaded part. DisplayName String The owner's display name. ID String The owner's ID. Owner Container A container for the ID and DisplayName of the user who owns the object. StorageClass String The method used to store the resulting object. STANDARD or REDUCED_REDUNDANCY Version Container Container for the version information. versionId String Version ID of an object. versionIdMarker String The last version of the key in a truncated response. 2.4.23. S3 head bucket Calls HEAD on a bucket to determine if it exists and if the caller has access permissions. Returns 200 OK if the bucket exists and the caller has permissions; 404 Not Found if the bucket does not exist; and 403 Forbidden if the bucket exists but the caller does not have access permissions. Syntax 2.4.24. S3 list multipart uploads GET /?uploads returns a list of the current in-progress multipart uploads, that is, uploads that the application has initiated but that the service has not yet completed. Syntax You can specify parameters for GET / BUCKET ?uploads , but none of them are required. Table 2.17. Parameters Name Type Description prefix String Returns in-progress uploads whose keys contain the specified prefix. delimiter String The delimiter between the prefix and the rest of the object name. key-marker String The beginning marker for the list of uploads. max-keys Integer The maximum number of in-progress uploads. The default is 1000. max-uploads Integer The maximum number of multipart uploads. The range is 1-1000. The default is 1000. version-id-marker String Ignored if key-marker isn't specified. Specifies the ID of the first upload to list in lexicographical order at or following the ID . Table 2.18. Response Entities Name Type Description ListMultipartUploadsResult Container A container for the results. ListMultipartUploadsResult.Prefix String The prefix specified by the prefix request parameter (if any). Bucket String The bucket that will receive the bucket contents. KeyMarker String The key marker specified by the key-marker request parameter (if any). UploadIdMarker String The marker specified by the upload-id-marker request parameter (if any). NextKeyMarker String The key marker to use in a subsequent request if IsTruncated is true . NextUploadIdMarker String The upload ID marker to use in a subsequent request if IsTruncated is true . MaxUploads Integer The max uploads specified by the max-uploads request parameter. Delimiter String If set, objects with the same prefix will appear in the CommonPrefixes list. IsTruncated Boolean If true , only a subset of the bucket's upload contents were returned. Upload Container A container for Key , UploadId , InitiatorOwner , StorageClass , and Initiated elements. Key String The key of the object once the multipart upload is complete. UploadId String The ID that identifies the multipart upload. Initiator Container Contains the ID and DisplayName of the user who initiated the upload. DisplayName String The initiator's display name. ID String The initiator's ID. Owner Container A container for the ID and DisplayName of the user who owns the uploaded object. StorageClass String The method used to store the resulting object. STANDARD or REDUCED_REDUNDANCY Initiated Date The date and time the user initiated the upload. CommonPrefixes Container If multiple objects contain the same prefix, they will appear in this list. CommonPrefixes.Prefix String The substring of the key after the prefix as defined by the prefix request parameter. 2.4.25.
S3 bucket policies The Ceph Object Gateway supports a subset of the Amazon S3 policy language applied to buckets. Creation and Removal Ceph Object Gateway manages S3 Bucket policies through standard S3 operations rather than using the radosgw-admin CLI tool. Administrators may use the s3cmd command to set or delete a policy. Example Limitations Ceph Object Gateway only supports the following S3 actions: s3:AbortMultipartUpload s3:CreateBucket s3:DeleteBucketPolicy s3:DeleteBucket s3:DeleteBucketWebsite s3:DeleteObject s3:DeleteObjectVersion s3:GetBucketAcl s3:GetBucketCORS s3:GetBucketLocation s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketVersioning s3:GetBucketWebsite s3:GetLifecycleConfiguration s3:GetObjectAcl s3:GetObject s3:GetObjectTorrent s3:GetObjectVersionAcl s3:GetObjectVersion s3:GetObjectVersionTorrent s3:ListAllMyBuckets s3:ListBucketMultiPartUploads s3:ListBucket s3:ListBucketVersions s3:ListMultipartUploadParts s3:PutBucketAcl s3:PutBucketCORS s3:PutBucketPolicy s3:PutBucketRequestPayment s3:PutBucketVersioning s3:PutBucketWebsite s3:PutLifecycleConfiguration s3:PutObjectAcl s3:PutObject s3:PutObjectVersionAcl Note Ceph Object Gateway does not support setting policies on users, groups, or roles. The Ceph Object Gateway uses the RGW 'tenant' identifier in place of the Amazon twelve-digit account ID. Ceph Object Gateway administrators who want to use policies between Amazon Web Service (AWS) S3 and Ceph Object Gateway S3 will have to use the Amazon account ID as the tenant ID when creating users. With AWS S3, all tenants share a single namespace. By contrast, Ceph Object Gateway gives every tenant its own namespace of buckets. At present, Ceph Object Gateway clients trying to access a bucket belonging to another tenant MUST address it as tenant:bucket in the S3 request. In the AWS, a bucket policy can grant access to another account, and that account owner can then grant access to individual users with user permissions. Since Ceph Object Gateway does not yet support user, role, and group permissions, account owners will need to grant access directly to individual users. Important Granting an entire account access to a bucket grants access to ALL users in that account. Bucket policies do NOT support string interpolation. Ceph Object Gateway supports the following condition keys: aws:CurrentTime aws:EpochTime aws:PrincipalType aws:Referer aws:SecureTransport aws:SourceIp aws:UserAgent aws:username Ceph Object Gateway ONLY supports the following condition keys for the ListBucket action: s3:prefix s3:delimiter s3:max-keys Impact on Swift Ceph Object Gateway provides no functionality to set bucket policies under the Swift API. However, bucket policies that have been set with the S3 API govern Swift as well as S3 operations. Ceph Object Gateway matches Swift credentials against Principals specified in a policy. 2.4.26. S3 get the request payment configuration on a bucket Uses the requestPayment subresource to return the request payment configuration of a bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the requestPayment subresource to the bucket request as shown below. Syntax 2.4.27. S3 set the request payment configuration on a bucket Uses the requestPayment subresource to set the request payment configuration of a bucket. By default, the bucket owner pays for downloads from the bucket. 
This configuration parameter enables the bucket owner to specify that the person requesting the download will be charged for the request and the data download from the bucket. Add the requestPayment subresource to the bucket request as shown below. Syntax Table 2.19. Request Entities Name Type Description Payer Enum Specifies who pays for the download and request fees. RequestPaymentConfiguration Container A container for Payer . 2.4.28. Multi-tenant bucket operations When a client application accesses buckets, it always operates with the credentials of a particular user. In a Red Hat Ceph Storage cluster, every user belongs to a tenant. Consequently, every bucket operation has an implicit tenant in its context if no tenant is specified explicitly. Thus, multi-tenancy is completely backward compatible with previous releases, as long as the referred buckets and the referring user belong to the same tenant. Extensions employed to specify an explicit tenant differ according to the protocol and authentication system used. In the following example, a colon character separates the tenant and the bucket. Thus a sample URL would be: By contrast, a simple Python example separates the tenant and bucket in the bucket method itself: Example Note It is not possible to use S3-style subdomains with multi-tenancy, since host names cannot contain colons or any other separators that are not already valid in bucket names. Using a period creates an ambiguous syntax. Therefore, the bucket-in-URL-path format has to be used with multi-tenancy. Additional Resources See Multi Tenancy for additional details. 2.4.29. Additional Resources See the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for details on configuring a bucket website. 2.5. S3 object operations As a developer, you can perform object operations with the Amazon S3 application programming interface (API) through the Ceph Object Gateway. The following table lists the Amazon S3 functional operations for objects, along with the function's support status. Table 2.20. Object operations Get Object Supported Get Object Information Supported Put Object Supported Delete Object Supported Delete Multiple Objects Supported Get Object ACLs Supported Put Object ACLs Supported Copy Object Supported Post Object Supported Options Object Supported Initiate Multipart Upload Supported Add a Part to a Multipart Upload Supported List Parts of a Multipart Upload Supported Assemble Multipart Upload Supported Copy Multipart Upload Supported Abort Multipart Upload Supported Multi-Tenancy Supported 2.5.1. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 2.5.2. S3 get an object from a bucket Retrieves an object from a bucket: Syntax Add the versionId subresource to retrieve a particular version of the object: Syntax Table 2.21. Request Headers Name Description Valid Values Required range The range of the object to retrieve. Range: bytes=beginbyte-endbyte No if-modified-since Gets only if modified since the timestamp. Timestamp No if-unmodified-since Gets only if not modified since the timestamp. Timestamp No if-match Gets only if the object ETag matches the specified ETag. Entity Tag No if-none-match Gets only if the object ETag does not match the specified ETag. Entity Tag No Table 2.22. Response Headers Name Description Content-Range The data range; only returned if the range header field was specified in the request. x-amz-version-id Returns the version ID or null. 2.5.3. S3 get information on an object Returns information about an object.
This request will return the same header information as the Get Object request, but will include the metadata only, not the object data payload. Retrieves the current version of the object: Syntax Add the versionId subresource to retrieve info for a particular version: Syntax Table 2.23. Request Headers Name Description Valid Values Required range The range of the object to retrieve. Range: bytes=beginbyte-endbyte No if-modified-since Gets only if modified since the timestamp. Timestamp No if-unmodified-since Gets only if not modified since the timestamp. Timestamp No if-match Gets only if the object ETag matches the specified ETag. Entity Tag No if-none-match Gets only if the object ETag does not match the specified ETag. Entity Tag No Table 2.24. Response Headers Name Description x-amz-version-id Returns the version ID or null. 2.5.4. S3 add an object to a bucket Adds an object to a bucket. You must have write permissions on the bucket to perform this operation. Syntax Table 2.25. Request Headers Name Description Valid Values Required content-md5 A base64 encoded MD5 hash of the message. A string. No defaults or constraints. No content-type A standard MIME type. Any MIME type. Default: binary/octet-stream No x-amz-meta-<... > User metadata. Stored with the object. A string up to 8 KB. No defaults. No x-amz-acl A canned ACL. private , public-read , public-read-write , authenticated-read No Table 2.26. Response Headers Name Description x-amz-version-id Returns the version ID or null. 2.5.5. S3 delete an object Removes an object. Requires WRITE permission set on the containing bucket. If object versioning is on, deleting an object creates a delete marker. Syntax To delete an object when versioning is on, you must specify the versionId subresource and the version of the object to delete. 2.5.6. S3 delete multiple objects This API call deletes multiple objects from a bucket. Syntax 2.5.7. S3 get an object's Access Control List (ACL) Returns the ACL for the current version of the object: Syntax Add the versionId subresource to retrieve the ACL for a particular version: Syntax Table 2.27. Response Headers Name Description x-amz-version-id Returns the version ID or null. Table 2.28. Response Entities Name Type Description AccessControlPolicy Container A container for the response. AccessControlList Container A container for the ACL information. Owner Container A container for the object owner's ID and DisplayName . ID String The object owner's ID. DisplayName String The object owner's display name. Grant Container A container for Grantee and Permission . Grantee Container A container for the DisplayName and ID of the user receiving a grant of permission. Permission String The permission given to the Grantee on the object. 2.5.8. S3 set an object's Access Control List (ACL) Sets an object ACL for the current version of the object. Syntax Table 2.29. Request Entities Name Type Description AccessControlPolicy Container A container for the request. AccessControlList Container A container for the ACL information. Owner Container A container for the object owner's ID and DisplayName . ID String The object owner's ID. DisplayName String The object owner's display name. Grant Container A container for Grantee and Permission . Grantee Container A container for the DisplayName and ID of the user receiving a grant of permission. Permission String The permission given to the Grantee on the object. 2.5.9. S3 copy an object To copy an object, use PUT and specify a destination bucket and the object name. Syntax Table 2.30.
Request Headers Name Description Valid Values Required x-amz-copy-source The source bucket name + object name. BUCKET / OBJECT Yes x-amz-acl A canned ACL. private , public-read , public-read-write , authenticated-read No x-amz-copy-if-modified-since Copies only if modified since the timestamp. Timestamp No x-amz-copy-if-unmodified-since Copies only if unmodified since the timestamp. Timestamp No x-amz-copy-if-match Copies only if the object ETag matches the specified ETag. Entity Tag No x-amz-copy-if-none-match Copies only if the object ETag doesn't match the specified ETag. Entity Tag No Table 2.31. Response Entities Name Type Description CopyObjectResult Container A container for the response elements. LastModified Date The last modified date of the source object. Etag String The ETag of the new object. 2.5.10. S3 add an object to a bucket using HTML forms Adds an object to a bucket using HTML forms. You must have write permissions on the bucket to perform this operation. Syntax 2.5.11. S3 determine options for a request A preflight request to determine if an actual request can be sent with the specific origin, HTTP method, and headers. Syntax 2.5.12. S3 initiate a multipart upload Initiates a multi-part upload process. Returns an UploadId , which you can specify when adding additional parts, listing parts, and completing or abandoning a multi-part upload. A scripted example of the full multipart upload sequence is provided at the end of this chapter. Syntax Table 2.32. Request Headers Name Description Valid Values Required content-md5 A base64 encoded MD5 hash of the message. A string. No defaults or constraints. No content-type A standard MIME type. Any MIME type. Default: binary/octet-stream No x-amz-meta-<... > User metadata. Stored with the object. A string up to 8 KB. No defaults. No x-amz-acl A canned ACL. private , public-read , public-read-write , authenticated-read No Table 2.33. Response Entities Name Type Description InitiatedMultipartUploadsResult Container A container for the results. Bucket String The bucket that will receive the object contents. Key String The key specified by the key request parameter (if any). UploadId String The ID specified by the upload-id request parameter identifying the multipart upload (if any). 2.5.13. S3 add a part to a multipart upload Adds a part to a multi-part upload. Specify the uploadId subresource and the upload ID to add a part to a multi-part upload: Syntax The following HTTP response might be returned: Table 2.34. HTTP Response HTTP Status Status Code Description 404 NoSuchUpload The specified upload-id does not match any initiated upload on this object. 2.5.14. S3 list the parts of a multipart upload Specify the uploadId subresource and the upload ID to list the parts of a multi-part upload: Syntax Table 2.35. Response Entities Name Type Description InitiatedMultipartUploadsResult Container A container for the results. Bucket String The bucket that will receive the object contents. Key String The key specified by the key request parameter (if any). UploadId String The ID specified by the upload-id request parameter identifying the multipart upload (if any). Initiator Container Contains the ID and DisplayName of the user who initiated the upload. ID String The initiator's ID. DisplayName String The initiator's display name. Owner Container A container for the ID and DisplayName of the user who owns the uploaded object. StorageClass String The method used to store the resulting object.
STANDARD or REDUCED_REDUNDANCY PartNumberMarker String The part marker to use in a subsequent request if IsTruncated is true . Precedes the list. NextPartNumberMarker String The part marker to use in a subsequent request if IsTruncated is true . The end of the list. MaxParts Integer The max parts allowed in the response as specified by the max-parts request parameter. IsTruncated Boolean If true , only a subset of the object's upload contents were returned. Part Container A container for Key , Part , InitiatorOwner , StorageClass , and Initiated elements. PartNumber Integer The identification number of the part. ETag String The part's entity tag. Size Integer The size of the uploaded part. 2.5.15. S3 assemble the uploaded parts Assembles uploaded parts and creates a new object, thereby completing a multipart upload. Specify the uploadId subresource and the upload ID to complete a multi-part upload: Syntax Table 2.36. Request Entities Name Type Description Required CompleteMultipartUpload Container A container consisting of one or more parts. Yes Part Container A container for the PartNumber and ETag . Yes PartNumber Integer The identifier of the part. Yes ETag String The part's entity tag. Yes Table 2.37. Response Entities Name Type Description CompleteMultipartUploadResult Container A container for the response. Location URI The resource identifier (path) of the new object. Bucket String The name of the bucket that contains the new object. Key String The object's key. ETag String The entity tag of the new object. 2.5.16. S3 copy a multipart upload Uploads a part by copying data from an existing object as data source. Specify the uploadId subresource and the upload ID to perform a multi-part upload copy: Syntax Table 2.38. Request Headers Name Description Valid Values Required x-amz-copy-source The source bucket name and object name. BUCKET / OBJECT Yes x-amz-copy-source-range The range of bytes to copy from the source object. Range: bytes=first-last , where the first and last are the zero-based byte offsets to copy. For example, bytes=0-9 indicates that you want to copy the first ten bytes of the source. No Table 2.39. Response Entities Name Type Description CopyPartResult Container A container for all response elements. ETag String Returns the ETag of the new part. LastModified String Returns the date the part was last modified. For more information about this feature, see the Amazon S3 site . 2.5.17. S3 abort a multipart upload Aborts a multipart upload. Specify the uploadId subresource and the upload ID to abort a multi-part upload: Syntax 2.5.18. S3 Hadoop interoperability For data analytics applications that require Hadoop Distributed File System (HDFS) access, the Ceph Object Gateway can be accessed using the Apache S3A connector for Hadoop. The S3A connector is an open source tool that presents S3 compatible object storage as an HDFS file system with HDFS file system read and write semantics to the applications while data is stored in the Ceph Object Gateway. Ceph Object Gateway is fully compatible with the S3A connector that ships with Hadoop 2.7.3. 2.5.19. Additional Resources See the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for details on multi-tenancy. 2.6. Additional Resources See Appendix A for Amazon S3 common request headers. See Appendix B for Amazon S3 common response status codes. See Appendix C for unsupported header fields.
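The following sketch is not part of the upstream examples; it shows one way to drive the lifecycle operations described in sections 2.4.11 through 2.4.13 with the Python boto3 client, which this guide already uses for other operations. The endpoint URL, credentials, and bucket name are placeholders, and only the Expiration action with a key prefix filter shown above is assumed.

#!/usr/bin/env python3
# Hedged example: apply a lifecycle rule with a key prefix filter, read it
# back, and delete it again. Endpoint and credentials are placeholders.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',   # placeholder gateway endpoint
    aws_access_key_id='MY_ACCESS_KEY',
    aws_secret_access_key='MY_SECRET_KEY',
)

# PUT /BUCKET?lifecycle - expire objects under keypre/ after 10 days
s3.put_bucket_lifecycle_configuration(
    Bucket='testbucket',
    LifecycleConfiguration={
        'Rules': [
            {
                'ID': 'expire-keypre',
                'Filter': {'Prefix': 'keypre/'},
                'Status': 'Enabled',
                'Expiration': {'Days': 10},
            }
        ]
    },
)

# GET /BUCKET?lifecycle - print the rules that are now in effect
print(s3.get_bucket_lifecycle_configuration(Bucket='testbucket')['Rules'])

# DELETE /BUCKET?lifecycle - remove the configuration again
s3.delete_bucket_lifecycle(Bucket='testbucket')

Each call maps directly onto the GET, PUT, and DELETE requests on the lifecycle subresource shown in the Syntax blocks above.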
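Section 2.4.25 manages bucket policies with s3cmd. The same policy document can be set and removed with boto3 by using put_bucket_policy and delete_bucket_policy; the sketch below is an illustrative alternative, not an upstream example, and reuses the happybucket policy from the s3cmd example with the same placeholder endpoint and credentials as above.

#!/usr/bin/env python3
# Hedged example: set and then delete the bucket policy from section 2.4.25.
import json

import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',   # placeholder gateway endpoint
    aws_access_key_id='MY_ACCESS_KEY',
    aws_secret_access_key='MY_SECRET_KEY',
)

# Same policy document as the s3cmd example in section 2.4.25
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::usfolks:user/fred"]},
        "Action": "s3:PutObjectAcl",
        "Resource": ["arn:aws:s3:::happybucket/*"]
    }]
}

# Equivalent of `s3cmd setpolicy examplepol s3://happybucket`
s3.put_bucket_policy(Bucket='happybucket', Policy=json.dumps(policy))

# Equivalent of `s3cmd delpolicy s3://happybucket`
s3.delete_bucket_policy(Bucket='happybucket')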
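The multipart upload requests in sections 2.5.12 through 2.5.17 also map onto boto3 client methods. The following minimal sketch, again with placeholder endpoint, credentials, bucket, and key, walks through initiating an upload, adding parts, listing them, and assembling the final object; part sizing and error handling are simplified.

#!/usr/bin/env python3
# Hedged example: end-to-end multipart upload against the gateway.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',   # placeholder gateway endpoint
    aws_access_key_id='MY_ACCESS_KEY',
    aws_secret_access_key='MY_SECRET_KEY',
)

bucket, key = 'testbucket', 'large-object'

# POST /BUCKET/OBJECT?uploads - initiate the upload and obtain the UploadId
upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)['UploadId']

# PUT /BUCKET/OBJECT?partNumber=N&uploadId=... - upload each part
# (every part except the last one must be at least 5 MB)
parts = []
for part_number, chunk in enumerate([b'a' * 5 * 1024 * 1024, b'tail'], start=1):
    response = s3.upload_part(
        Bucket=bucket, Key=key,
        PartNumber=part_number, UploadId=upload_id, Body=chunk,
    )
    parts.append({'PartNumber': part_number, 'ETag': response['ETag']})

# GET /BUCKET/OBJECT?uploadId=... - list the parts uploaded so far
print(s3.list_parts(Bucket=bucket, Key=key, UploadId=upload_id)['Parts'])

# POST /BUCKET/OBJECT?uploadId=... - assemble the parts into the final object
s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload_id,
    MultipartUpload={'Parts': parts},
)

# To abandon the upload instead: DELETE /BUCKET/OBJECT?uploadId=...
# s3.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)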
[ "HTTP/1.1 PUT /buckets/bucket/object.mpeg Host: cname.domain.com Date: Mon, 2 Jan 2012 00:01:01 +0000 Content-Encoding: mpeg Content-Length: 9999999 Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "firewall-cmd --zone=public --add-port=8080/tcp --permanent firewall-cmd --reload", "yum install dnsmasq echo \"address=/. FQDN_OF_GATEWAY_NODE / IP_OF_GATEWAY_NODE \" | tee --append /etc/dnsmasq.conf systemctl start dnsmasq systemctl enable dnsmasq", "systemctl stop NetworkManager systemctl disable NetworkManager", "echo \"DNS1= IP_OF_GATEWAY_NODE \" | tee --append /etc/sysconfig/network-scripts/ifcfg-eth0 echo \" IP_OF_GATEWAY_NODE FQDN_OF_GATEWAY_NODE \" | tee --append /etc/hosts systemctl restart network systemctl enable network systemctl restart dnsmasq", "[user@rgw ~]USD ping mybucket. FQDN_OF_GATEWAY_NODE", "yum install ruby", "gem install aws-s3", "[user@dev ~]USD mkdir ruby_aws_s3 [user@dev ~]USD cd ruby_aws_s3", "[user@dev ~]USD vim conn.rb", "#!/usr/bin/env ruby require 'aws/s3' require 'resolv-replace' AWS::S3::Base.establish_connection!( :server => ' FQDN_OF_GATEWAY_NODE ', :port => '8080', :access_key_id => ' MY_ACCESS_KEY ', :secret_access_key => ' MY_SECRET_KEY ' )", "#!/usr/bin/env ruby require 'aws/s3' require 'resolv-replace' AWS::S3::Base.establish_connection!( :server => 'testclient.englab.pnq.redhat.com', :port => '8080', :access_key_id => '98J4R9P22P5CDL65HKP8', :secret_access_key => '6C+jcaP0dp0+FZfrRNgyGA9EzRy25pURldwje049' )", "[user@dev ~]USD chmod +x conn.rb", "[user@dev ~]USD ./conn.rb | echo USD?", "[user@dev ~]USD vim create_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Bucket.create('my-new-bucket1')", "[user@dev ~]USD chmod +x create_bucket.rb", "[user@dev ~]USD ./create_bucket.rb", "[user@dev ~]USD vim list_owned_buckets.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Service.buckets.each do |bucket| puts \"{bucket.name}\\t{bucket.creation_date}\" end", "[user@dev ~]USD chmod +x list_owned_buckets.rb", "[user@dev ~]USD ./list_owned_buckets.rb", "my-new-bucket1 2020-01-21 10:33:19 UTC", "[user@dev ~]USD vim create_object.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::S3Object.store( 'hello.txt', 'Hello World!', 'my-new-bucket1', :content_type => 'text/plain' )", "[user@dev ~]USD chmod +x create_object.rb", "[user@dev ~]USD ./create_object.rb", "[user@dev ~]USD vim list_bucket_content.rb", "#!/usr/bin/env ruby load 'conn.rb' new_bucket = AWS::S3::Bucket.find('my-new-bucket1') new_bucket.each do |object| puts \"{object.key}\\t{object.about['content-length']}\\t{object.about['last-modified']}\" end", "[user@dev ~]USD chmod +x list_bucket_content.rb", "[user@dev ~]USD ./list_bucket_content.rb", "hello.txt 12 Fri, 22 Jan 2020 15:54:52 GMT", "[user@dev ~]USD vim del_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Bucket.delete('my-new-bucket1')", "[user@dev ~]USD chmod +x del_empty_bucket.rb", "[user@dev ~]USD ./del_empty_bucket.rb | echo USD?", "[user@dev ~]USD vim del_non_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Bucket.delete('my-new-bucket1', :force => true)", "[user@dev ~]USD chmod +x del_non_empty_bucket.rb", "[user@dev ~]USD ./del_non_empty_bucket.rb | echo USD?", "[user@dev ~]USD vim delete_object.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::S3Object.delete('hello.txt', 'my-new-bucket1')", "[user@dev ~]USD chmod +x delete_object.rb", "[user@dev ~]USD ./delete_object.rb", "yum install ruby", "gem install aws-sdk", "[user@dev ~]USD mkdir ruby_aws_sdk [user@dev ~]USD cd ruby_aws_sdk", 
"[user@ruby_aws_sdk]USD vim conn.rb", "#!/usr/bin/env ruby require 'aws-sdk' require 'resolv-replace' Aws.config.update( endpoint: 'http:// FQDN_OF_GATEWAY_NODE :8080', access_key_id: ' MY_ACCESS_KEY ', secret_access_key: ' MY_SECRET_KEY ', force_path_style: true, region: 'us-east-1' )", "#!/usr/bin/env ruby require 'aws-sdk' require 'resolv-replace' Aws.config.update( endpoint: 'http://testclient.englab.pnq.redhat.com:8080', access_key_id: '98J4R9P22P5CDL65HKP8', secret_access_key: '6C+jcaP0dp0+FZfrRNgyGA9EzRy25pURldwje049', force_path_style: true, region: 'us-east-1' )", "[user@ruby_aws_sdk]USD chmod +x conn.rb", "[user@ruby_aws_sdk]USD ./conn.rb | echo USD?", "[user@ruby_aws_sdk]USD vim create_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.create_bucket(bucket: 'my-new-bucket2')", "[user@ruby_aws_sdk]USD chmod +x create_bucket.rb", "[user@ruby_aws_sdk]USD ./create_bucket.rb", "[user@ruby_aws_sdk]USD vim list_owned_buckets.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.list_buckets.buckets.each do |bucket| puts \"{bucket.name}\\t{bucket.creation_date}\" end", "[user@ruby_aws_sdk]USD chmod +x list_owned_buckets.rb", "[user@ruby_aws_sdk]USD ./list_owned_buckets.rb", "my-new-bucket2 2022-04-21 10:33:19 UTC", "[user@ruby_aws_sdk]USD vim create_object.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.put_object( key: 'hello.txt', body: 'Hello World!', bucket: 'my-new-bucket2', content_type: 'text/plain' )", "[user@ruby_aws_sdk]USD chmod +x create_object.rb", "[user@ruby_aws_sdk]USD ./create_object.rb", "[user@ruby_aws_sdk]USD vim list_bucket_content.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.list_objects(bucket: 'my-new-bucket2').contents.each do |object| puts \"{object.key}\\t{object.size}\" end", "[user@ruby_aws_sdk]USD chmod +x list_bucket_content.rb", "[user@ruby_aws_sdk]USD ./list_bucket_content.rb", "hello.txt 12 Fri, 22 Apr 2022 15:54:52 GMT", "[user@ruby_aws_sdk]USD vim del_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.delete_bucket(bucket: 'my-new-bucket2')", "[user@ruby_aws_sdk]USD chmod +x del_empty_bucket.rb", "[user@ruby_aws_sdk]USD ./del_empty_bucket.rb | echo USD?", "[user@ruby_aws_sdk]USD vim del_non_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new Aws::S3::Bucket.new('my-new-bucket2', client: s3_client).clear! 
s3_client.delete_bucket(bucket: 'my-new-bucket2')", "[user@ruby_aws_sdk]USD chmod +x del_non_empty_bucket.rb", "[user@ruby_aws_sdk]USD ./del_non_empty_bucket.rb | echo USD?", "[user@ruby_aws_sdk]USD vim delete_object.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.delete_object(key: 'hello.txt', bucket: 'my-new-bucket2')", "[user@ruby_aws_sdk]USD chmod +x delete_object.rb", "[user@ruby_aws_sdk]USD ./delete_object.rb", "yum install php", "[user@dev ~]USD mkdir php_s3 [user@dev ~]USD cd php_s3", "[user@php_s3]USD cp -r ~/Downloads/aws/ ~/php_s3/", "[user@php_s3]USD vim conn.php", "<?php define('AWS_KEY', ' MY_ACCESS_KEY '); define('AWS_SECRET_KEY', ' MY_SECRET_KEY '); define('HOST', ' FQDN_OF_GATEWAY_NODE '); define('PORT', '8080'); // require the AWS SDK for php library require '/ PATH_TO_AWS /aws-autoloader.php'; use Aws\\S3\\S3Client; // Establish connection with host using S3 Client client = S3Client::factory(array( 'base_url' => HOST , 'port' => PORT , 'key' => AWS_KEY , 'secret' => AWS_SECRET_KEY )); ?>", "[user@php_s3]USD php -f conn.php | echo USD?", "[user@php_s3]USD vim create_bucket.php", "<?php include 'conn.php'; client->createBucket(array('Bucket' => 'my-new-bucket3')); ?>", "[user@php_s3]USD php -f create_bucket.php", "[user@php_s3]USD vim list_owned_buckets.php", "<?php include 'conn.php'; blist = client->listBuckets(); echo \"Buckets belonging to \" . blist['Owner']['ID'] . \":\\n\"; foreach (blist['Buckets'] as b) { echo \"{b['Name']}\\t{b['CreationDate']}\\n\"; } ?>", "[user@php_s3]USD php -f list_owned_buckets.php", "my-new-bucket3 2022-04-21 10:33:19 UTC", "[user@php_s3]USD echo \"Hello World!\" > hello.txt", "[user@php_s3]USD vim create_object.php", "<?php include 'conn.php'; key = 'hello.txt'; source_file = './hello.txt'; acl = 'private'; bucket = 'my-new-bucket3'; client->upload(bucket, key, fopen(source_file, 'r'), acl); ?>", "[user@php_s3]USD php -f create_object.php", "[user@php_s3]USD vim list_bucket_content.php", "<?php include 'conn.php'; o_iter = client->getIterator('ListObjects', array( 'Bucket' => 'my-new-bucket3' )); foreach (o_iter as o) { echo \"{o['Key']}\\t{o['Size']}\\t{o['LastModified']}\\n\"; } ?>", "[user@php_s3]USD php -f list_bucket_content.php", "hello.txt 12 Fri, 22 Apr 2022 15:54:52 GMT", "[user@php_s3]USD vim del_empty_bucket.php", "<?php include 'conn.php'; client->deleteBucket(array('Bucket' => 'my-new-bucket3')); ?>", "[user@php_s3]USD php -f del_empty_bucket.php | echo USD?", "[user@php_s3]USD vim delete_object.php", "<?php include 'conn.php'; client->deleteObject(array( 'Bucket' => 'my-new-bucket3', 'Key' => 'hello.txt', )); ?>", "[user@php_s3]USD php -f delete_object.php", "[user@dev]USD pip3 install --user awscli", "aws configure --profile= MY_PROFILE_NAME AWS Access Key ID [None]: MY_ACCESS_KEY AWS Secret Access Key [None]: MY_SECRET_KEY Default region name [None]: Default output format [None]:", "[user@dev]USD aws configure --profile=ceph AWS Access Key ID [None]: 12345 AWS Secret Access Key [None]: 67890 Default region name [None]: Default output format [None]:", "alias aws=\"aws --endpoint-url=http:// FQDN_OF_GATEWAY_NODE :8080\"", "[user@dev]USD alias aws=\"aws --endpoint-url=http://testclient.englab.pnq.redhat.com:8080\"", "aws --profile= MY_PROFILE_NAME s3api create-bucket --bucket BUCKET_NAME", "[user@dev]USD aws --profile=ceph s3api create-bucket --bucket mybucket", "aws --profile= MY_PROFILE_NAME s3api list-buckets", "[user@dev]USD aws --profile=ceph s3api list-buckets { \"Buckets\": [ { 
\"Name\": \"mybucket\", \"CreationDate\": \"2021-08-31T16:46:15.257Z\" } ], \"Owner\": { \"DisplayName\": \"User\", \"ID\": \"user\" } }", "aws --profile= MY_PROFILE_NAME s3api put-bucket-versioning --bucket BUCKET_NAME --versioning-configuration '{\"Status\":\"Enabled\",\"MFADelete\":\"Enabled\"}' --mfa ' TOTP_SERIAL TOTP_PIN '", "[user@dev]USD aws --profile=ceph s3api put-bucket-versioning --bucket mybucket --versioning-configuration '{\"Status\":\"Enabled\",\"MFADelete\":\"Enabled\"}' --mfa 'MFAtest 232009'", "aws --profile= MY_PROFILE_NAME s3api get-bucket-versioning --bucket BUCKET_NAME", "[user@dev]USD aws --profile=ceph s3api get-bucket-versioning --bucket mybucket { \"Status\": \"Enabled\", \"MFADelete\": \"Enabled\" }", "aws --profile= MY_PROFILE_NAME s3api put-object --bucket BUCKET_NAME --key OBJECT_KEY --body LOCAL_FILE", "[user@dev]USD aws --profile=ceph s3api put-object --bucket mybucket --key example --body testfile { \"ETag\": \"\\\"5679b828547a4b44cfb24a23fd9bb9d5\\\"\", \"VersionId\": \"3VyyYPTEuIofdvMPWbr1znlOu7lJE3r\" }", "aws --profile= MY_PROFILE_NAME s3api list-object-versions --bucket BUCKET_NAME --key OBJEC_KEY ]", "[user@dev]USD aws --profile=ceph s3api list-object-versions --bucket mybucket --key example { \"IsTruncated\": false, \"KeyMarker\": \"example\", \"VersionIdMarker\": \"\", \"Versions\": [ { \"ETag\": \"\\\"5679b828547a4b44cfb24a23fd9bb9d5\\\"\", \"Size\": 196, \"StorageClass\": \"STANDARD\", \"Key\": \"example\", \"VersionId\": \"3VyyYPTEuIofdvMPWbr1znlOu7lJE3r\", \"IsLatest\": true, \"LastModified\": \"2021-08-31T17:48:45.484Z\", \"Owner\": { \"DisplayName\": \"User\", \"ID\": \"user\" } } ], \"Name\": \"mybucket\", \"Prefix\": \"\", \"MaxKeys\": 1000, \"EncodingType\": \"url\" }", "aws --profile= MY_PROFILE_NAME s3api delete-object --bucket BUCKET_NAME --key OBJECT_KEY --version-id VERSION_ID --mfa ' TOTP_SERIAL TOTP_PIN '", "[user@dev]USD aws --profile=ceph s3api delete-object --bucket mybucket --key example --version-id 3VyyYPTEuIofdvMPWbr1znlOu7lJE3r --mfa 'MFAtest 420797' { \"VersionId\": \"3VyyYPTEuIofdvMPWbr1znlOu7lJE3r\" }", "[user@dev]USD aws --profile=ceph s3api delete-object --bucket mybucket --key example --version-id 3VyyYPTEuIofdvMPWbr1znlOu7lJE3r An error occurred (AccessDenied) when calling the DeleteObject operation: Unknown", "aws --profile= MY_PROFILE_NAME s3api list-object-versions --bucket BUCKET_NAME --key OBJECT_KEY", "[user@dev]USD aws --profile=ceph s3api list-object-versions --bucket mybucket --key example { \"IsTruncated\": false, \"KeyMarker\": \"example\", \"VersionIdMarker\": \"\", \"Name\": \"mybucket\", \"Prefix\": \"\", \"MaxKeys\": 1000, \"EncodingType\": \"url\" }", "dnf install oathtool", "dnf install qrencode", "[user@dev]USD SEED=USD(head -10 /dev/urandom | sha512sum | cut -b 1-30)", "[user@dev]USD echo USDSEED BA6GLJBJIKC3D7W7YFYXXAQ7", "oathtool -v -d6 USDSEED", "[user@dev]USD oathtool -v -d6 USDSEED Hex secret: 083c65a4294285b1fedfc1717b821f Base32 secret: BA6GLJBJIKC3D7W7YFYXXAQ7 Digits: 6 Window size: 0 Start counter: 0x0 (0) 823182", "qrencode -o /tmp/user.png 'otpauth://totp/ TOTP_SERIAL ?secret=_BASE32_SECRET'", "[user@dev]USD qrencode -o /tmp/user.png 'otpauth://totp/MFAtest?secret=BA6GLJBJIKC3D7W7YFYXXAQ7'", "rgw_sts_key = STS_KEY rgw_s3_auth_use_sts = true", "[user@admin ceph-ansible]USD ansible-playbook site.yml --limit rgws", "[user@admin ceph-ansible]USD ansible-playbook site-docker.yml --limit rgws", "radosgw-admin --uid USER_NAME --display-name \" DISPLAY_NAME \" --access_key USER_NAME --secret 
SECRET user create", "[user@rgw ~]USD radosgw-admin --uid TESTER --display-name \"TestUser\" --access_key TESTER --secret test123 user create", "radosgw-admin caps add --uid=\" USER_NAME \" --caps=\"oidc-provider=*\"", "[user@rgw ~]USD radosgw-admin caps add --uid=\"TESTER\" --caps=\"oidc-provider=*\"", "\"{\\\"Version\\\":\\\"2020-01-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"Federated\\\":[\\\"arn:aws:iam:::oidc-provider/ IDP_URL \\\"]},\\\"Action\\\":[\\\"sts:AssumeRoleWithWebIdentity\\\"],\\\"Condition\\\":{\\\"StringEquals\\\":{\\\" IDP_URL :app_id\\\":\\\" AUD_FIELD \\\"\\}\\}\\}\\]\\}\"", "curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \" IDP_URL :8000/ CONTEXT /realms/ REALM /.well-known/openid-configuration\" | jq .", "[user@client ~]USD curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \"http://www.example.com:8000/auth/realms/quickstart/.well-known/openid-configuration\" | jq .", "curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \" IDP_URL / CONTEXT /realms/ REALM /protocol/openid-connect/certs\" | jq .", "[user@client ~]USD curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \"http://www.example.com/auth/realms/quickstart/protocol/openid-connect/certs\" | jq .", "openssl x509 -in CERT_FILE -fingerprint -noout", "[user@client ~]USD openssl x509 -in certificate.crt -fingerprint -noout SHA1 Fingerprint=F7:D7:B3:51:5D:D0:D3:19:DD:21:9A:43:A9:EA:72:7A:D6:06:52:87", "rgw_sts_key = STS_KEY rgw_s3_auth_use_sts = true", "[user@admin ceph-ansible]USD ansible-playbook site.yml --limit rgws", "[user@admin ceph-ansible]USD ansible-playbook site-docker.yml --limit rgws", "[user@osp ~]USD openstack ec2 credentials create +------------+--------------------------------------------------------+ | Field | Value | +------------+--------------------------------------------------------+ | access | b924dfc87d454d15896691182fdeb0ef | | links | {u'self': u'http://192.168.0.15/identity/v3/users/ | | | 40a7140e424f493d8165abc652dc731c/credentials/ | | | OS-EC2/b924dfc87d454d15896691182fdeb0ef'} | | project_id | c703801dccaf4a0aaa39bec8c481e25a | | secret | 6a2142613c504c42a94ba2b82147dc28 | | trust_id | None | | user_id | 40a7140e424f493d8165abc652dc731c | +------------+--------------------------------------------------------+", "import boto3 access_key = b924dfc87d454d15896691182fdeb0ef secret_key = 6a2142613c504c42a94ba2b82147dc28 client = boto3.client('sts', aws_access_key_id=access_key, aws_secret_access_key=secret_key, endpoint_url=https://www.example.com/rgw, region_name='', ) response = client.get_session_token( DurationSeconds=43200 )", "s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=https://www.example.com/s3, region_name='') bucket = s3client.create_bucket(Bucket='my-new-shiny-bucket') response = s3client.list_buckets() for bucket in response[\"Buckets\"]: print \"{name}\\t{created}\".format( name = bucket['Name'], created = bucket['CreationDate'], )", "radosgw-admin caps add --uid=\" USER \" --caps=\"roles=*\"", "[user@client]USD radosgw-admin caps add --uid=\"gwadmin\" --caps=\"roles=*\"", "radosgw-admin role create --role-name= ROLE_NAME --path= PATH --assume-role-policy-doc= TRUST_POLICY_DOC", "[user@client]USD radosgw-admin role create --role-name=S3Access 
--path=/application_abc/component_xyz/ --assume-role-policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"AWS\\\":\\[\\\"arn:aws:iam:::user/TESTER\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRole\\\"\\]\\}\\]\\}", "radosgw-admin role-policy put --role-name= ROLE_NAME --policy-name= POLICY_NAME --policy-doc= PERMISSION_POLICY_DOC", "[user@client]USD radosgw-admin role-policy put --role-name=S3Access --policy-name=Policy --policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\[\\\"s3:*\\\"\\],\\\"Resource\\\":\\\"arn:aws:s3:::example_bucket\\\"\\}\\]\\}", "[user@client]USD radosgw-admin user info --uid=gwuser | grep -A1 access_key", "import boto3 access_key = 11BS02LGFB6AL6H1ADMW secret_key = vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY client = boto3.client('sts', aws_access_key_id=access_key, aws_secret_access_key=secret_key, endpoint_url=https://www.example.com/rgw, region_name='', ) response = client.assume_role( RoleArn='arn:aws:iam:::role/application_abc/component_xyz/S3Access', RoleSessionName='Bob', DurationSeconds=3600 )", "class SigV4Auth(BaseSigner): \"\"\" Sign a request with Signature V4. \"\"\" REQUIRES_REGION = True def __init__(self, credentials, service_name, region_name): self.credentials = credentials # We initialize these value here so the unit tests can have # valid values. But these will get overriden in ``add_auth`` # later for real requests. self._region_name = region_name if service_name == 'sts': 1 self._service_name = 's3' 2 else: 3 self._service_name = service_name 4", "def _modify_request_before_signing(self, request): if 'Authorization' in request.headers: del request.headers['Authorization'] self._set_necessary_date_headers(request) if self.credentials.token: if 'X-Amz-Security-Token' in request.headers: del request.headers['X-Amz-Security-Token'] request.headers['X-Amz-Security-Token'] = self.credentials.token if not request.context.get('payload_signing_enabled', True): if 'X-Amz-Content-SHA256' in request.headers: del request.headers['X-Amz-Content-SHA256'] request.headers['X-Amz-Content-SHA256'] = UNSIGNED_PAYLOAD 1 else: 2 request.headers['X-Amz-Content-SHA256'] = self.payload(request)", "{ \"Version\":\"2012-10-17\", \"Statement\":[ { \"Effect\":\"Allow\", \"Action\":[\"sts:AssumeRoleWithWebIdentity\",\"sts:TagSession\"], \"Principal\":{\"Federated\":[\"arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart\"]}, \"Condition\":{\"StringEquals\":{\"localhost:8080/auth/realms/quickstart:sub\":\"test\"}} }] }", "client.put_bucket_notification_configuration( Bucket=bucket_name, NotificationConfiguration={ 'TopicConfigurations': [ { 'Id': notification_name, 'TopicArn': topic_arn, 'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*'] }]})", "Get / BUCKET ?notification= NOTIFICATION_ID HTTP/1.1 Host: cname.domain.com Date: date Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "Get /testbucket?notification=testnotificationID HTTP/1.1 Host: cname.domain.com Date: date Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "<NotificationConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <TopicConfiguration> <Id></Id> <Topic></Topic> <Event></Event> <Filter> <S3Key> <FilterRule> <Name></Name> <Value></Value> </FilterRule> </S3Key> <S3Metadata> <FilterRule> <Name></Name> <Value></Value> </FilterRule> </S3Metadata> <S3Tags> <FilterRule> <Name></Name> <Value></Value> </FilterRule> </S3Tags> 
</Filter> </TopicConfiguration> </NotificationConfiguration>", "DELETE / BUCKET ?notification= NOTIFICATION_ID HTTP/1.1", "DELETE /testbucket?notification=testnotificationID HTTP/1.1", "GET /mybucket HTTP/1.1 Host: cname.domain.com", "GET / HTTP/1.1 Host: mybucket.cname.domain.com", "GET / HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?max-keys=25 HTTP/1.1 Host: cname.domain.com", "PUT / BUCKET HTTP/1.1 Host: cname.domain.com x-amz-acl: public-read-write Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "DELETE / BUCKET HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "<LifecycleConfiguration> <Rule> <Prefix/> <Status>Enabled</Status> <Expiration> <Days>10</Days> </Expiration> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <Prefix>keypre/</Prefix> </Filter> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <Prefix>keypre/</Prefix> </Filter> </Rule> <Rule> <Status>Enabled</Status> <Filter> <Prefix>mypre/</Prefix> </Filter> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <Tag> <Key>key</Key> <Value>value</Value> </Tag> </Filter> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <And> <Prefix>key-prefix</Prefix> <Tag> <Key>key1</Key> <Value>value1</Value> </Tag> <Tag> <Key>key2</Key> <Value>value2</Value> </Tag> </And> </Filter> </Rule> </LifecycleConfiguration>", "GET / BUCKET ?lifecycle HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?lifecycle HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET <LifecycleConfiguration> <Rule> <Expiration> <Days>10</Days> </Expiration> </Rule> <Rule> </Rule> </LifecycleConfiguration>", "DELETE / BUCKET ?lifecycle HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?location HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?versioning HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?versioning HTTP/1.1", "GET / BUCKET ?acl HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?acl HTTP/1.1", "GET / BUCKET ?cors HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?cors HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "DELETE / BUCKET ?cors HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?versions HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "HEAD / BUCKET HTTP/1.1 Host: cname.domain.com Date: date Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?uploads HTTP/1.1", "cat > examplepol { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": {\"AWS\": [\"arn:aws:iam::usfolks:user/fred\"]}, \"Action\": \"s3:PutObjectAcl\", \"Resource\": [ \"arn:aws:s3:::happybucket/*\" ] }] } s3cmd setpolicy examplepol s3://happybucket s3cmd delpolicy s3://happybucket", "GET / BUCKET ?requestPayment HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", 
"PUT / BUCKET ?requestPayment HTTP/1.1 Host: cname.domain.com", "https://rgw.domain.com/tenant:bucket", "from boto.s3.connection import S3Connection, OrdinaryCallingFormat c = S3Connection( aws_access_key_id=\"TESTER\", aws_secret_access_key=\"test123\", host=\"rgw.domain.com\", calling_format = OrdinaryCallingFormat() ) bucket = c.get_bucket(\"tenant:bucket\")", "GET / BUCKET / OBJECT HTTP/1.1", "GET / BUCKET / OBJECT ?versionId= VERSION_ID HTTP/1.1", "HEAD / BUCKET / OBJECT HTTP/1.1", "HEAD / BUCKET / OBJECT ?versionId= VERSION_ID HTTP/1.1", "PUT / BUCKET / OBJECT HTTP/1.1", "DELETE / BUCKET / OBJECT HTTP/1.1", "DELETE / BUCKET / OBJECT ?versionId= VERSION_ID HTTP/1.1", "POST / BUCKET / OBJECT ?delete HTTP/1.1", "GET / BUCKET / OBJECT ?acl HTTP/1.1", "GET / BUCKET / OBJECT ?versionId= VERSION_ID &acl HTTP/1.1", "PUT / BUCKET / OBJECT ?acl", "PUT / DEST_BUCKET / DEST_OBJECT HTTP/1.1 x-amz-copy-source: SOURCE_BUCKET / SOURCE_OBJECT", "POST / BUCKET / OBJECT HTTP/1.1", "OPTIONS / OBJECT HTTP/1.1", "POST / BUCKET / OBJECT ?uploads", "PUT / BUCKET / OBJECT ?partNumber=&uploadId= UPLOAD_ID HTTP/1.1", "GET / BUCKET / OBJECT ?uploadId= UPLOAD_ID HTTP/1.1", "POST / BUCKET / OBJECT ?uploadId= UPLOAD_ID HTTP/1.1", "PUT / BUCKET / OBJECT ?partNumber=PartNumber&uploadId= UPLOAD_ID HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", ".Additional Resources", "DELETE / BUCKET / OBJECT ?uploadId= UPLOAD_ID HTTP/1.1" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/developer_guide/ceph-object-gateway-and-the-s3-api
Chapter 18. Installing and managing Windows virtual machines
Chapter 18. Installing and managing Windows virtual machines To use Microsoft Windows as the guest operating system in your virtual machines (VMs) on a RHEL 8 host, Red Hat recommends taking extra steps to ensure these VMs run correctly. For this purpose, the following sections provide information about installing and optimizing Windows VMs on the host, as well as installing and configuring drivers in these VMs. 18.1. Installing Windows virtual machines You can create a fully-virtualized Windows machine on a RHEL 8 host, launch the graphical Windows installer inside the virtual machine (VM), and optimize the installed Windows guest operating system (OS). To create the VM and to install the Windows guest OS, use the virt-install command or the RHEL 8 web console. Prerequisites A Windows OS installation source, which can be one of the following, and be available locally or on a network: An ISO image of an installation medium A disk image of an existing VM installation A storage medium with the KVM virtio drivers. To create this medium, see Preparing virtio driver installation media on a host machine . If you are installing Windows 11, the edk2-ovmf , swtpm and libtpms packages must be installed on the host. Procedure Create the VM. For instructions, see Creating virtual machines , but keep in mind the following specifics. If using the virt-install utility to create the VM, add the following options to the command: The storage medium with the KVM virtio drivers. For example: The Windows version you will install. For example, for Windows 10 and 11: For a list of available Windows versions and the appropriate option, use the following command: If you are installing Windows 11, enable Unified Extensible Firmware Interface (UEFI) and virtual Trusted Platform Module (vTPM): If using the web console to create the VM, specify your version of Windows in the Operating system field of the Create new virtual machine window. If you are installing Windows versions prior to Windows 11 and Windows Server 2022, start the installation by clicking Create and run . If you are installing Windows 11, or you want to use additional Windows Server 2022 features, confirm by clicking Create and edit and enable UEFI and vTPM using the CLI: Open the VM's XML configuration: Add the firmware='efi' option to the os element: Add the tpm device inside the devices element: Start the Windows installation by clicking Install in the Virtual machines table. Install the Windows OS in the VM. For information about how to install a Windows operating system, refer to the relevant Microsoft installation documentation. If using the web console to create the VM, attach the storage medium with virtio drivers to the VM by using the Disks interface. For instructions, see Attaching existing disks to virtual machines by using the web console . Configure KVM virtio drivers in the Windows guest OS. For details, see Installing KVM paravirtualized drivers for Windows virtual machines . Additional resources Optimizing Windows virtual machines Enabling standard hardware security on Windows virtual machines Sample virtual machine XML configuration 18.2. Optimizing Windows virtual machines When using Microsoft Windows as a guest operating system in a virtual machine (VM) hosted in RHEL 8, the performance of the guest may be negatively impacted. Therefore, Red Hat recommends optimizing your Windows VMs by doing any combination of the following: Using paravirtualized drivers. 
For more information, see Installing KVM paravirtualized drivers for Windows virtual machines . Enabling Hyper-V enlightenments. For more information, see Enabling Hyper-V enlightenments . Configuring NetKVM driver parameters. For more information, see Configuring NetKVM driver parameters . Optimizing or disabling Windows background processes. For more information, see Optimizing background processes on Windows virtual machines . 18.2.1. Installing KVM paravirtualized drivers for Windows virtual machines The primary method of improving the performance of your Windows virtual machines (VMs) is to install KVM paravirtualized ( virtio ) drivers for Windows on the guest operating system. Note The virtio-win drivers are certified (WHQL) against the latest releases of Windows 10 and 11, available at the time of the respective virtio-win release. However, virtio-win drivers are generally tested and expected to function correctly on builds of Windows 10 and 11 as well. To install the drivers on a Windows VM, perform the following actions: Prepare the install media on the host machine. For more information, see Preparing virtio driver installation media on a host machine . Attach the install media to an existing Windows VM, or attach it when creating a new Windows VM. For more information, see Installing Windows virtual machines on RHEL . Install the virtio drivers on the Windows guest operating system. For more information, see Installing virtio drivers on a Windows guest . Install the QEMU Guest Agent on the Windows guest operating system. For more information, see Installing QEMU Guest Agent on a Windows guest . 18.2.1.1. How Windows virtio drivers work Paravirtualized drivers enhance the performance of virtual machines (VMs) by decreasing I/O latency and increasing throughput to almost bare-metal levels. Red Hat recommends that you use paravirtualized drivers for VMs that run I/O-heavy tasks and applications. virtio drivers are KVM's paravirtualized device drivers, available for Windows VMs running on KVM hosts. These drivers are provided by the virtio-win package, which includes drivers for: Block (storage) devices Network interface controllers Video controllers Memory ballooning device Paravirtual serial port device Entropy source device Paravirtual panic device Input devices, such as mice, keyboards, or tablets A small set of emulated devices Note For additional information about emulated, virtio , and assigned devices, refer to Managing virtual devices . By using KVM virtio drivers, the following Microsoft Windows versions are expected to run similarly to physical systems: Windows Server versions: See Certified guest operating systems for Red Hat Enterprise Linux with KVM in the Red Hat Knowledgebase. Windows Desktop (non-server) versions: Windows 10 (32-bit and 64-bit versions) 18.2.1.2. Preparing virtio driver installation media on a host machine To install or update KVM virtio drivers on a Windows virtual machine (VM), you must first prepare the virtio driver installation media on the host machine. To do so, attach the .iso file, provided by the virtio-win package, as a storage device to the Windows VM. Prerequisites Ensure that virtualization is enabled in your RHEL 8 host system. For more information, see Enabling virtualization . Ensure that you have root access privileges to the VM. Procedure Refresh your subscription data: Get the latest version of the virtio-win package. 
If virtio-win is not installed: If virtio-win is installed: If the installation succeeds, the virtio-win driver files are available in the /usr/share/virtio-win/ directory. These include ISO files and a drivers directory with the driver files in directories, one for each architecture and supported Windows version. Attach the virtio-win.iso file as a storage device to the Windows VM. When creating a new Windows VM , attach the file by using the virt-install command options. When installing the drivers on an existing Windows VM, attach the file as a CD-ROM by using the virt-xml utility: Additional resources Installing the virtio driver on the Windows guest operating system . 18.2.1.3. Installing virtio drivers on a Windows guest To install KVM virtio drivers on a Windows guest operating system, you must add a storage device that contains the drivers (either when creating the virtual machine (VM) or afterwards) and install the drivers in the Windows guest operating system. This procedure provides instructions to install the drivers by using the graphical interface. You can also use the Microsoft Windows Installer (MSI) command-line interface. Prerequisites An installation medium with the KVM virtio drivers must be attached to the VM. For instructions on preparing the medium, see Preparing virtio driver installation media on a host machine . Procedure In the Windows guest operating system, open the File Explorer application. Click This PC . In the Devices and drives pane, open the virtio-win medium. Based on the operating system installed on the VM, run one of the installers: If using a 32-bit operating system, run the virtio-win-gt-x86.msi installer. If using a 64-bit operating system, run the virtio-win-gt-x64.msi installer. In the Virtio-win-driver-installer setup wizard that opens, follow the displayed instructions until you reach the Custom Setup step. In the Custom Setup window, select the device drivers you want to install. The recommended driver set is selected automatically, and the descriptions of the drivers are displayed on the right of the list. Click , then click Install . After the installation completes, click Finish . Reboot the VM to complete the driver installation. Verification On your Windows VM, navigate to the Device Manager : Click Start Search for Device Manager Ensure that the devices are using the correct drivers: Click a device to open the Driver Properties window. Navigate to the Driver tab. Click Driver Details . steps If you installed the NetKVM driver, you might also need to configure the Windows guest's networking parameters. For more information, see Configuring NetKVM driver parameters . 18.2.1.4. Updating virtio drivers on a Windows guest To update KVM virtio drivers on a Windows guest operating system (OS), you can use the Windows Update service, if the Windows OS version supports it. If it does not, reinstall the drivers from virtio driver installation media attached to the Windows virtual machine (VM). Prerequisites A Windows guest OS with virtio drivers installed . If not using Windows Update , an installation medium with up-to-date KVM virtio drivers must be attached to the Windows VM. For instructions on preparing the medium, see Preparing virtio driver installation media on a host machine . Procedure 1: Updating the drivers by using Windows Update On Windows 10, Windows Server 2016 and later operating systems, check if the driver updates are available by using the Windows Update graphical interface: Start the Windows VM and log in to its guest OS. 
Navigate to the Optional updates page: Settings Windows Update Advanced options Optional updates Install all updates from Red Hat, Inc. Procedure 2: Updating the drivers by reinstalling them On operating systems prior to Windows 10 and Windows Server 2016, or if the OS does not have access to Windows Update , reinstall the drivers. This restores the Windows guest OS network configuration to default (DHCP). If you want to preserve a customized network configuration, you also need to create a backup and restore it by using the netsh utility: Start the Windows VM and log in to its guest OS. Open the Windows Command Prompt: Use the Super + R keyboard shortcut. In the window that appears, type cmd and press Ctrl + Shift + Enter to run as administrator. Back up the OS network configuration by using the Windows Command Prompt: Reinstall KVM virtio drivers from the attached installation media. Do one of the following: Reinstall the drivers by using the Windows Command Prompt, where X is the installation media drive letter. The following commands install all virtio drivers. If using a 64-bit vCPU: C:\WINDOWS\system32\msiexec.exe /i X :\virtio-win-gt-x64.msi /passive /norestart If using a 32-bit vCPU: Reinstall the drivers using the graphical interface without rebooting the VM. Restore the OS network configuration using the Windows Command Prompt: Reboot the VM to complete the driver installation. Additional resources Microsoft documentation on Windows Update 18.2.1.5. Enabling QEMU Guest Agent on Windows guests To allow a RHEL host to perform a certain subset of operations on a Windows virtual machine (VM), you must enable the QEMU Guest Agent (GA). To do so, add a storage device that contains the QEMU Guest Agent installer to an existing VM or when creating a new VM, and install the drivers on the Windows guest operating system. To install the Guest Agent (GA) by using the graphical interface, see the procedure below. To install the GA on the command line, use the Microsoft Windows Installer (MSI) . Prerequisites An installation medium with the Guest Agent is attached to the VM. For instructions on preparing the medium, see Preparing virtio driver installation media on a host machine . Procedure In the Windows guest operating system, open the File Explorer application. Click This PC . In the Devices and drives pane, open the virtio-win medium. Open the guest-agent folder. Based on the operating system installed on the VM, run one of the following installers: If using a 32-bit operating system, run the qemu-ga-i386.msi installer. If using a 64-bit operating system, run the qemu-ga-x86_64.msi installer. Optional: If you want to use the para-virtualized serial driver ( virtio-serial ) as the communication interface between the host and the Windows guest, verify that the virtio-serial driver is installed on the Windows guest. For more information about installing virtio drivers, see: Installing virtio drivers on a Windows guest . Verification On your Windows VM, navigate to the Services window. Computer Management > Services Ensure that the status of the QEMU Guest Agent service is Running . Additional resources Virtualization features that require QEMU Guest Agent 18.2.2. Enabling Hyper-V enlightenments Hyper-V enlightenments provide a method for KVM to emulate the Microsoft Hyper-V hypervisor. This improves the performance of Windows virtual machines. The following sections provide information about the supported Hyper-V enlightenments and how to enable them. 18.2.2.1. 
Enabling Hyper-V enlightenments on a Windows virtual machine Hyper-V enlightenments provide better performance in a Windows virtual machine (VM) running in a RHEL 8 host. For instructions on how to enable them, see the following. Procedure Use the virsh edit command to open the XML configuration of the VM. For example: Add the following <hyperv> sub-section to the <features> section of the XML: <features> [...] <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vpindex state='on'/> <runtime state='on' /> <synic state='on'/> <stimer state='on'> <direct state='on'/> </stimer> <frequencies state='on'/> </hyperv> [...] </features> If the XML already contains a <hyperv> sub-section, modify it as shown above. Change the clock section of the configuration as follows: <clock offset='localtime'> ... <timer name='hypervclock' present='yes'/> </clock> Save and exit the XML configuration. If the VM is running, restart it. Verification Use the virsh dumpxml command to display the XML configuration of the running VM. If it includes the following segments, the Hyper-V enlightenments are enabled on the VM. <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vpindex state='on'/> <runtime state='on' /> <synic state='on'/> <stimer state='on'> <direct state='on'/> </stimer> <frequencies state='on'/> </hyperv> <clock offset='localtime'> ... <timer name='hypervclock' present='yes'/> </clock> 18.2.2.2. Configurable Hyper-V enlightenments You can configure certain Hyper-V features to optimize Windows VMs. The following table provides information about these configurable Hyper-V features and their values. Table 18.1. Configurable Hyper-V features Enlightenment Description Values evmcs Implements paravirtualized protocol between L0 (KVM) and L1 (Hyper-V) hypervisors, which enables faster L2 exits to the hypervisor. Note This feature is exclusive to Intel processors. on, off frequencies Enables Hyper-V frequency Machine Specific Registers (MSRs). on, off ipi Enables paravirtualized inter processor interrupts (IPI) support. on, off reenlightenment Notifies when there is a time stamp counter (TSC) frequency change which only occurs during migration. It also allows the guest to keep using the old frequency until it is ready to switch to the new one. on, off relaxed Disables a Windows sanity check that commonly results in a BSOD when the VM is running on a heavily loaded host. This is similar to the Linux kernel option no_timer_check, which is automatically enabled when Linux is running on KVM. on, off runtime Sets processor time spent on running the guest code, and on behalf of the guest code. on, off spinlocks Used by a VM's operating system to notify Hyper-V that the calling virtual processor is attempting to acquire a resource that is potentially held by another virtual processor within the same partition. Used by Hyper-V to indicate to the virtual machine's operating system the number of times a spinlock acquisition should be attempted before indicating an excessive spin situation to Hyper-V. on, off stimer Enables synthetic timers for virtual processors. Note that certain Windows versions revert to using HPET (or even RTC when HPET is unavailable) when this enlightenment is not provided, which can lead to significant CPU consumption, even when the virtual CPU is idle. on, off stimer-direct Enables synthetic timers when an expiration event is delivered via a normal interrupt. on, off. 
synic Together with stimer, activates the synthetic timer. Windows 8 uses this feature in periodic mode. on, off time Enables the following Hyper-V-specific clock sources available to the VM, MSR-based 82 Hyper-V clock source (HV_X64_MSR_TIME_REF_COUNT, 0x40000020) Reference TSC 83 page which is enabled via MSR (HV_X64_MSR_REFERENCE_TSC, 0x40000021) on, off tlbflush Flushes the TLB of the virtual processors. on, off vapic Enables virtual APIC, which provides accelerated MSR access to the high-usage, memory-mapped Advanced Programmable Interrupt Controller (APIC) registers. on, off vpindex Enables virtual processor index. on, off 18.2.3. Configuring NetKVM driver parameters After the NetKVM driver is installed, you can configure it to better suit your environment. The parameters listed in the following procedure can be configured by using the Windows Device Manager ( devmgmt.msc ). Important Modifying the driver's parameters causes Windows to reload that driver. This interrupts existing network activity. Prerequisites The NetKVM driver is installed on the virtual machine. For more information, see Installing KVM paravirtualized drivers for Windows virtual machines . Procedure Open Windows Device Manager. For information about opening Device Manager, refer to the Windows documentation. Locate the Red Hat VirtIO Ethernet Adapter . In the Device Manager window, click + to Network adapters. Under the list of network adapters, double-click Red Hat VirtIO Ethernet Adapter . The Properties window for the device opens. View the device parameters. In the Properties window, click the Advanced tab. Modify the device parameters. Click the parameter you want to modify. Options for that parameter are displayed. Modify the options as needed. For information about the NetKVM parameter options, refer to NetKVM driver parameters . Click OK to save the changes. 18.2.4. NetKVM driver parameters The following table provides information about the configurable NetKVM driver logging parameters. Table 18.2. Logging parameters Parameter Description 2 Logging.Enable A Boolean value that determines whether logging is enabled. The default value is Enabled. Logging.Level An integer that defines the logging level. As the integer increases, so does the verbosity of the log. The default value is 0 (errors only). 1-2 adds configuration messages. 3-4 adds packet flow information. 5-6 adds interrupt and DPC level trace information. Note High logging levels will slow down your virtual machine. The following table provides information about the configurable NetKVM driver initial parameters. Table 18.3. Initial parameters Parameter Description Assign MAC A string that defines the locally-administered MAC address for the paravirtualized NIC. This is not set by default. Init.Do802.1PQ A Boolean value that enables Priority/VLAN tag population and removal support. The default value is Enabled. Init.MaxTxBuffers An integer that represents the number of TX ring descriptors that will be allocated. The value is limited by the size of Tx queue of QEMU. The default value is 1024. Valid values are: 16, 32, 64, 128, 256, 512, and 1024. Init.MaxRxBuffers An integer that represents the number of RX ring descriptors that will be allocated. The value is limited by the size of Tx queue of QEMU. The default value is 1024. Valid values are: 16, 32, 64, 128, 256, 512, 1024, 2048, and 4096. Offload.Tx.Checksum Specifies the TX checksum offloading capability. 
In Red Hat Enterprise Linux 8, the valid values for this parameter are: All (the default) which enables IP, TCP, and UDP checksum offloading for both IPv4 and IPv6 TCP/UDP(v4,v6) which enables TCP and UDP checksum offloading for both IPv4 and IPv6 TCP/UDP(v4) which enables TCP and UDP checksum offloading for IPv4 only TCP(v4) which enables only TCP checksum offloading for IPv4 only Offload.Rx.Checksum Specifies the RX checksum offloading capability. In Red Hat Enterprise Linux 8, the valid values for this parameter are: All (the default) which enables IP, TCP, and UDP checksum offloading for both IPv4 and IPv6 TCP/UDP(v4,v6) which enables TCP and UDP checksum offloading for both IPv4 and IPv6 TCP/UDP(v4) which enables TCP and UDP checksum offloading for IPv4 only TCP(v4) which enables only TCP checksum offloading for IPv4 only Offload.Tx.LSO Specifies the TX large segments offloading (LSO) capability. In Red Hat Enterprise Linux 8, the valid values for this parameter are: Maximal (the default) which enables LSO offloading for both TCPv4 and TCPv6 IPv4 which enables LSO offloading for TCPv4 only Disable which disables LSO offloading MinRxBufferPercent Specifies minimal amount of available buffers in RX queue in percent of total amount of RX buffers. If the actual number of available buffers is lower than that value, the NetKVM driver indicates low resources condition to the operating system (requesting it to return the RX buffers as soon as possible) Minimum value (default) - 0 , meaning the driver never indicates low resources condition. Maximum value - 100 , meaning the driver indicates low resources condition all the time. Additional resources INF enumeration keywords INF keywords that can be edited 18.2.5. Optimizing background processes on Windows virtual machines To optimize the performance of a virtual machine (VM) running a Windows OS, you can configure or disable a variety of Windows processes. Warning Certain processes might not work as expected if you change their configuration. Procedure You can optimize your Windows VMs by performing any combination of the following: Remove unused devices, such as USBs or CD-ROMs, and disable the ports. Disable background services, such as SuperFetch and Windows Search. For more information about stopping services, see Disabling system services or Stop-Service . Disable useplatformclock . To do so, run the following command, Review and disable unnecessary scheduled tasks, such as scheduled disk defragmentation. For more information about how to do so, see Disable Scheduled Tasks . Make sure the disks are not encrypted. Reduce periodic activity of server applications. You can do so by editing the respective timers. For more information, see Multimedia Timers . Close the Server Manager application on the VM. Disable the antivirus software. Note that disabling the antivirus might compromise the security of the VM. Disable the screen saver. Keep the Windows OS on the sign-in screen when not in use. 18.3. Enabling standard hardware security on Windows virtual machines To secure Windows virtual machines (VMs), you can enable basic level security by using the standard hardware capabilities of the Windows device. Prerequisites Make sure you have installed the latest WHQL certified VirtIO drivers. Make sure the VM's firmware supports UEFI boot. Install the edk2-OVMF package on your host machine. Install the vTPM packages on your host machine. Make sure the VM is using the Q35 machine architecture. Make sure you have the Windows installation media. 
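Several of these prerequisites can be checked from the RHEL 8 host before starting the procedure. The following sketch is only an illustration: the VM name win11-vm is a placeholder, and the grep patterns simply surface the relevant lines of the domain XML.

# Confirm that the host packages providing UEFI firmware and the emulated TPM are installed.
rpm -q edk2-ovmf swtpm libtpms

# Confirm that the VM uses the Q35 machine type and EFI firmware ("win11-vm" is a placeholder name).
virsh dumpxml win11-vm | grep -E "machine=|firmware=|loader"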
Procedure Enable TPM 2.0 by adding the following parameters to the <devices> section in the VM's XML configuration. <devices> [...] <tpm model='tpm-crb'> <backend type='emulator' version='2.0'/> </tpm> [...] </devices> Install Windows in UEFI mode. For more information about how to do so, see Creating a SecureBoot virtual machine . Install the VirtIO drivers on the Windows VM. For more information about how to do so, see Installing virtio drivers on a Windows guest . In UEFI, enable Secure Boot. For more information about how to do so, see Secure Boot . Verification Ensure that the Device Security page on your Windows machine ( Settings > Update & Security > Windows Security > Device Security ) displays the following message: 18.4. Next steps To share files between your RHEL 8 host and its Windows VMs, you can use NFS .
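Putting the options from Section 18.1 together, a single virt-install invocation for a Windows 11 guest might look like the following sketch. The VM name, memory, vCPU count, disk size, and installation ISO path are placeholders; only the virtio-win disk, --os-variant, --boot, and --tpm options come from this chapter, and the exact --os-variant value should be confirmed with osinfo-query os.

# Hypothetical virt-install command for a Windows 11 guest with the virtio driver
# ISO attached and UEFI plus an emulated TPM 2.0 device enabled.
virt-install \
  --name win11-vm \
  --memory 8192 \
  --vcpus 4 \
  --disk size=80 \
  --cdrom /home/user/win11.iso \
  --disk path=/usr/share/virtio-win/virtio-win.iso,device=cdrom \
  --os-variant win11 \
  --boot uefi \
  --tpm model=tpm-crb,backend.type=emulator,backend.version=2.0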
[ "--disk path= /usr/share/virtio-win/virtio-win.iso ,device=cdrom", "--os-variant win10", "osinfo-query os", "--boot uefi --tpm model=tpm-crb,backend.type=emulator,backend.version=2.0", "virsh edit windows-vm", "<os firmware='efi' > <type arch='x86_64' machine='pc-q35-6.2'>hvm</type> <boot dev='hd'/> </os>", "<devices> <tpm model='tpm-crb'> <backend type='emulator' version='2.0'/> </tpm> </devices>", "subscription-manager refresh All local data refreshed", "yum install -y virtio-win", "yum upgrade -y virtio-win", "ls /usr/share/virtio-win/ drivers/ guest-agent/ virtio-win-1.9.9.iso virtio-win.iso", "virt-xml WindowsVM --add-device --disk virtio-win.iso,device=cdrom Domain 'WindowsVM' defined successfully.", "C:\\WINDOWS\\system32\\netsh dump > backup.txt", "C:\\WINDOWS\\system32\\msiexec.exe /i X :\\virtio-win-gt-x86.msi /passive /norestart", "C:\\WINDOWS\\system32\\netsh -f backup.txt", "virsh edit windows-vm", "<features> [...] <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vpindex state='on'/> <runtime state='on' /> <synic state='on'/> <stimer state='on'> <direct state='on'/> </stimer> <frequencies state='on'/> </hyperv> [...] </features>", "<clock offset='localtime'> <timer name='hypervclock' present='yes'/> </clock>", "<hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vpindex state='on'/> <runtime state='on' /> <synic state='on'/> <stimer state='on'> <direct state='on'/> </stimer> <frequencies state='on'/> </hyperv> <clock offset='localtime'> <timer name='hypervclock' present='yes'/> </clock>", "bcdedit /set useplatformclock No", "{PackageManagerCommand} install edk2-ovmf", "{PackageManagerCommand} install swtpm libtpms", "<devices> [...] <tpm model='tpm-crb'> <backend type='emulator' version='2.0'/> </tpm> [...] </devices>", "Your device meets the requirements for standard hardware security." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_virtualization/installing-and-managing-windows-virtual-machines-on-rhel_configuring-and-managing-virtualization
4.23. Intel Modular
4.23. Intel Modular Table 4.24, "Intel Modular" lists the fence device parameters used by fence_intelmodular , the fence agent for Intel Modular. Table 4.24. Intel Modular luci Field cluster.conf Attribute Description Name name A name for the Intel Modular device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string; the default value is private . SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Figure 4.18, "Intel Modular" shows the configuration screen for adding an Intel Modular fence device. Figure 4.18. Intel Modular The following command creates a fence device instance for an Intel Modular device: The following is the cluster.conf entry for the fence_intelmodular device:
[ "ccs -f cluster.conf --addfencedev intelmodular1 agent=fence_intelmodular community=private ipaddr=192.168.0.1 login=root passwd=password123 snmp_priv_passwd=snmpasswd123 power_wait=60 udpport=161", "<fencedevices> <fencedevice agent=\"fence_intelmodular\" community=\"private\" ipaddr=\"192.168.0.1\" login=\"root\" name=\"intelmodular1\" passwd=\"password123\" power_wait=\"60\" snmp_priv_passwd=\"snmpasswd123\" udpport=\"161\"/> </fencedevices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-intelmodular-ca
Chapter 1. About Red Hat OpenShift GitOps
Chapter 1. About Red Hat OpenShift GitOps Red Hat OpenShift GitOps is an Operator that uses Argo CD as the declarative GitOps engine. It enables GitOps workflows across multicluster OpenShift and Kubernetes infrastructure. Using Red Hat OpenShift GitOps, administrators can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles. Red Hat OpenShift GitOps is based on the open source project Argo CD and provides a feature set similar to the upstream project, with additional automation, integration into Red Hat OpenShift Container Platform, and the benefits of Red Hat's enterprise support, quality assurance, and focus on enterprise security. Note Because Red Hat OpenShift GitOps releases on a different cadence from OpenShift Container Platform, the Red Hat OpenShift GitOps documentation is now available as a separate documentation set at Red Hat OpenShift GitOps .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/gitops/about-redhat-openshift-gitops
Chapter 9. Premigration checklists
Chapter 9. Premigration checklists Before you migrate your application workloads with the Migration Toolkit for Containers (MTC), review the following checklists. 9.1. Resources ❏ If your application uses an internal service network or an external route for communicating with services, the relevant route exists. ❏ If your application uses cluster-level resources, you have re-created them on the target cluster. ❏ You have excluded persistent volumes (PVs), image streams, and other resources that you do not want to migrate. ❏ PV data has been backed up in case an application displays unexpected behavior after migration and corrupts the data. 9.2. Source cluster ❏ The cluster meets the minimum hardware requirements . ❏ You have installed the correct legacy Migration Toolkit for Containers Operator version: operator-3.7.yml on OpenShift Container Platform version 3.7. operator.yml on OpenShift Container Platform versions 3.9 to 4.5. ❏ All nodes have an active OpenShift Container Platform subscription. ❏ You have performed all the run-once tasks . ❏ You have performed all the environment health checks . ❏ You have checked for PVs with abnormal configurations stuck in a Terminating state by running the following command: $ oc get pv ❏ You have checked for pods whose status is other than Running or Completed by running the following command: $ oc get pods --all-namespaces | egrep -v 'Running | Completed' ❏ You have checked for pods with a high restart count by running the following command: $ oc get pods --all-namespaces --field-selector=status.phase=Running \ -o json | jq '.items[]|select(any( .status.containerStatuses[]; \ .restartCount > 3))|.metadata.name' Even if the pods are in a Running state, a high restart count might indicate underlying problems. ❏ You have removed old builds, deployments, and images from each namespace to be migrated by pruning . ❏ The OpenShift image registry uses a supported storage type . ❏ Direct image migration only: The OpenShift image registry is exposed to external traffic. ❏ You can read and write images to the registry. ❏ The etcd cluster is healthy. ❏ The average API server response time on the source cluster is less than 50 ms. ❏ The cluster certificates are valid for the duration of the migration process. ❏ You have checked for pending certificate-signing requests by running the following command: $ oc get csr -A | grep pending -i ❏ The identity provider is working. ❏ You have set the value of the openshift.io/host.generated annotation parameter to true for each OpenShift Container Platform route, which updates the host name of the route for the target cluster. Otherwise, the migrated routes retain the source cluster host name. 9.3. Target cluster ❏ You have installed Migration Toolkit for Containers Operator version 1.5.1. ❏ All MTC prerequisites are met. ❏ The cluster meets the minimum hardware requirements for the specific platform and installation method, for example, on bare metal . ❏ The cluster has storage classes defined for the storage types used by the source cluster, for example, block volume, file system, or object storage. Note NFS does not require a defined storage class. ❏ The cluster has the correct network configuration and permissions to access external services, for example, databases, source code repositories, container image registries, and CI/CD tools. ❏ External applications and services that use services provided by the cluster have the correct network configuration and permissions to access the cluster.
❏ Internal container image dependencies are met. If an application uses an internal image in the openshift namespace that is not supported by OpenShift Container Platform 4.15, you can manually update the OpenShift Container Platform 3 image stream tag with podman . ❏ The target cluster and the replication repository have sufficient storage space. ❏ The identity provider is working. ❏ DNS records for your application exist on the target cluster. ❏ Certificates that your application uses exist on the target cluster. ❏ You have configured appropriate firewall rules on the target cluster. ❏ You have correctly configured load balancing on the target cluster. ❏ If you migrate objects to an existing namespace on the target cluster that has the same name as the namespace being migrated from the source, the target namespace contains no objects of the same name and type as the objects being migrated. Note Do not create namespaces for your application on the target cluster before migration because this might cause quotas to change. 9.4. Performance ❏ The migration network has a minimum throughput of 10 Gbps. ❏ The clusters have sufficient resources for migration. Note Clusters require additional memory, CPUs, and storage in order to run a migration on top of normal workloads. Actual resource requirements depend on the number of Kubernetes resources being migrated in a single migration plan. You must test migrations in a non-production environment in order to estimate the resource requirements. ❏ The memory and CPU usage of the nodes are healthy. ❏ The etcd disk performance of the clusters has been checked with fio .
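The source-cluster checks in Section 9.2 that are run from the command line can be combined into a single pass. The following sketch wraps the commands shown above; the restart-count threshold of 3 comes from the checklist, and the Terminating filter is added only as a convenience.

#!/usr/bin/env bash
# Quick pre-migration health pass over the source cluster (see Section 9.2).
set -euo pipefail

echo "== PVs stuck in a Terminating state =="
oc get pv | grep Terminating || true

echo "== Pods whose status is not Running or Completed =="
oc get pods --all-namespaces | egrep -v 'Running | Completed' || true

echo "== Running pods with a restart count above 3 =="
oc get pods --all-namespaces --field-selector=status.phase=Running -o json \
  | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'

echo "== Pending certificate-signing requests =="
oc get csr -A | grep pending -i || true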
[ "oc get pv", "oc get pods --all-namespaces | egrep -v 'Running | Completed'", "oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'", "oc get csr -A | grep pending -i" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/migrating_from_version_3_to_4/premigration-checklists-3-4
Chapter 2. Overview of the Ansible Automation Platform 2.4 release
Chapter 2. Overview of the Ansible Automation Platform 2.4 release 2.1. New features and enhancements Ansible Automation Platform 2.4 includes the following enhancements: Previously, the execution environment container images were based on RHEL 8 only. With Ansible Automation Platform 2.4 onwards, the execution environment container images are now also available on RHEL 9. The execution environment includes the following container images: ansible-python-base ansible-python-toolkit ansible-builder ee-minimal ee-supported The ansible-builder project recently released Ansible Builder version 3, a much-improved and simplified approach to creating execution environments. You can use the following configuration YAML keys with Ansible Builder version 3: additional_build_files additional_build_steps build_arg_defaults dependencies images options version Ansible Automation Platform 2.4 and later versions can now run on ARM platforms, including both the control plane and the execution environments. Added an option to configure the SSO logout URL for automation hub if you need to change it from the default value. Updated the ansible-lint RPM package to version 6.14.3. Updated Django for potential denial-of-service vulnerability in file uploads ( CVE-2023-24580 ). Updated sqlparse for ReDOS vulnerability ( CVE-2023-30608 ). Updated Django for potential denial-of-service in Accept-Language headers ( CVE-2023-23969 ). Ansible Automation Platform 2.4 adds the ability to install automation controller, automation hub, and Event-Driven Ansible on IBM Power (ppc64le), IBM Z (s390x), and IBM(R) LinuxONE (s390x) architectures. Additional resources For more information about using Ansible Builder version 3, see Ansible Builder Documentation and Execution Environment Setup Reference . 2.2. Technology Preview Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following are Technology Preview features: Starting with Ansible Automation Platform 2.4, the Platform Resource Operator can be used to create the following resources in automation controller by applying YAML to your OpenShift cluster: Inventories Projects Instance Groups Credentials Schedules Workflow Job Templates Launch Workflows You can now configure the Controller Access Token for each resource with the connection_secret parameter, rather than the tower_auth_secret parameter. This change is compatible with earlier versions, but the tower_auth_secret parameter is now deprecated and will be removed in a future release. Additional resources For the most recent list of Technology Preview features, see Ansible Automation Platform - Preview Features . For information about execution node enhancements on OpenShift deployments, see Managing Capacity With Instances . 2.3. Deprecated and removed features Deprecated functionality is still included in Ansible Automation Platform and continues to be supported. However, the functionality will be removed in a future release of Ansible Automation Platform and is not recommended for new deployments. 
The following functionality was deprecated and removed in Ansible Automation Platform 2.4: On-premise component automation services catalog is now removed from Ansible Automation Platform 2.4 onwards. With the Ansible Automation Platform 2.4 release, the execution environment container image for Ansible 2.9 ( ee-29-rhel-8 ) is no longer loaded into the automation controller configuration by default. Although you can still synchronize content, the use of synclists is deprecated and will be removed in a later release. Instead, private automation hub administrators can upload manually-created requirements files from the rh-certified remote. You can now configure the Controller Access Token for each resource with the connection_secret parameter, rather than the tower_auth_secret parameter. This change is compatible with earlier versions, but the tower_auth_secret parameter is now deprecated and will be removed in a future release. Smart inventories have been deprecated in favor of constructed inventories and will be removed in a future release. 2.4. Bug fixes Ansible Automation Platform 2.4 includes the following bug fixes: Updated the installation program to ensure that collection auto signing cannot be enabled without enabling the collection signing service. Fixed an issue with restoring backups when the installed automation controller version is different from the backup version. Fixed an issue with not adding user defined galaxy-importer settings to galaxy-importer.cfg file. Added missing X-Forwarded-For header information to nginx logs. Removed unnecessary receptor peer name validation when IP address is used as the name. Updated the outdated base_packages.txt file that is included in the bundle installer. Fixed an issue where upgrading the Ansible Automation Platform did not update the nginx package by default. Fixed an issue where an awx user was created without creating an awx group on execution nodes. Fixed the assignment of package version variable to work with flat file inventories. Added a FQDN check for the automation hub hostname required to run the Skopeo commands. Fixed the front end URL for Red Hat Single Sign On (SSO) so it is now properly configured after you specify the sso_redirect_host variable. Fixed the variable precedence for all component nginx_tls_files_remote variables. Fixed the setup.sh script to escalate privileges if necessary for installing Ansible Automation Platform. Fixed an issue when restoring a backup to an automation hub with a different hostname.
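To illustrate the Ansible Builder version 3 keys listed in Section 2.1, a hedged execution-environment.yml sketch follows. The base image reference, dependency entries, and file paths are placeholders; consult the Ansible Builder documentation for the authoritative schema.

---
# Illustrative execution-environment.yml using the version 3 configuration keys.
version: 3

build_arg_defaults:
  ANSIBLE_GALAXY_CLI_COLLECTION_OPTS: '--pre'

images:
  base_image:
    # Placeholder registry; point this at the ee-minimal image available to you.
    name: registry.example.com/ansible-automation-platform/ee-minimal-rhel9:latest

dependencies:
  galaxy:
    collections:
      - community.general
  python:
    - requests
  system:
    - git

additional_build_files:
  # Files copied into the build context under _build/<dest>.
  - src: files/ansible.cfg
    dest: configs

additional_build_steps:
  prepend_base:
    - RUN echo "Customizing the base image"
  append_final:
    - COPY _build/configs/ansible.cfg /etc/ansible/ansible.cfg

options:
  package_manager_path: /usr/bin/microdnf

The image is then built with a command such as ansible-builder build --tag my_ee --file execution-environment.yml, where the tag is again a placeholder.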
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_release_notes/overview_of_the_ansible_automation_platform_2_4_release
12.6. Override Execution Properties
12.6. Override Execution Properties You can override execution properties for any translator in the vdb.xml file: The above XML fragment overrides the oracle translator, setting its RequiresCriteria property to true. Note that the modified translator is available only within the scope of this VDB.
[ "<translator type=\"oracle-override\" name=\"oracle\"> <property name=\"RequiresCriteria\" value=\"true\"/> </translator>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/override_execution_properties
Chapter 3. Deploy using local storage devices
Chapter 3. Deploy using local storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Use this section to deploy OpenShift Data Foundation on VMware where OpenShift Container Platform is already installed. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the steps. Installing Local Storage Operator Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.15 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. 
Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.3. Creating OpenShift Data Foundation cluster on VMware vSphere VMware vSphere supports the following three types of local storage: Virtual machine disk (VMDK) Raw device mapping (RDM) VMDirectPath I/O Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have a minimum of three worker nodes with the same storage type and size attached to each node to use local storage devices on VMware. For VMs on VMware vSphere, ensure the disk.EnableUUID option is set to TRUE . You need to have vCenter account privileges to configure the VMs. For more information, see Required vCenter account privileges . To set the disk.EnableUUID option, use the Advanced option of the VM Options in the Customize hardware tab. For more information, see Installing on vSphere . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Create a new StorageClass using the local storage devices option. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Select one of the following: Disks on all nodes to use the available disks that match the selected filters on all nodes. Disks on selected nodes to use the available disks that match the selected filters only on selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with 3 or more nodes is spread across fewer than the minimum requirement of 3 availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. 
For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one of the following Encryption level : Cluster-wide encryption to encrypt the entire cluster (block and file). StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. 
Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark to it. To verify if flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . In the YAML tab, search for the keys flexibleScaling in spec section and failureDomain in status section. If flexible scaling is true and failureDomain is set to host, the flexible scaling feature is enabled. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide.
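The verification steps above use the web console; the same checks can be made from the command line. The following sketch assumes the default resource name ocs-storagecluster in the openshift-storage namespace.

# The StorageCluster should report a Ready phase.
oc get storagecluster ocs-storagecluster -n openshift-storage

# Inspect flexible scaling and the failure domain, mirroring the YAML check described above.
oc get storagecluster ocs-storagecluster -n openshift-storage \
  -o jsonpath='{.spec.flexibleScaling}{"\n"}{.status.failureDomain}{"\n"}'

# All OpenShift Data Foundation pods should be Running or Completed.
oc get pods -n openshift-storage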
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "spec: flexibleScaling: true [...] status: failureDomain: host" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_on_vmware_vsphere/deploy-using-local-storage-devices-vmware
Chapter 12. Node maintenance
Chapter 12. Node maintenance 12.1. About node maintenance 12.1.1. About node maintenance mode Nodes can be placed into maintenance mode using the oc adm utility, or using NodeMaintenance custom resources (CRs). Note The node-maintenance-operator (NMO) is no longer shipped with OpenShift Virtualization. It is now available to deploy as a standalone Operator from the OperatorHub in the OpenShift Container Platform web console, or by using the OpenShift CLI ( oc ). Placing a node into maintenance marks the node as unschedulable and drains all the virtual machines and pods from it. Virtual machine instances that have a LiveMigrate eviction strategy are live migrated to another node without loss of service. This eviction strategy is configured by default in virtual machine created from common templates but must be configured manually for custom virtual machines. Virtual machine instances without an eviction strategy are shut down. Virtual machines with a RunStrategy of Running or RerunOnFailure are recreated on another node. Virtual machines with a RunStrategy of Manual are not automatically restarted. Important Virtual machines must have a persistent volume claim (PVC) with a shared ReadWriteMany (RWX) access mode to be live migrated. The Node Maintenance Operator watches for new or deleted NodeMaintenance CRs. When a new NodeMaintenance CR is detected, no new workloads are scheduled and the node is cordoned off from the rest of the cluster. All pods that can be evicted are evicted from the node. When a NodeMaintenance CR is deleted, the node that is referenced in the CR is made available for new workloads. Note Using a NodeMaintenance CR for node maintenance tasks achieves the same results as the oc adm cordon and oc adm drain commands using standard OpenShift Container Platform custom resource processing. 12.1.2. Maintaining bare metal nodes When you deploy OpenShift Container Platform on bare metal infrastructure, there are additional considerations that must be taken into account compared to deploying on cloud infrastructure. Unlike in cloud environments where the cluster nodes are considered ephemeral, re-provisioning a bare metal node requires significantly more time and effort for maintenance tasks. When a bare metal node fails, for example, if a fatal kernel error happens or a NIC card hardware failure occurs, workloads on the failed node need to be restarted elsewhere else on the cluster while the problem node is repaired or replaced. Node maintenance mode allows cluster administrators to gracefully power down nodes, moving workloads to other parts of the cluster and ensuring workloads do not get interrupted. Detailed progress and node status details are provided during maintenance. 12.1.3. Additional resources Installing the Node Maintenance Operator by using the CLI Setting a node to maintenance mode Resuming a node from maintenance mode About RunStrategies for virtual machines Virtual machine live migration Configuring virtual machine eviction strategy 12.2. Automatic renewal of TLS certificates All TLS certificates for OpenShift Virtualization components are renewed and rotated automatically. You are not required to refresh them manually. 12.2.1. TLS certificates automatic renewal schedules TLS certificates are automatically deleted and replaced according to the following schedule: KubeVirt certificates are renewed daily. Containerized Data Importer controller (CDI) certificates are renewed every 15 days. MAC pool certificates are renewed every year. 
Automatic TLS certificate rotation does not disrupt any operations. For example, the following operations continue to function without any disruption: Migrations Image uploads VNC and console connections 12.3. Managing node labeling for obsolete CPU models You can schedule a virtual machine (VM) on a node as long as the VM CPU model and policy are supported by the node. 12.3.1. About node labeling for obsolete CPU models The OpenShift Virtualization Operator uses a predefined list of obsolete CPU models to ensure that a node supports only valid CPU models for scheduled VMs. By default, the following CPU models are eliminated from the list of labels generated for the node: Example 12.1. Obsolete CPU models This predefined list is not visible in the HyperConverged CR. You cannot remove CPU models from this list, but you can add to the list by editing the spec.obsoleteCPUs.cpuModels field of the HyperConverged CR. 12.3.2. About node labeling for CPU features Through the process of iteration, the base CPU features in the minimum CPU model are eliminated from the list of labels generated for the node. For example: An environment might have two supported CPU models: Penryn and Haswell . If Penryn is specified as the CPU model for minCPU , each base CPU feature for Penryn is compared to the list of CPU features supported by Haswell . Example 12.2. CPU features supported by Penryn Example 12.3. CPU features supported by Haswell If both Penryn and Haswell support a specific CPU feature, a label is not created for that feature. Labels are generated for CPU features that are supported only by Haswell and not by Penryn . Example 12.4. Node labels created for CPU features after iteration 12.3.3. Configuring obsolete CPU models You can configure a list of obsolete CPU models by editing the HyperConverged custom resource (CR). Procedure Edit the HyperConverged custom resource, specifying the obsolete CPU models in the obsoleteCPUs array. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - "<obsolete_cpu_1>" - "<obsolete_cpu_2>" minCPUModel: "<minimum_cpu_model>" 2 1 Replace the example values in the cpuModels array with obsolete CPU models. Any value that you specify is added to a predefined list of obsolete CPU models. The predefined list is not visible in the CR. 2 Replace this value with the minimum CPU model that you want to use for basic CPU features. If you do not specify a value, Penryn is used by default. 12.4. Preventing node reconciliation Use the skip-node annotation to prevent the node-labeller from reconciling a node. 12.4.1. Using skip-node annotation If you want the node-labeller to skip a node, annotate that node by using the oc CLI. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Annotate the node that you want to skip by running the following command: $ oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1 1 Replace <node_name> with the name of the relevant node to skip. Reconciliation resumes on the next cycle after the node annotation is removed or set to false. 12.4.2. Additional resources Managing node labeling for obsolete CPU models
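If you want to check the outcome of the labeling and skip-node behavior described above, you can inspect the labels on a node directly. The label prefixes cpu-model.node.kubevirt.io and cpu-feature.node.kubevirt.io used below are assumptions about the names the node-labeller applies; verify them in your environment. The CR name and namespace come from the HyperConverged example above.
$ oc get node <node_name> --show-labels | tr ',' '\n' | grep -E 'cpu-(model|feature)\.node\.kubevirt\.io'   # list the CPU model and feature labels applied to the node
$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.obsoleteCPUs}'          # show the obsolete CPU models you have added to the CR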
[ "\"486\" Conroe athlon core2duo coreduo kvm32 kvm64 n270 pentium pentium2 pentium3 pentiumpro phenom qemu32 qemu64", "apic clflush cmov cx16 cx8 de fpu fxsr lahf_lm lm mca mce mmx msr mtrr nx pae pat pge pni pse pse36 sep sse sse2 sse4.1 ssse3 syscall tsc", "aes apic avx avx2 bmi1 bmi2 clflush cmov cx16 cx8 de erms fma fpu fsgsbase fxsr hle invpcid lahf_lm lm mca mce mmx movbe msr mtrr nx pae pat pcid pclmuldq pge pni popcnt pse pse36 rdtscp rtm sep smep sse sse2 sse4.1 sse4.2 ssse3 syscall tsc tsc-deadline x2apic xsave", "aes avx avx2 bmi1 bmi2 erms fma fsgsbase hle invpcid movbe pcid pclmuldq popcnt rdtscp rtm sse4.2 tsc-deadline x2apic xsave", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - \"<obsolete_cpu_1>\" - \"<obsolete_cpu_2>\" minCPUModel: \"<minimum_cpu_model>\" 2", "oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/virtualization/node-maintenance
Chapter 1. Overview of images
Chapter 1. Overview of images 1.1. Understanding containers, images, and image streams Containers, images, and image streams are important concepts to understand when you set out to create and manage containerized software. An image holds a set of software that is ready to run, while a container is a running instance of a container image. An image stream provides a way of storing different versions of the same basic image. Those different versions are represented by different tags on the same image name. 1.2. Images Containers in OpenShift Container Platform are based on OCI- or Docker-formatted container images . An image is a binary that includes all of the requirements for running a single container, as well as metadata describing its needs and capabilities. You can think of it as a packaging technology. Containers only have access to resources defined in the image unless you give the container additional access when creating it. By deploying the same image in multiple containers across multiple hosts and load balancing between them, OpenShift Container Platform can provide redundancy and horizontal scaling for a service packaged into an image. You can use the podman or docker CLI directly to build images, but OpenShift Container Platform also supplies builder images that assist with creating new images by adding your code or configuration to existing images. Because applications develop over time, a single image name can actually refer to many different versions of the same image. Each different image is referred to uniquely by its hash, a long hexadecimal number such as fd44297e2ddb050ec4f... , which is usually shortened to 12 characters, such as fd44297e2ddb . You can create , manage , and use container images. 1.3. Image registry An image registry is a content server that can store and serve container images. For example: registry.redhat.io A registry contains a collection of one or more image repositories, which contain one or more tagged images. Red Hat provides a registry at registry.redhat.io for subscribers. OpenShift Container Platform can also supply its own OpenShift image registry for managing custom container images. 1.4. Image repository An image repository is a collection of related container images and tags identifying them. For example, the OpenShift Container Platform Jenkins images are in the repository: docker.io/openshift/jenkins-2-centos7 1.5. Image tags An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag: registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest . OpenShift Container Platform provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images. 1.6. Image IDs An image ID is a SHA (Secure Hash Algorithm) code that can be used to pull an image. A SHA image ID cannot change. A specific SHA identifier always references the exact same container image content. For example: docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324 1.7. Containers The basic units of OpenShift Container Platform applications are called containers. 
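As a concrete illustration of how tags relate to image streams, the oc tag command mentioned above can alias an external image tag into an image stream in one of your own projects. This is a sketch only: the project name myproject and the target tag stable are illustrative placeholders, not values defined in this document.
$ oc tag registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 myproject/jenkins:stable   # create the image stream tag 'stable' pointing at that exact external image
$ oc describe is/jenkins -n myproject                                                                # inspect the image stream and the SHA each tag currently resolves to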
Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. The word container is defined as a specific running or paused instance of a container image. Many application instances can be running in containers on a single host without visibility into each other's processes, files, network, and so on. Typically, each container provides a single service, often called a micro-service, such as a web server or a database, though containers can be used for arbitrary workloads. The Linux kernel has been incorporating capabilities for container technologies for years. The Docker project developed a convenient management interface for Linux containers on a host. More recently, the Open Container Initiative has developed open standards for container formats and container runtimes. OpenShift Container Platform and Kubernetes add the ability to orchestrate OCI- and Docker-formatted containers across multi-host installations. Though you do not directly interact with container runtimes when using OpenShift Container Platform, understanding their capabilities and terminology is important for understanding their role in OpenShift Container Platform and how your applications function inside of containers. Tools such as podman can be used to replace docker command-line tools for running and managing containers directly. Using podman , you can experiment with containers separately from OpenShift Container Platform. 1.8. Why use imagestreams An image stream and its associated tags provide an abstraction for referencing container images from within OpenShift Container Platform. The image stream and its tags allow you to see what images are available and ensure that you are using the specific image you need even if the image in the repository changes. Image streams do not contain actual image data, but present a single virtual view of related images, similar to an image repository. You can configure builds and deployments to watch an image stream for notifications when new images are added and react by performing a build or deployment, respectively. For example, if a deployment is using a certain image and a new version of that image is created, a deployment could be automatically performed to pick up the new version of the image. However, if the image stream tag used by the deployment or build is not updated, then even if the container image in the container image registry is updated, the build or deployment continues using the previous, presumably known good image. The source images can be stored in any of the following: OpenShift Container Platform's integrated registry. An external registry, for example registry.redhat.io or quay.io. Other image streams in the OpenShift Container Platform cluster. When you define an object that references an image stream tag, such as a build or deployment configuration, you point to an image stream tag and not the repository. When you build or deploy your application, OpenShift Container Platform queries the repository using the image stream tag to locate the associated ID of the image and uses that exact image. The image stream metadata is stored in the etcd instance along with other cluster information. Using image streams has several significant benefits: You can tag, rollback a tag, and quickly deal with images, without having to re-push using the command line. You can trigger builds and deployments when a new image is pushed to the registry.
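For example, to experiment locally with the Jenkins image used earlier in this chapter, a minimal podman session might look like the following. The container name jenkins-test is arbitrary.
$ podman pull docker.io/openshift/jenkins-2-centos7                          # fetch the image to local storage
$ podman run -d --name jenkins-test docker.io/openshift/jenkins-2-centos7    # start a container from the image in the background
$ podman ps                                                                  # confirm the container is running
$ podman stop jenkins-test && podman rm jenkins-test                         # stop and remove the container when finished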
Also, OpenShift Container Platform has generic triggers for other resources, such as Kubernetes objects. You can mark a tag for periodic re-import. If the source image has changed, that change is picked up and reflected in the image stream, which triggers the build or deployment flow, depending upon the build or deployment configuration. You can share images using fine-grained access control and quickly distribute images across your teams. If the source image changes, the image stream tag still points to a known-good version of the image, ensuring that your application does not break unexpectedly. You can configure security around who can view and use the images through permissions on the image stream objects. Users that lack permission to read or list images on the cluster level can still retrieve the images tagged in a project using image streams. You can manage image streams, use image streams with Kubernetes resources , and trigger updates on image stream updates . 1.9. Image stream tags An image stream tag is a named pointer to an image in an image stream. An image stream tag is similar to a container image tag. 1.10. Image stream images An image stream image allows you to retrieve a specific container image from a particular image stream where it is tagged. An image stream image is an API resource object that pulls together some metadata about a particular image SHA identifier. 1.11. Image stream triggers An image stream trigger causes a specific action when an image stream tag changes. For example, importing can cause the value of the tag to change, which causes a trigger to fire when there are deployments, builds, or other resources listening for those. 1.12. How you can use the Cluster Samples Operator During the initial startup, the Operator creates the default samples resource to initiate the creation of the image streams and templates. You can use the Cluster Samples Operator to manage the sample image streams and templates stored in the openshift namespace. As a cluster administrator, you can use the Cluster Samples Operator to: Configure the Operator . Use the Operator with an alternate registry . 1.13. About templates A template is a definition of an object to be replicated. You can use templates to build and deploy configurations. 1.14. How you can use Ruby on Rails As a developer, you can use Ruby on Rails to: Write your application: Set up a database. Create a welcome page. Configure your application for OpenShift Container Platform. Store your application in Git. Deploy your application in OpenShift Container Platform: Create the database service. Create the frontend service. Create a route for your application.
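Returning to image streams: the periodic re-import and trigger behavior described earlier in this section can be sketched with the following commands. The image stream name, source registry, and deployment name are placeholders; adapt them to your project before use.
$ oc import-image myapp:latest --from=registry.example.com/team/myapp:latest --confirm --scheduled   # create the image stream tag and mark it for periodic re-import
$ oc set triggers deploy/myapp --from-image=myapp:latest -c myapp                                     # redeploy automatically when the image stream tag is updated
$ oc describe is/myapp                                                                                # review the tags and the image IDs they currently point to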
[ "registry.redhat.io", "docker.io/openshift/jenkins-2-centos7", "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/images/overview-of-images
20.45. Display or Set Block I/O Parameters
20.45. Display or Set Block I/O Parameters The virsh blkiotune command sets or displays the block I/O parameters for a specified guest virtual machine. The following format should be used: More information on this command can be found in the Virtualization Tuning and Optimization Guide.
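For example, running the command with only a domain name displays the current settings, while adding options changes them. The guest name guest1 and the values below are illustrative; confirm the available options and value ranges in the virsh man page for your release.
virsh blkiotune guest1                                            # display the current block I/O parameters for the guest
virsh blkiotune guest1 --weight 500 --live                        # change the overall I/O weight of the running guest
virsh blkiotune guest1 --device-weights /dev/sda,600 --config     # give /dev/sda a higher weight and persist it in the guest configuration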
[ "virsh blkiotune domain [--weight weight ] [--device-weights device-weights ] [--device-read-iops-sec device-read-iops-sec ] [--device-write-iops-sec device-write-iops-sec ] [--device-read-bytes-sec device-read-bytes-sec ] [--device-write-bytes-sec device-write-bytes-sec ] [[--config] [--live] | [--current]]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guest_virtual_machines_with_virsh-display_or_set_block_io_parameters
Chapter 5. Job Template Examples and Extensions
Chapter 5. Job Template Examples and Extensions Use this section as a reference to help modify, customize, and extend your job templates to suit your requirements. 5.1. Customizing Job Templates When creating a job template, you can include an existing template in the template editor field. This way you can combine templates, or create more specific templates from the general ones. The following template combines default templates to install and start the httpd service on Red Hat Enterprise Linux systems: <%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'httpd' %> <%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'httpd' %> The above template specifies parameter values for the rendered template directly. It is also possible to use the input() method to allow users to define input for the rendered template on job execution. For example, you can use the following syntax: <%= render_template 'Package Action - SSH Default', :action => 'install', :package => input("package") %> With the above template, you have to import the parameter definition from the rendered template. To do so, navigate to the Jobs tab, click Add Foreign Input Set , and select the rendered template from the Target template list. You can import all parameters or specify a comma separated list. 5.2. Default Job Template Categories Job template category Description Packages Templates for performing package related actions. Install, update, and remove actions are included by default. Puppet Templates for executing Puppet runs on target hosts. Power Templates for performing power related actions. Restart and shutdown actions are included by default. Commands Templates for executing custom commands on remote hosts. Services Templates for performing service related actions. Start, stop, restart, and status actions are included by default. Katello Templates for performing content related actions. These templates are used mainly from different parts of the Satellite web UI (for example bulk actions UI for content hosts), but can be used separately to perform operations such as errata installation. 5.3. Example restorecon Template This example shows how to create a template called Run Command - restorecon that restores the default SELinux context for all files in the selected directory on target hosts. In the Satellite web UI, navigate to Hosts > Job templates . Click New Job Template . Enter Run Command - restorecon in the Name field. Select Default to make the template available to all organizations. Add the following text to the template editor: restorecon -RvF <%= input("directory") %> The <%= input("directory") %> string is replaced by a user-defined directory during job invocation. On the Job tab, set Job category to Commands . Click Add Input to allow job customization. Enter directory to the Name field. The input name must match the value specified in the template editor. Click Required so that the command cannot be executed without the user specified parameter. Select User input from the Input type list. Enter a description to be shown during job invocation, for example Target directory for restorecon . Click Submit . See Executing a restorecon Template on Multiple Hosts for information on how to execute a job based on this template. 5.4. Rendering a restorecon Template This example shows how to create a template derived from the Run command - restorecon template created in Example restorecon Template . 
This template does not require user input on job execution, it will restore the SELinux context in all files under the /home/ directory on target hosts. Create a new template as described in Setting up Job Templates , and specify the following string in the template editor: <%= render_template("Run Command - restorecon", :directory => "/home") %> 5.5. Executing a restorecon Template on Multiple Hosts This example shows how to run a job based on the template created in Example restorecon Template on multiple hosts. The job restores the SELinux context in all files under the /home/ directory. In the Satellite web UI, navigate to Hosts > All hosts and select target hosts. Select Schedule Remote Job from the Select Action list. In the Job invocation page, select the Commands job category and the Run Command - restorecon job template. Type /home in the directory field. Set Schedule to Execute now . Click Submit . You are taken to the Job invocation page where you can monitor the status of job execution. 5.6. Including Power Actions in Templates This example shows how to set up a job template for performing power actions, such as reboot. This procedure prevents Satellite from interpreting the disconnect exception upon reboot as an error, and consequently, remote execution of the job works correctly. Create a new template as described in Setting up Job Templates , and specify the following string in the template editor: <%= render_template("Power Action - SSH Default", :action => "restart") %>
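If you prefer the command line to the web UI, the same job can typically be started with the Hammer CLI. The options below come from the remote execution plugin and are shown as a sketch only; verify them with hammer job-invocation create --help on your Satellite Server.
hammer job-invocation create --job-template "Run Command - restorecon" --inputs directory=/home --search-query "name ~ webserver"   # run the template against all hosts whose name matches the query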
[ "<%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'httpd' %> <%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'httpd' %>", "<%= render_template 'Package Action - SSH Default', :action => 'install', :package => input(\"package\") %>", "restorecon -RvF <%= input(\"directory\") %>", "<%= render_template(\"Run Command - restorecon\", :directory => \"/home\") %>", "<%= render_template(\"Power Action - SSH Default\", :action => \"restart\") %>" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/configuring_red_hat_satellite_to_use_ansible/job_template_examples_and_extensions_ansible
Chapter 14. Managing vulnerabilities
Chapter 14. Managing vulnerabilities 14.1. Vulnerability management overview Security vulnerabilities in your environment might be exploited by an attacker to perform unauthorized actions such as carrying out a denial of service attack, executing remote code, or gaining unauthorized access to sensitive data. Therefore, the management of vulnerabilities is a foundational step towards a successful Kubernetes security program. 14.1.1. Vulnerability management process Vulnerability management is a continuous process to identify and remediate vulnerabilities. Red Hat Advanced Cluster Security for Kubernetes helps you to facilitate a vulnerability management process. A successful vulnerability management program often includes the following critical tasks: Performing asset assessment Prioritizing the vulnerabilities Assessing the exposure Taking action Continuously reassessing assets Red Hat Advanced Cluster Security for Kubernetes helps organizations to perform continuous assessments on their OpenShift Container Platform and Kubernetes clusters. It provides organizations with the contextual information they need to prioritize and act on vulnerabilities in their environment more effectively. 14.1.1.1. Performing asset assessment Performing an assessment of an organization's assets involves the following actions: Identifying the assets in your environment Scanning these assets to identify known vulnerabilities Reporting on the vulnerabilities in your environment to impacted stakeholders When you install Red Hat Advanced Cluster Security for Kubernetes on your Kubernetes or OpenShift Container Platform cluster, it first aggregates the assets running inside of your cluster to help you identify those assets. RHACS allows organizations to perform continuous assessments on their OpenShift Container Platform and Kubernetes clusters. RHACS provides organizations with the contextual information to prioritize and act on vulnerabilities in their environment more effectively. Important assets that should be monitored by the organization's vulnerability management process using RHACS include: Components : Components are software packages that may be used as part of an image or run on a node. Components are the lowest level where vulnerabilities are present. Therefore, organizations must upgrade, modify or remove software components in some way to remediate vulnerabilities. Images : A collection of software components and code that create an environment to run an executable portion of code. Images are where you upgrade components to fix vulnerabilities. Nodes : A server used to manage and run applications using OpenShift or Kubernetes and the components that make up the OpenShift Container Platform or Kubernetes service. RHACS groups these assets into the following structures: Deployment : A definition of an application in Kubernetes that may run pods with containers based on one or many images. Namespace : A grouping of resources such as Deployments that support and isolate an application. Cluster : A group of nodes used to run applications using OpenShift or Kubernetes. RHACS scans the assets for known vulnerabilities and uses the Common Vulnerabilities and Exposures (CVE) data to assess the impact of a known vulnerability. 14.1.1.2. Prioritizing the vulnerabilities Answer the following questions to prioritize the vulnerabilities in your environment for action and investigation: How important is an affected asset for your organization? How severe does a vulnerability need to be for investigation?
Can the vulnerability be fixed by a patch for the affected software component? Does the existence of the vulnerability violate any of your organization's security policies? The answers to these questions help security and development teams decide if they want to gauge the exposure of a vulnerability. Red Hat Advanced Cluster Security for Kubernetes provides you the means to facilitate the prioritization of the vulnerabilities in your applications and components. 14.1.1.3. Assessing the exposure To assess your exposure to a vulnerability, answer the following questions: Is your application impacted by a vulnerability? Is the vulnerability mitigated by some other factor? Are there any known threats that could lead to the exploitation of this vulnerability? Are you using the software package which has the vulnerability? Is spending time on a specific vulnerability and the software package worth it? Take some of the following actions based on your assessment: Consider marking the vulnerability as a false positive if you determine that there is no exposure or that the vulnerability does not apply in your environment. Consider if you would prefer to remediate, mitigate or accept the risk if you are exposed. Consider if you want to remove or change the software package to reduce your attack surface. 14.1.1.4. Taking action Once you have decided to take action on a vulnerability, you can take one of the following actions: Remediate the vulnerability Mitigate and accept the risk Accept the risk Mark the vulnerability as a false positive You can remediate vulnerabilities by performing one of the following actions: Remove a software package Update a software package to a non-vulnerable version 14.2. Viewing and addressing vulnerabilities Common vulnerability management tasks involve identifying and prioritizing vulnerabilities, remedying them, and monitoring for new threats. 14.2.1. Viewing vulnerabilities Historically, RHACS provided a view of vulnerabilities discovered in your system in the vulnerability management dashboard. The dashboard is deprecated in RHACS 4.5 and will be removed in a future release. For more information about the dashboard, see Using the vulnerability management dashboard . The Vulnerability Management Workload CVEs page provides information about vulnerabilities in applications running on clusters in your system. You can view vulnerability information across images and deployments. The Workload CVEs page provides advanced filtering capabilities, including the ability to view images and deployments with vulnerabilities and filter by image, deployment, namespace, cluster, CVE, component, and component source. 14.2.2. Viewing workload CVEs The Vulnerability Management Workload CVEs page provides information about vulnerabilities in applications running on clusters in your system. You can view vulnerability information across images and deployments. The Workload CVEs page provides more advanced filtering capabilities than the dashboard, including the ability to view images and deployments with vulnerabilities and filter by image, deployment, namespace, cluster, CVE, component, and component source. Procedure To show all CVEs across all images, select Image vulnerabilities from the View image vulnerabilities list. From the View image vulnerabilities list, select how you want to view the images. The following options are provided: Image vulnerabilities : Displays images and deployments in which RHACS has discovered CVEs. 
Images without vulnerabilities : Displays images that meet at least one of the following conditions: Images that do not have CVEs Images that report a scanner error that may result in a false negative of no CVEs Note An image that actually contains vulnerabilities can appear in this list inadvertently. For example, if Scanner was able to scan the image and it is known to RHACS, but the scan was not successfully completed, vulnerabilities cannot be detected. This scenario occurs if an image has an operating system that is not supported by the RHACS scanner. Scan errors are displayed when you hover over an image in the image list or click the image name for more information. To filter CVEs by entity, select the appropriate filters and attributes. To select multiple entities and attributes, click the right arrow icon to add another criteria. Depending on your choices, enter the appropriate information such as text, or select a date or object. The filter entities and attributes are listed in the following table. Table 14.1. CVE filtering Entity Attributes Image Name : The name of the image. Operating system : The operating system of the image. Tag : The tag for the image. Label : The label for the image. Registry : The registry where the image is located. CVE Name : The name of the CVE. Discovered time : The date when RHACS discovered the CVE. CVSS : The severity level for the CVE. You can select from the following options for the severity level: is greater than is greater than or equal to is equal to is less than or equal to is less than Image Component Name : The name of the image component, for example, activerecord-sql-server-adapter Source : OS Python Java Ruby Node.js Go Dotnet Core Runtime Infrastructure Version : Version of the image component; for example, 3.4.21 . You can use this to search for a specific version of a component, for example, in conjunction with a component name. Deployment Name : Name of the deployment. Label : Label for the deployment. Annotation : The annotation for the deployment. Namespace Name : The name of the namespace. Label : The label for the namespace. Annotation : The annotation for the namespace. Cluster Name : The name of the cluster. Label : The label for the cluster. Type : The cluster type, for example, OCP. Platform type : The platform type, for example, OpenShift 4 cluster. You can select the following options to refine the list of results: Prioritize by namespace view : Displays a list of namespaces sorted according to the risk priority. You can use this view to quickly identify and address the most critical areas. In this view, click <number> deployments in a table row to return to the workload CVE list view, with filters applied to show only deployments, images and CVEs for the selected namespace. Default filters : You can select filters for CVE severity and CVE status that are automatically applied when you visit the Workload CVEs page. These filters only apply to this page, and are applied when you visit the page from another section of the RHACS web portal or from a bookmarked URL. They are saved in the local storage of your browser. CVE severity : You can select one or more levels. CVE status : You can select Fixable or Not fixable . Note The Filtered view icon indicates that the displayed results were filtered based on the criteria that you selected. You can click Clear filters to remove all filters, or remove individual filters by clicking on them. 
In the list of results, click a CVE, image name, or deployment name to view more information about the item. For example, depending on the item type, you can view the following information: Whether a CVE is fixable Whether an image is active The Dockerfile line in the image that contains the CVE External links to information about the CVE in Red Hat and other CVE databases Search example The following graphic shows an example of search criteria for a cluster called staging-secured-cluster to view CVEs of critical and important severity with a fixable status in that cluster. 14.2.3. Viewing Node CVEs You can identify vulnerabilities in your nodes by using RHACS. The vulnerabilities that are identified include the following: Vulnerabilities in core Kubernetes components Vulnerabilities in container runtimes such as Docker, CRI-O, runC, and containerd For more information about operating systems that RHACS can scan, see "Supported operating systems". Procedure In the RHACS portal, click Vulnerability Management Node CVEs . To view the data, do any of the following tasks: To view a list of all the CVEs affecting all of your nodes, select <number> CVEs . To view a list of nodes that contain CVEs, select <number> Nodes . Optional: To filter CVEs according to entity, select the appropriate filters and attributes. To add more filtering criteria, follow these steps: Select the entity or attribute from the list. Depending on your choices, enter the appropriate information such as text, or select a date or object. Click the right arrow icon. Optional: Select additional entities and attributes, and then click the right arrow icon to add them. The filter entities and attributes are listed in the following table. Table 14.2. CVE filtering Entity Attributes Node Name : The name of the node. Operating system : The operating system of the node, for example, Red Hat Enterprise Linux (RHEL). Label : The label of the node. Annotation : The annotation for the node. Scan time : The scan date of the node. CVE Name : The name of the CVE. Discovered time : The date when RHACS discovered the CVE. CVSS : The severity level for the CVE. You can select from the following options for the severity level: is greater than is greater than or equal to is equal to is less than or equal to is less than Node Component Name : The name of the component. Version : The version of the component, for example, 4.15.0-2024 . You can use this to search for a specific version of a component, for example, in conjunction with a component name. Cluster Name : The name of the cluster. Label : The label for the cluster. Type : The type of cluster, for example, OCP. Platform type : The type of platform, for example, OpenShift 4 cluster. Optional: To refine the list of results, do any of the following tasks: Click CVE severity , and then select one or more levels. Click CVE status , and then select Fixable or Not fixable . Optional: To view the details of the node and information about the CVEs according to the CVSS score and fixable CVEs for that node, click a node name in the list of nodes. 14.2.3.1. Disabling identifying vulnerabilities in nodes Identifying vulnerabilities in nodes is enabled by default. You can disable it from the RHACS portal. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under Image Integrations , select StackRox Scanner . From the list of scanners, select StackRox Scanner to view its details. Click Edit . To use only the image scanner and not the node scanner, click Image Scanner . 
Click Save . Additional resources Supported operating systems 14.2.4. Viewing platform CVEs The platform CVEs page provides information about vulnerabilities in clusters in your system. Procedure Click Vulnerability Management Platform CVEs . You can filter CVEs by entity by selecting the appropriate filters and attributes. You can select multiple entities and attributes by clicking the right arrow icon to add another criteria. Depending on your choices, enter the appropriate information such as text, or select a date or object. The filter entities and attributes are listed in the following table. Table 14.3. CVE filtering Entity Attributes Cluster Name : The name of the cluster. Label : The label for the cluster. Type : The cluster type, for example, OCP. Platform type : The platform type, for example, OpenShift 4 cluster. CVE Name : The name of the CVE. Discovered time : The date when RHACS discovered the CVE. CVSS : The severity level for the CVE. You can select from the following options for the severity level: is greater than is greater than or equal to is equal to is less than or equal to is less than Type : The type of CVE: Kubernetes CVE Istio CVE OpenShift CVE To filter by CVE status, click CVE status and select Fixable or Not fixable . Note The Filtered view icon indicates that the displayed results were filtered based on the criteria that you selected. You can click Clear filters to remove all filters, or remove individual filters by clicking on them. In the list of results, click a CVE to view more information about the item. For example, you can view the following information if it is populated: Documentation for the CVE External links to information about the CVE in Red Hat and other CVE databases Whether the CVE is fixable or unfixable A list of affected clusters 14.2.5. Excluding CVEs You can exclude or ignore CVEs in RHACS by snoozing node and platform CVEs and deferring or marking node, platform, and image CVEs as false positives. You might want to exclude CVEs if you know that the CVE is a false positive or you have already taken steps to mitigate the CVE. Snoozed CVEs do not appear in vulnerability reports or trigger policy violations. You can snooze a CVE to ignore it globally for a specified period of time. Snoozing a CVE does not require approval. Note Snoozing node and platform CVEs requires that the ROX_VULN_MGMT_LEGACY_SNOOZE environment variable is set to true . Deferring or marking a CVE as a false positive is done through the exception management workflow. This workflow provides the ability to view pending, approved, and denied deferral and false positive requests. You can scope the CVE exception to a single image, all tags for a single image, or globally for all images. When approving or denying a request, you must add a comment. A CVE remains in the observed status until the exception request is approved. A pending request for deferral that is denied by another user is still visible in reports, policy violations, and other places in the system, but is indicated by a Pending exception label to the CVE when visiting Vulnerability Management Workload CVEs . An approved exception for a deferral or false positive has the following effects: Removes the CVE from the Observed tab in Vulnerability Management Workflow CVEs to either the Deferred or False positive tab Prevents the CVE from triggering policy violations that are related to the CVE Prevents the CVE from showing up in automatically generated vulnerability reports 14.2.5.1. 
Snoozing platform and node CVEs You can snooze platform and node CVEs that do not relate to your infrastructure. You can snooze CVEs for 1 day, 1 week, 2 weeks, 1 month, or indefinitely, until you unsnooze them. Snoozing a CVE takes effect immediately and does not require an additional approval step. Note The ability to snooze a CVE is not enabled by default in the web portal or in the API. To enable the ability to snooze CVEs, set the runtime environment variable ROX_VULN_MGMT_LEGACY_SNOOZE to true . Procedure In the RHACS portal, do any of the following tasks: To view platform CVEs, click Vulnerability Management Platform CVEs . To view node CVEs, click Vulnerability Management Node CVEs . Select one or more CVEs. Select the appropriate method to snooze the CVE: If you selected a single CVE, click the overflow menu, , and then select Snooze CVE . If you selected multiple CVEs, click Bulk actions Snooze CVEs . Select the duration of time to snooze. Click Snooze CVEs . You receive a confirmation that you have requested to snooze the CVEs. 14.2.5.2. Unsnoozing platform and node CVEs You can unsnooze platform and node CVEs that you have previously snoozed. Note The ability to snooze a CVE is not enabled by default in the web portal or in the API. To enable the ability to snooze CVEs, set the runtime environment variable ROX_VULN_MGMT_LEGACY_SNOOZE to true . Procedure In the RHACS portal, do any of the following tasks: To view the list of platform CVEs, click Vulnerability Management Platform CVEs . To view the list of node CVEs, click Vulnerability Management Node CVEs . To view the list of snoozed CVEs, click Show snoozed CVEs in the header view. Select one or more CVEs from the list of snoozed CVEs. Select the appropriate method to unsnooze the CVE: If you selected a single CVE, click the overflow menu, , and then select Unsnooze CVE . If you selected multiple CVEs, click Bulk actions Unsnooze CVEs . Click Unsnooze CVEs again. You receive a confirmation that you have requested to unsnooze the CVEs. 14.2.5.3. Viewing snoozed CVEs You can view a list of platform and node CVEs that have been snoozed. Note The ability to snooze a CVE is not enabled by default in the web portal or in the API. To enable the ability to snooze CVEs, set the runtime environment variable ROX_VULN_MGMT_LEGACY_SNOOZE to true . Procedure In the RHACS portal, do any of the following tasks: To view the list of platform CVEs, click Vulnerability Management Platform CVEs . To view the list of node CVEs, click Vulnerability Management Node CVEs . Click Show snoozed CVEs to view the list. 14.2.5.4. Marking a vulnerability as a false positive globally You can create an exception for a vulnerability by marking it as a false positive globally, or across all images. You must get requests to mark a vulnerability as a false positive approved in the exception management workflow. Prerequisites You have the write permission for the VulnerabilityManagementRequests resource. Procedure In the RHACS portal, click Vulnerability Management Workload CVEs . Choose the appropriate method to mark the CVEs: If you want to mark a single CVE, perform the following steps: Find the row which contains the CVE that you want to take action on. Click the overflow menu, , for the CVE that you identified, and then select Mark as false positive . If you want to mark multiple CVEs, perform the following steps: Select each CVE. From the Bulk actions drop-down list, select Mark as false positives . Enter a rationale for requesting the exception. 
Optional: To review the CVEs that are included in the exception request, click CVE selections . Click Submit request . You receive a confirmation that you have requested an exception. Optional: To copy the approval link and share it with your organization's exception approver, click the copy icon. Click Close . 14.2.5.5. Marking a vulnerability as a false positive for an image or image tag To create an exception for a vulnerability, you can mark it as a false positive for a single image, or across all tags associated with an image. You must get requests to mark a vulnerability as a false positive approved in the exception management workflow. Prerequisites You have the write permission for the VulnerabilityManagementRequests resource. Procedure In the RHACS portal, click Vulnerability Management Workload CVEs . To view the list of images, click <number> Images . Find the row that lists the image that you want to mark as a false positive, and click the image name. Choose the appropriate method to mark the CVEs: If you want to mark a single CVE, perform the following steps: Find the row which contains the CVE that you want to take action on. Click the overflow menu, , for the CVE that you identified, and then select Mark as false positive . If you want to mark multiple CVEs, perform the following steps: Select each CVE. From the Bulk actions drop-down list, select Mark as false positives . Select the scope. You can select either all tags associated with the image or only the image. Enter a rationale for requesting the exception. Optional: To review the CVEs that are included in the exception request, click CVE selections . Click Submit request . You receive a confirmation that you have requested an exception. Optional: To copy the approval link and share it with your organization's exception approver, click the copy icon. Click Close . 14.2.5.6. Viewing deferred and false positive CVEs You can view the CVEs that have been deferred or marked as false positives by using the Workload CVEs page. Procedure To see CVEs that have been deferred or marked as false positives, with the exceptions approved by an approver, click Vulnerability Management Workload CVEs . Complete any of the following actions: To see CVEs that have been deferred, click the Deferred tab. To see CVEs that have been marked as false positives, click the False positives tab. Note To approve, deny, or change deferred or false positive CVEs, click Vulnerability Management Exception Management . Optional: To view additional information about the deferral or false positive, click View in the Request details column. The Exception Management page is displayed. 14.2.5.7. Deferring CVEs You can accept risk with or without mitigation and defer CVEs. You must get deferral requests approved in the exception management workflow. Prerequisites You have write permission for the VulnerabilityManagementRequests resource. Procedure In the RHACS portal, click Vulnerability Management Workload CVEs . Choose the appropriate method to defer a CVE: If you want to defer a single CVE, perform the following steps: Find the row which contains the CVE that you want to defer. Click the overflow menu, , for the CVE that you identified, and then click Defer CVE . If you want to defer multiple CVEs, perform the following steps: Select each CVE. Click Bulk actions Defer CVEs . Select the time period for the deferral. Enter a rationale for requesting the exception.
Optional: To review the CVEs that are included in the exception menu, click CVE selections . Click Submit request . You receive a confirmation that you have requested a deferral. Optional: To copy the approval link to share it with your organization's exception approver, click the copy icon. Click Close . 14.2.5.7.1. Configuring vulnerability exception expiration periods You can configure the time periods available for vulnerability management exceptions. These options are available when users request to defer a CVE. Prerequisites You have write permission for the VulnerabilityManagementRequests resource. Procedure In the RHACS portal, go to Platform Configuration Exception Configuration . You can configure expiration times that users can select when they request to defer a CVE. Enabling a time period makes it available to users and disabling it removes it from the user interface. 14.2.5.8. Reviewing and managing an exception request to defer or mark a CVE as false positive You can review, update, approve, or deny an exception requests for deferring and marking CVEs as false positives. Prerequisites You have the write permission for the VulnerabilityManagementRequests resource. Procedure To view the list of pending requests, do any of the following tasks: Paste the approval link into your browser. Click Vulnerability Management Exception Management , and then click the request name in the Pending requests tab. Review the scope of the vulnerability and decide whether or not to approve it. Choose the appropriate option to manage a pending request: If you want to deny the request and return the CVE to observed status, click Deny request . Enter a rationale for the denial, and click Deny . If you want to approve the request, click Approve request . Enter a rationale for the approval, and click Approve . To cancel a request that you have created and return the CVE to observed status, click Cancel request . You can only cancel requests that you have created. To update the deferral time period or rationale for a request that you have created, click Update request . You can only update requests that you have created. After you make changes, click Submit request . You receive a confirmation that you have submitted a request. 14.2.6. Identifying Dockerfile lines in images that introduced components with CVEs You can identify specific Dockerfile lines in an image that introduced components with CVEs. Procedure To view a problematic line: In the RHACS portal, click Vulnerability Management Workload CVEs . Click the tab to view the type of CVEs. The following tabs are available: Observed Deferred False positives In the list of CVEs, click the CVE name to open the page containing the CVE details. The Affected components column lists the components that include the CVE. Expand the CVE to display additional information, including the Dockerfile line that introduced the component. 14.2.7. Finding a new component version The following procedure finds a new component version to upgrade to. Procedure In the RHACS portal, click Vulnerability Management Workload CVEs . Click <number> Images and select an image. To view additional information, locate the CVE and click the expand icon. The additional information includes the component that the CVE is in and the version in which the CVE is fixed, if it is fixable. Update your image to a later version. 14.2.8. 
Exporting workload vulnerabilities by using the API You can export workload vulnerabilities in Red Hat Advanced Cluster Security for Kubernetes by using the API. For these examples, workloads are composed of deployments and their associated images. The export uses the /v1/export/vuln-mgmt/workloads streaming API. It allows the combined export of deployments and images. The images payload contains the full vulnerability information. The output is streamed and has the following schema: {"result": {"deployment": {...}, "images": [...]}} ... {"result": {"deployment": {...}, "images": [...]}} The following examples assume that these environment variables have been set: ROX_API_TOKEN : API token with view permissions for the Deployment and Image resources ROX_ENDPOINT : Endpoint under which Central's API is available To export all workloads, enter the following command: $ curl -H "Authorization: Bearer $ROX_API_TOKEN" $ROX_ENDPOINT/v1/export/vuln-mgmt/workloads To export all workloads with a query timeout of 60 seconds, enter the following command: $ curl -H "Authorization: Bearer $ROX_API_TOKEN" $ROX_ENDPOINT/v1/export/vuln-mgmt/workloads?timeout=60 To export all workloads matching the query Deployment:app Namespace:default , enter the following command: $ curl -H "Authorization: Bearer $ROX_API_TOKEN" $ROX_ENDPOINT/v1/export/vuln-mgmt/workloads?query=Deployment%3Aapp%2BNamespace%3Adefault A sketch of post-processing this streamed output with jq follows at the end of this passage. Additional resources Searching and filtering 14.2.8.1. Scanning inactive images Red Hat Advanced Cluster Security for Kubernetes (RHACS) scans all active (deployed) images every 4 hours and updates the image scan results to reflect the latest vulnerability definitions. You can also configure RHACS to scan inactive (not deployed) images automatically. Procedure In the RHACS portal, click Vulnerability Management Workload CVEs . Click Manage watched images . In the Image name field, enter the fully-qualified image name that begins with the registry and ends with the image tag, for example, docker.io/library/nginx:latest . Click Add image to watch list . Optional: To remove a watched image, locate the image in the Manage watched images window, and click Remove watch . Important In the RHACS portal, click Platform Configuration System Configuration to view the data retention configuration. All the data related to the image removed from the watched image list continues to appear in the RHACS portal for the number of days mentioned on the System Configuration page and is only removed after that period is over. Click Close to return to the Workload CVEs page. 14.3. Vulnerability reporting You can create and download an on-demand image vulnerability report from the Vulnerability Management Vulnerability Reporting menu in the RHACS web portal. This report contains a comprehensive list of common vulnerabilities and exposures in images and deployments, referred to as workload CVEs in RHACS. To share this report with auditors or internal stakeholders, you can schedule emails in RHACS or download the report and share it by using other methods. 14.3.1. Reporting vulnerabilities to teams As organizations must constantly reassess and report on their vulnerabilities, some organizations find it helpful to have scheduled communications to key stakeholders to help in the vulnerability management process. You can use Red Hat Advanced Cluster Security for Kubernetes to schedule these recurring communications through e-mail.
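Returning to the workload export API described above: because the endpoint streams one JSON object per workload, the output is convenient to post-process with a tool such as jq. The field names under result.deployment used below (namespace and name) are assumptions about the payload based on the schema shown earlier; inspect a real response and adjust them as needed.
$ curl -s -H "Authorization: Bearer $ROX_API_TOKEN" "$ROX_ENDPOINT/v1/export/vuln-mgmt/workloads?timeout=60" | jq -r '.result.deployment | "\(.namespace)/\(.name)"'   # print one namespace/deployment pair per streamed result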
These communications should be scoped to the most relevant information that the key stakeholders need. For sending these communications, you must consider the following questions: What schedule would have the most impact when communicating with the stakeholders? Who is the audience? Should you only send specific severity vulnerabilities in your report? Should you only send fixable vulnerabilities in your report? 14.3.2. Creating vulnerability management report configurations RHACS guides you through the process of creating a vulnerability management report configuration. This configuration determines the information that will be included in a report job that runs at a scheduled time or that you run on demand. Procedure In the RHACS portal, click Vulnerability Management Vulnerability Reporting . Click Create report . Enter a name for your report configuration in the Report name field. Optional: Enter text describing the report configuration in the Report description field. In the CVE severity field, select the severity of common vulnerabilities and exposures (CVEs) that you want to include in the report configuration. Select the CVE status . You can select Fixable , Unfixable , or both. In the Image type field, select whether you want to include CVEs from deployed images, watched images, or both. In the CVEs discovered since field, select the time period for which you want CVEs to be included in the report configuration. In the Configure collection included field, you must configure at least one collection. Complete any of the following actions: Select an existing collection to include. To view the collection information, edit the collection, and get a preview of collection results, click View . When viewing the collection, entering text in the field searches for collections matching that text string. Click Create collection to create a new collection. Note For more information about collections, see "Creating and using deployment collections" in the "Additional resources" section. Click to configure the delivery destinations and optionally set up a schedule for delivery. 14.3.2.1. Configuring delivery destinations and scheduling Configuring destinations and delivery schedules for vulnerability reports is optional, unless on the page, you selected the option to include CVEs that were discovered since the last scheduled report. If you selected that option, configuring destinations and delivery schedules for vulnerability reports is required. Procedure To configure destinations for delivery, in the Configure delivery destinations section, you can add a delivery destination and set up a schedule for reporting. To email reports, you must configure at least one email notifier. Select an existing notifier or create a new email notifier to send your report by email. For more information about creating an email notifier, see "Configuring the email plugin" in the "Additional resources" section. When you select a notifier, the email addresses configured in the notifier as Default recipients appear in the Distribution list field. You can add additional email addresses that are separated by a comma. A default email template is automatically applied. To edit this default template, perform the following steps: Click the edit icon and enter a customized subject and email body in the Edit tab. Click the Preview tab to see your proposed template. Click Apply to save your changes to the template. 
Note When reviewing the report jobs for a specific report, you can see whether the default template or a customized template was used when creating the report. In the Configure schedule section, select the frequency and day of the week for the report. Click to review your vulnerability report configuration and finish creating it. 14.3.2.2. Reviewing and creating the report configuration You can review the details of your vulnerability report configuration before creating it. Procedure In the Review and create section, you can review the report configuration parameters, delivery destination, email template that is used if you selected email delivery, delivery schedule, and report format. To make any changes, click Back to go to the section and edit the fields that you want to change. Click Create to create the report configuration and save it. 14.3.3. Vulnerability report permissions The ability to create, view, and download reports depends on the access control settings, or roles and permission sets, for your user account. For example, you can only view, create, and download reports for data that your user account has permission to access. In addition, the following restrictions apply: You can only download reports that you have generated; you cannot download reports generated by other users. Report permissions are restricted depending on the access settings for user accounts. If the access settings for your account change, old reports do not reflect the change. For example, if you are given new permissions and want to view vulnerability data that is now allowed by those permissions, you must create a new vulnerability report. 14.3.4. Editing vulnerability report configurations You can edit existing vulnerability report configurations from the list of report configurations, or by selecting an individual report configuration first. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . To edit an existing vulnerability report configuration, complete any of the following actions: Locate the report configuration that you want to edit in the list of report configurations. Click the overflow menu, , and then select Edit report . Click the report configuration name in the list of report configurations. Then, click Actions and select Edit report . Make changes to the report configuration and save. 14.3.5. Downloading vulnerability reports You can generate an on-demand vulnerability report and then download it. Note You can only download reports that you have generated; you cannot download reports generated by other users. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . In the list of report configurations, locate the report configuration that you want to use to create the downloadable report. Generate the vulnerability report by using one of the following methods: To generate the report from the list: Click the overflow menu, , and then select Generate download . The My active job status column displays the status of your report creation. After the Processing status goes away, you can download the report. To generate the report from the report window: Click the report configuration name to open the configuration detail window. Click Actions and select Generate download . To download the report, if you are viewing the list of report configurations, click the report configuration name to open it. Click All report jobs from the menu on the header. 
If the report is completed, click the Ready for download link in the Status column. The report is in .csv format and is compressed into a .zip file for download. 14.3.6. Sending vulnerability reports on-demand You can send vulnerability reports immediately, rather than waiting for the scheduled send time. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . In the list of report configurations, locate the report configuration for the report that you want to send. Click the overflow menu, and then select Send report now . 14.3.7. Cloning vulnerability report configurations You can make copies of vulnerability report configurations by cloning them. This is useful when you want to reuse report configurations with minor changes, such as reporting vulnerabilities in different deployments or namespaces. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . Locate the report configuration that you want to clone in the list of report configurations. Click Clone report . Make any changes that you want to the report parameters and delivery destinations. Click Create . 14.3.8. Deleting vulnerability report configurations Deleting a report configuration deletes the configuration and any reports that were previously run using this configuration. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . Locate the report configuration that you want to delete in the list of reports. Click the overflow menu, and then select Delete report . 14.3.9. Configuring vulnerability management report job retention settings You can configure settings that determine when vulnerability report job requests expire and other retention settings for report jobs. Note These settings do not affect the following vulnerability report jobs: Jobs in the WAITING or PREPARING state (unfinished jobs) The last successful scheduled report job The last successful on-demand emailed report job The last successful downloadable report job Downloadable report jobs for which the report file has not been deleted by either manual deletion or by configuring the downloadable report pruning settings Procedure In the RHACS web portal, go to Platform Configuration System Configuration . You can configure the following settings for vulnerability report jobs: Vulnerability report run history retention : The number of days that a record is kept of vulnerability report jobs that have been run. This setting controls how many days report jobs are listed in the All report jobs tab under Vulnerability Management Vulnerability Reporting when a report configuration is selected. The entire report history after the exclusion date is deleted, with the exception of the following jobs: Unfinished jobs. Jobs for which prepared downloadable reports still exist in the system. The last successful report job for each job type (scheduled email, on-demand email, or download). This ensures users have information about the last run job for each type. Prepared downloadable vulnerability reports retention days : The number of days that prepared, on-demand downloadable vulnerability report jobs are available for download on the All report jobs tab under Vulnerability Management Vulnerability Reporting when a report configuration is selected. Prepared downloadable vulnerability reports limit : The limit, in MB, of space allocated to prepared downloadable vulnerability report jobs.
After the limit is reached, the oldest report job in the download queue is removed. To change these values, click Edit , make your changes, and then click Save . 14.3.10. Additional resources Creating and using deployment collections Migration of access scopes to collections Configuring the email plugin 14.4. Using the vulnerability management dashboard (deprecated) Historically, RHACS has provided a view of vulnerabilities discovered in your system in the vulnerability management dashboard. With the dashboard, you can view vulnerabilities by image, node, or platform. You can also view vulnerabilities by clusters, namespaces, deployments, node components, and image components. The dashboard is deprecated in RHACS 4.5 and will be removed in a future release. Important To perform actions on vulnerabilities, such as view additional information about a vulnerability, defer a vulnerability, or mark a vulnerability as a false positive, click Vulnerability Management Workload CVEs . To review requests for deferring and marking CVEs as false positives, click Vulnerability Management Exception Management . 14.4.1. Viewing application vulnerabilities by using the dashboard You can view application vulnerabilities in Red Hat Advanced Cluster Security for Kubernetes by using the dashboard. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . On the Dashboard view header, select Application & Infrastructure Namespaces or Deployments . From the list, search for and select the Namespace or Deployment you want to review. To get more information about the application, select an entity from Related entities on the right. 14.4.2. Viewing image vulnerabilities by using the dashboard You can view image vulnerabilities in Red Hat Advanced Cluster Security for Kubernetes by using the dashboard. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . On the Dashboard view header, select <number> Images . From the list of images, select the image you want to investigate. You can also filter the list by performing one of the following steps: Enter Image in the search bar and then select the Image attribute. Enter the image name in the search bar. In the image details view, review the listed CVEs and prioritize taking action to address the impacted components. Select Components from Related entities on the right to get more information about all the components that are impacted by the selected image. Or select Components from the Affected components column under the Image findings section for a list of components affected by specific CVEs. 14.4.3. Viewing cluster vulnerabilities by using the dashboard You can view vulnerabilities in clusters by using Red Hat Advanced Cluster Security for Kubernetes. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . On the Dashboard view header, select Application & Infrastructure Clusters . From the list of clusters, select the cluster you want to investigate. Review the cluster's vulnerabilities and prioritize taking action on the impacted nodes on the cluster. 14.4.4. Viewing node vulnerabilities by using the dashboard You can view vulnerabilities in specific nodes by using Red Hat Advanced Cluster Security for Kubernetes. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . On the Dashboard view header, select Nodes . From the list of nodes, select the node you want to investigate. Review vulnerabilities for the selected node and prioritize taking action. 
To get more information about the affected components in a node, select Components from Related entities on the right. 14.4.5. Finding the most vulnerable image components by using the dashboard Use the Vulnerability Management view for identifying highly vulnerable image components. Procedure Go to the RHACS portal and click Vulnerability Management Dashboard from the navigation menu. From the Vulnerability Management view header, select Application & Infrastructure Image Components . In the Image Components view, select the Image CVEs column header to arrange the components in descending order (highest first) based on the CVEs count. 14.4.6. Viewing details only for fixable CVEs by using the dashboard Use the Vulnerability Management view to filter and show only the fixable CVEs. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . From the Vulnerability Management view header, under Filter CVEs , click Fixable . 14.4.7. Identifying the operating system of the base image by using the dashboard Use the Vulnerability Management view to identify the operating system of the base image. Procedure Go to the RHACS portal and click Vulnerability Management Dashboard from the navigation menu. From the Vulnerability Management view header, select Images . View the base operating system (OS) and OS version for all images under the Image OS column. Select an image to view its details. The base operating system is also available under the Image Summary Details and Metadata section. Note Red Hat Advanced Cluster Security for Kubernetes lists the Image OS as unknown when either: The operating system information is not available, or If the image scanner in use does not provide this information. Docker Trusted Registry, Google Container Registry, and Anchore do not provide this information. 14.4.8. Identifying top risky objects by using the dashboard Use the Vulnerability Management view for identifying the top risky objects in your environment. The Top Risky widget displays information about the top risky images, deployments, clusters, and namespaces in your environment. The risk is determined based on the number of vulnerabilities and their CVSS scores. Procedure Go to the RHACS portal and click Vulnerability Management Dashboard from the navigation menu. Select the Top Risky widget header to choose between riskiest images, deployments, clusters, and namespaces. The small circles on the chart represent the chosen object (image, deployment, cluster, namespace). Hover over the circles to see an overview of the object they represent. And select a circle to view detailed information about the selected object, its related entities, and the connections between them. For example, if you are viewing Top Risky Deployments by CVE Count and CVSS score , each circle on the chart represents a deployment. When you hover over a deployment, you see an overview of the deployment, which includes deployment name, name of the cluster and namespace, severity, risk priority, CVSS, and CVE count (including fixable). When you select a deployment, the Deployment view opens for the selected deployment. The Deployment view shows in-depth details of the deployment and includes information about policy violations, common vulnerabilities, CVEs, and riskiest images for that deployment. Select View All on the widget header to view all objects of the chosen type. 
For example, if you chose Top Risky Deployments by CVE Count and CVSS score , you can select View All to view detailed information about all deployments in your infrastructure. 14.4.9. Identifying top riskiest images and components by using the dashboard Similar to the Top Risky widget, the Top Riskiest widget lists the names of the top riskiest images and components. This widget also includes the total number of CVEs and the number of fixable CVEs in the listed images. Procedure Go to the RHACS portal and click Vulnerability Management from the navigation menu. Select the Top Riskiest Images widget header to choose between the riskiest images and components. If you are viewing Top Riskiest Images : When you hover over an image in the list, you see an overview of the image, which includes image name, scan time, and the number of CVEs along with severity (critical, high, medium, and low). When you select an image, the Image view opens for the selected image. The Image view shows in-depth details of the image and includes information about CVEs by CVSS score, top riskiest components, fixable CVEs, and Dockerfile for the image. Select View All on the widget header to view all objects of the chosen type. For example, if you chose Top Riskiest Components , you can select View All to view detailed information about all components in your infrastructure. 14.4.10. Viewing the Dockerfile for an image by using the dashboard Use the Vulnerability Management view to find the root cause of vulnerabilities in an image. You can view the Dockerfile and find exactly which command in the Dockerfile introduced the vulnerabilities and all components that are associated with that single command. The Dockerfile section shows information about: All the layers in the Dockerfile The instructions and their value for each layer The components included in each layer The number of CVEs in components for each layer When there are components introduced by a specific layer, you can select the expand icon to see a summary of its components. If there are any CVEs in those components, you can select the expand icon for an individual component to get more details about the CVEs affecting that component. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . Select an image from either the Top Riskiest Images widget or click the Images button at the top of the dashboard and select an image. In the Image details view, next to Dockerfile , select the expand icon to see a summary of instructions, values, creation date, and components. Select the expand icon for an individual component to view more information. 14.4.11. Identifying the container image layer that introduces vulnerabilities by using the dashboard You can use the Vulnerability Management dashboard to identify vulnerable components and the image layer they appear in. Procedure Go to the RHACS portal and click Vulnerability Management Dashboard from the navigation menu. Select an image from either the Top Riskiest Images widget or click the Images button at the top of the dashboard and select an image. In the Image details view, next to Dockerfile , select the expand icon to see a summary of image components. Select the expand icon for specific components to get more details about the CVEs affecting the selected component. 14.4.12.
Viewing recently detected vulnerabilities by using the dashboard The Recently Detected Vulnerabilities widget on the Vulnerability Management Dashboard view shows a list of recently discovered vulnerabilities in your scanned images, based on the scan time and CVSS score. It also includes information about the number of images affected by the CVE and its impact (percentage) on your environment. When you hover over a CVE in the list, you see an overview of the CVE, which includes scan time, CVSS score, description, impact, and whether it's scored by using CVSS v2 or v3. When you select a CVE, the CVE details view opens for the selected CVE. The CVE details view shows in-depth details of the CVE and the components, images, and deployments in which it appears. Select View All on the Recently Detected Vulnerabilities widget header to view a list of all the CVEs in your infrastructure. You can also filter the list of CVEs. 14.4.13. Viewing the most common vulnerabilities by using the dashboard The Most Common Vulnerabilities widget on the Vulnerability Management Dashboard view shows a list of vulnerabilities that affect the largest number of deployments and images arranged by their CVSS score. When you hover over a CVE in the list, you see an overview of the CVE, which includes scan time, CVSS score, description, impact, and whether it is scored by using CVSS v2 or v3. When you select a CVE, the CVE details view opens for the selected CVE. The CVE details view shows in-depth details of the CVE and the components, images, and deployments in which it appears. Select View All on the Most Common Vulnerabilities widget header to view a list of all the CVEs in your infrastructure. You can also filter the list of CVEs. To export the CVEs as a CSV file, select Export Download CVEs as CSV . 14.4.14. Finding clusters with most Kubernetes and Istio vulnerabilities by using the dashboard You can identify the clusters with the most Kubernetes, Red Hat OpenShift, and Istio vulnerabilities (deprecated) in your environment by using the vulnerability management dashboard. Procedure In the RHACS portal, click Vulnerability Management -> Dashboard . The Clusters with most orchestrator and Istio vulnerabilities widget shows a list of clusters, ranked by the number of Kubernetes, Red Hat OpenShift, and Istio vulnerabilities (deprecated) in each cluster. The cluster at the top of the list is the cluster with the highest number of vulnerabilities. Click on one of the clusters from the list to view details about the cluster. The Cluster view includes: Cluster Summary section, which shows cluster details and metadata, top risky objects (deployments, namespaces, and images), recently detected vulnerabilities, riskiest images, and deployments with the most severe policy violations. Cluster Findings section, which includes a list of failing policies and a list of fixable CVEs. Related Entities section, which shows the number of namespaces, deployments, policies, images, components, and CVEs the cluster contains. You can select these entities to view more details. Click View All on the widget header to view the list of all clusters. 14.4.15. Identifying vulnerabilities in nodes by using the dashboard You can use the Vulnerability Management view to identify vulnerabilities in your nodes. The identified vulnerabilities include vulnerabilities in core Kubernetes components and container runtimes such as Docker, CRI-O, runC, and containerd.
For more information on operating systems that RHACS can scan, see "Supported operating systems". Procedure In the RHACS portal, go to Vulnerability Management Dashboard . Select Nodes on the header to view a list of all the CVEs affecting your nodes. Select a node from the list to view details of all CVEs affecting that node. When you select a node, the Node details panel opens for the selected node. The Node view shows in-depth details of the node and includes information about CVEs by CVSS score and fixable CVEs for that node. Select View All on the CVEs by CVSS score widget header to view a list of all the CVEs in the selected node. You can also filter the list of CVEs. To export the fixable CVEs as a CSV file, select Export as CSV under the Node Findings section. Additional resources Supported operating systems 14.4.16. Creating policies to block specific CVEs by using the dashboard You can create new policies or add specific CVEs to an existing policy from the Vulnerability Management view. Procedure Click CVEs from the Vulnerability Management view header. You can select the checkboxes for one or more CVEs, and then click Add selected CVEs to Policy ( add icon) or move the mouse over a CVE in the list, and select the Add icon. For Policy Name : To add the CVE to an existing policy, select an existing policy from the drop-down list box. To create a new policy, enter the name for the new policy, and select Create <policy_name> . Select a value for Severity , either Critical , High , Medium , or Low . Choose the Lifecycle Stage to which your policy is applicable, from Build , or Deploy . You can also select both life-cycle stages. Enter details about the policy in the Description box. Turn off the Enable Policy toggle if you want to create the policy but enable it later. The Enable Policy toggle is on by default. Verify the listed CVEs which are included in this policy. Click Save Policy . 14.5. Scanning RHCOS node hosts For OpenShift Container Platform, Red Hat Enterprise Linux CoreOS (RHCOS) is the only supported operating system for control plane. Whereas, for node hosts, OpenShift Container Platform supports both RHCOS and Red Hat Enterprise Linux (RHEL). With Red Hat Advanced Cluster Security for Kubernetes (RHACS), you can scan RHCOS nodes for vulnerabilities and detect potential security threats. RHACS scans RHCOS RPMs installed on the node host, as part of the RHCOS installation, for any known vulnerabilities. First, RHACS analyzes and detects RHCOS components. Then it matches vulnerabilities for identified components by using RHEL and OpenShift 4.X Open Vulnerability and Assessment Language (OVAL) v2 security data streams. Note If you installed RHACS by using the roxctl CLI, you must manually enable the RHCOS node scanning features. When you use Helm or Operator installation methods on OpenShift Container Platform, this feature is enabled by default. Additional resources RHEL Versions Utilized by RHEL CoreOS and OCP 14.5.1. Enabling RHCOS node scanning If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS). Prerequisites For scanning RHCOS node hosts of the Secured cluster, you must have installed Secured cluster on OpenShift Container Platform 4.11 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . 
For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . Procedure Run one of the following commands to update the compliance container. For a default compliance container with metrics disabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' For a compliance container with Prometheus metrics enabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' Update the Collector DaemonSet (DS) by taking the following steps: Add new volume mounts to Collector DS by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}' Add the new NodeScanner container by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.5.6","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}' 14.5.2. 
Analysis and detection When you use RHACS with OpenShift Container Platform, RHACS creates two coordinating containers for analysis and detection: the Compliance container and the Node-inventory container. The Compliance container was already a part of earlier RHACS versions. However, the Node-inventory container is new with RHACS 4.0 and works only with OpenShift Container Platform cluster nodes. Upon start-up, the Compliance and Node-inventory containers begin the first inventory scan of Red Hat Enterprise Linux CoreOS (RHCOS) software components within five minutes. The Node-inventory container scans the node's file system to identify installed RPM packages and report on RHCOS software components. Afterward, inventory scanning occurs at periodic intervals, typically every four hours. You can customize the default interval by configuring the ROX_NODE_SCANNING_INTERVAL environment variable for the Compliance container. 14.5.3. Vulnerability matching Central services, which include Central and Scanner, perform vulnerability matching. Scanner uses Red Hat's Open Vulnerability and Assessment Language (OVAL) v2 security data streams to match vulnerabilities on Red Hat Enterprise Linux CoreOS (RHCOS) software components. Unlike the earlier versions, RHACS 4.0 no longer uses the Kubernetes node metadata to find the kernel and container runtime versions. Instead, it uses the installed RHCOS RPMs to assess that information. 14.5.4. Related environment variables You can use the following environment variables to configure RHCOS node scanning on RHACS. Table 14.4. Node-inventory configuration Environment Variable Description ROX_NODE_SCANNING_CACHE_TIME The time after which a cached inventory is considered outdated. Defaults to 90% of ROX_NODE_SCANNING_INTERVAL , which is 3h36m . ROX_NODE_SCANNING_INITIAL_BACKOFF The initial time in seconds a node scan will be delayed if a backoff file is found. The default value is 30s . ROX_NODE_SCANNING_MAX_BACKOFF The upper limit of backoff. The default value is 5m , which is 50% of the Kubernetes restart policy stability timer. Table 14.5. Compliance configuration Environment Variable Description ROX_NODE_SCANNING_INTERVAL The base value of the interval duration between node scans. The default value is 4h . ROX_NODE_SCANNING_INTERVAL_DEVIATION The duration by which node scans can deviate from the base interval time. The maximum value is limited by ROX_NODE_SCANNING_INTERVAL . ROX_NODE_SCANNING_MAX_INITIAL_WAIT The maximum wait time before the first node scan, which is randomly generated. You can set this value to 0 to disable the initial node scanning wait time. The default value is 5m . 14.5.5. Identifying vulnerabilities in nodes by using the dashboard You can use the Vulnerability Management view to identify vulnerabilities in your nodes. The identified vulnerabilities include vulnerabilities in core Kubernetes components and container runtimes such as Docker, CRI-O, runC, and containerd. For more information on operating systems that RHACS can scan, see "Supported operating systems". Procedure In the RHACS portal, go to Vulnerability Management Dashboard . Select Nodes on the header to view a list of all the CVEs affecting your nodes. Select a node from the list to view details of all CVEs affecting that node. When you select a node, the Node details panel opens for the selected node. The Node view shows in-depth details of the node and includes information about CVEs by CVSS score and fixable CVEs for that node.
Select View All on the CVEs by CVSS score widget header to view a list of all the CVEs in the selected node. You can also filter the list of CVEs. To export the fixable CVEs as a CSV file, select Export as CSV under the Node Findings section. 14.5.6. Viewing Node CVEs You can identify vulnerabilities in your nodes by using RHACS. The vulnerabilities that are identified include the following: Vulnerabilities in core Kubernetes components Vulnerabilities in container runtimes such as Docker, CRI-O, runC, and containerd For more information about operating systems that RHACS can scan, see "Supported operating systems". Procedure In the RHACS portal, click Vulnerability Management Node CVEs . To view the data, do any of the following tasks: To view a list of all the CVEs affecting all of your nodes, select <number> CVEs . To view a list of nodes that contain CVEs, select <number> Nodes . Optional: To filter CVEs according to entity, select the appropriate filters and attributes. To add more filtering criteria, follow these steps: Select the entity or attribute from the list. Depending on your choices, enter the appropriate information such as text, or select a date or object. Click the right arrow icon. Optional: Select additional entities and attributes, and then click the right arrow icon to add them. The filter entities and attributes are listed in the following table. Table 14.6. CVE filtering Entity Attributes Node Name : The name of the node. Operating system : The operating system of the node, for example, Red Hat Enterprise Linux (RHEL). Label : The label of the node. Annotation : The annotation for the node. Scan time : The scan date of the node. CVE Name : The name of the CVE. Discovered time : The date when RHACS discovered the CVE. CVSS : The severity level for the CVE. You can select from the following options for the severity level: is greater than is greater than or equal to is equal to is less than or equal to is less than Node Component Name : The name of the component. Version : The version of the component, for example, 4.15.0-2024 . You can use this to search for a specific version of a component, for example, in conjunction with a component name. Cluster Name : The name of the cluster. Label : The label for the cluster. Type : The type of cluster, for example, OCP. Platform type : The type of platform, for example, OpenShift 4 cluster. Optional: To refine the list of results, do any of the following tasks: Click CVE severity , and then select one or more levels. Click CVE status , and then select Fixable or Not fixable . Optional: To view the details of the node and information about the CVEs according to the CVSS score and fixable CVEs for that node, click a node name in the list of nodes.
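As a quick sanity check after applying the collector DaemonSet patches in "Enabling RHCOS node scanning", you can confirm that the node-inventory container and its volumes were added. This check is an illustrative sketch rather than part of the official procedure; it uses standard oc commands and assumes the default stackrox namespace:
oc -n stackrox get daemonset/collector -o jsonpath='{.spec.template.spec.containers[*].name}'
oc -n stackrox rollout status daemonset/collector
The first command should list collector, compliance, and node-inventory among the container names; the second waits for the patched DaemonSet to finish rolling out.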
[ "{\"result\": {\"deployment\": {...}, \"images\": [...]}} {\"result\": {\"deployment\": {...}, \"images\": [...]}}", "curl -H \"Authorization: Bearer USDROX_API_TOKEN\" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads", "curl -H \"Authorization: Bearer USDROX_API_TOKEN\" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads?timeout=60", "curl -H \"Authorization: Bearer USDROX_API_TOKEN\" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads?query=Deployment%3Aapp%2BNamespace%3Adefault", "oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\"disabled\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'", "oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\":9091\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'", "oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"volumes\":[{\"name\":\"tmp-volume\",\"emptyDir\":{}},{\"name\":\"cache-volume\",\"emptyDir\":{\"sizeLimit\":\"200Mi\"}}]}}}}'", "oc -n stackrox patch daemonset/collector -p 
'{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"command\":[\"/scanner\",\"--nodeinventory\",\"--config=\",\"\"],\"env\":[{\"name\":\"ROX_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"ROX_CLAIR_V4_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_COMPLIANCE_OPERATOR_INTEGRATION\",\"value\":\"true\"},{\"name\":\"ROX_CSV_EXPORT\",\"value\":\"false\"},{\"name\":\"ROX_DECLARATIVE_CONFIGURATION\",\"value\":\"false\"},{\"name\":\"ROX_INTEGRATIONS_AS_CONFIG\",\"value\":\"false\"},{\"name\":\"ROX_NETPOL_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_DETECTION_BASELINE_SIMULATION\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_GRAPH_PATTERNFLY\",\"value\":\"true\"},{\"name\":\"ROX_NODE_SCANNING_CACHE_TIME\",\"value\":\"3h36m\"},{\"name\":\"ROX_NODE_SCANNING_INITIAL_BACKOFF\",\"value\":\"30s\"},{\"name\":\"ROX_NODE_SCANNING_MAX_BACKOFF\",\"value\":\"5m\"},{\"name\":\"ROX_PROCESSES_LISTENING_ON_PORT\",\"value\":\"false\"},{\"name\":\"ROX_QUAY_ROBOT_ACCOUNTS\",\"value\":\"true\"},{\"name\":\"ROX_ROXCTL_NETPOL_GENERATE\",\"value\":\"true\"},{\"name\":\"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS\",\"value\":\"false\"},{\"name\":\"ROX_SYSLOG_EXTRA_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_SYSTEM_HEALTH_PF\",\"value\":\"false\"},{\"name\":\"ROX_VULN_MGMT_WORKLOAD_CVES\",\"value\":\"false\"}],\"image\":\"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.5.6\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"node-inventory\",\"ports\":[{\"containerPort\":8444,\"name\":\"grpc\",\"protocol\":\"TCP\"}],\"volumeMounts\":[{\"mountPath\":\"/host\",\"name\":\"host-root-ro\",\"readOnly\":true},{\"mountPath\":\"/tmp/\",\"name\":\"tmp-volume\"},{\"mountPath\":\"/cache\",\"name\":\"cache-volume\"}]}]}}}}'" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/operating/managing-vulnerabilities
Chapter 8. Updating existing Kubernetes storage objects
Chapter 8. Updating existing Kubernetes storage objects Storage version migration is used to update existing objects in the cluster from their current version to the latest version. The Kube Storage Version Migrator embedded controller is used in MicroShift to migrate resources without having to recreate those resources. Either you or a controller can create a StorageVersionMigration custom resource (CR) that requests a migration through the Migrator Controller. 8.1. Updating stored data to the latest storage version Updating stored data to the latest Kubernetes storage version is called storage migration. For example, updating from v1beta1 to v1beta2 is a storage migration. To update your storage version, use the following procedure. Procedure Either you or any controller that has support for the StorageVersionMigration API must trigger a migration request. Use the following example request for reference: Example request apiVersion: migration.k8s.io/v1alpha1 kind: StorageVersionMigration metadata: name: snapshot-v1 spec: resource: group: snapshot.storage.k8s.io resource: volumesnapshotclasses 1 version: v1 2 1 You must use the plural name of the resource. 2 The version being updated to. The progress of the migration is posted to the StorageVersionMigration status. Note Failures can occur because of a misnamed group or resource. Migration failures can also occur when there is an incompatibility between the current and latest versions.
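The following commands are a minimal sketch of how you might apply the example request and follow its progress; the file name snapshot-v1.yaml is an assumption, and the commands use the standard oc client:
oc apply -f snapshot-v1.yaml
oc get storageversionmigration snapshot-v1 -o yaml
Inspect the status section of the returned object to see whether the migration is still running, has succeeded, or has failed.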
[ "apiVersion: migration.k8s.io/v1alpha1 kind: StorageVersionMigration metadata: name: snapshot-v1 spec: resource: group: snapshot.storage.k8s.io resource: volumesnapshotclasses 1 version: v1 2" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/storage/microshift-storage-migration
3.7.2. Using Implementations of TLS
3.7.2. Using Implementations of TLS Red Hat Enterprise Linux is distributed with several full-featured implementations of TLS . In this section, the configuration of OpenSSL and GnuTLS is described. See Section 3.7.3, "Configuring Specific Applications" for instructions on how to configure TLS support in individual applications. The available TLS implementations offer support for various cipher suites that define all the elements that come together when establishing and using TLS -secured communications. Use the tools included with the different implementations to list and specify cipher suites that provide the best possible security for your use case while considering the recommendations outlined in Section 3.7.1, "Choosing Algorithms to Enable" . The resulting cipher suites can then be used to configure the way individual applications negotiate and secure connections. Important Be sure to check your settings following every update or upgrade of the TLS implementation you use or the applications that utilize that implementation. New versions may introduce new cipher suites that you do not want to have enabled and that your current configuration does not disable. 3.7.2.1. Working with Cipher Suites in OpenSSL OpenSSL is a toolkit and a cryptography library that support the SSL and TLS protocols. On Red Hat Enterprise Linux, a configuration file is provided at /etc/pki/tls/openssl.cnf . The format of this configuration file is described in config (1) . To get a list of all cipher suites supported by your installation of OpenSSL , use the openssl command with the ciphers subcommand as follows: Pass other parameters (referred to as cipher strings and keywords in OpenSSL documentation) to the ciphers subcommand to narrow the output. Special keywords can be used to only list suites that satisfy a certain condition. For example, to only list suites that are defined as belonging to the HIGH group, use the following command: See the ciphers (1) manual page for a list of available keywords and cipher strings. To obtain a list of cipher suites that satisfy the recommendations outlined in Section 3.7.1, "Choosing Algorithms to Enable" , use a command similar to the following: The above command omits all insecure ciphers, gives preference to ephemeral elliptic curve Diffie-Hellman key exchange and ECDSA ciphers, and omits RSA key exchange (thus ensuring perfect forward secrecy ). Note that this is a rather strict configuration, and it might be necessary to relax the conditions in real-world scenarios to allow for a compatibility with a broader range of clients.
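Beyond listing the suites your installation supports, it can be useful to check what a running server actually negotiates. The following command is a generic illustration using the standard openssl s_client tool; example.com is a placeholder host, and the cipher string mirrors the strict example above:
openssl s_client -connect example.com:443 -cipher 'kEECDH+aECDSA+AES:kEECDH+AES+aRSA:kEDH+aRSA+AES' < /dev/null | grep -E 'Protocol|Cipher'
If the handshake succeeds, the output shows the protocol version and the cipher suite that the server selected from the offered list.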
[ "~]USD openssl ciphers -v 'ALL:COMPLEMENTOFALL'", "~]USD openssl ciphers -v 'HIGH'", "~]USD openssl ciphers -v 'kEECDH+aECDSA+AES:kEECDH+AES+aRSA:kEDH+aRSA+AES' | column -t ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD ECDHE-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA384 ECDHE-ECDSA-AES256-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA1 ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA256 ECDHE-ECDSA-AES128-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA1 ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD ECDHE-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA384 ECDHE-RSA-AES256-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA1 ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA256 ECDHE-RSA-AES128-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA1 DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(256) Mac=AEAD DHE-RSA-AES256-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(256) Mac=SHA256 DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1 DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(128) Mac=AEAD DHE-RSA-AES128-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(128) Mac=SHA256 DHE-RSA-AES128-SHA SSLv3 Kx=DH Au=RSA Enc=AES(128) Mac=SHA1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-using_implementations_of_tls
Appendix C. Reverting Satellite Server to download content from Red Hat CDN
Appendix C. Reverting Satellite Server to download content from Red Hat CDN If your environment changes from disconnected to connected, you can reconfigure a disconnected Satellite Server to download content directly from the Red Hat CDN. Procedure In the Satellite web UI, navigate to Content > Subscriptions . Click Manage Manifest . Switch to the CDN Configuration tab. Select Red Hat CDN . Edit the URL field to point to the Red Hat CDN URL: https://cdn.redhat.com Click Update . Satellite Server is now configured to download content from the Red Hat CDN the next time that it synchronizes repositories. CLI procedure Log in to the Satellite Server using SSH. Use Hammer to reconfigure the CDN:
[ "hammer organization configure-cdn --name=\" My_Organization \" --type=redhat_cdn" ]
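To confirm the change from the CLI, you can review the organization details afterward. This verification step is a sketch and not part of the documented procedure; the exact fields shown depend on your Satellite version:
hammer organization info --name "My_Organization"
On recent Satellite versions the output includes the organization's CDN configuration, which should now show the Red Hat CDN type and URL.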
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_satellite_server_in_a_disconnected_network_environment/reverting_server_to_download_content_from_red_hat_cdn_satellite
4.14. Hewlett-Packard BladeSystem
4.14. Hewlett-Packard BladeSystem Table 4.15, "HP BladeSystem (Red Hat Enterprise Linux 6.4 and later)" lists the fence device parameters used by fence_hpblade , the fence agent for HP BladeSystem. Table 4.15. HP BladeSystem (Red Hat Enterprise Linux 6.4 and later) luci Field cluster.conf Attribute Description Name name The name assigned to the HP Bladesystem device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the HP BladeSystem device. IP Port (optional) ipport The TCP port to use to connect to the device. Login login The login name used to access the HP BladeSystem device. This parameter is required. Password passwd The password used to authenticate the connection to the fence device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Force Command Prompt cmd_prompt The command prompt to use. The default value is '\USD'. Missing port returns OFF instead of failure missing_as_off Missing port returns OFF instead of failure. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Figure 4.11, "HP BladeSystem" shows the configuration screen for adding an HP BladeSystem fence device. Figure 4.11. HP BladeSystem The following command creates a fence device instance for a BladeSystem device: The following is the cluster.conf entry for the fence_hpblade device:
[ "ccs -f cluster.conf --addfencedev hpbladetest1 agent=fence_hpblade cmd_prompt=c7000oa> ipaddr=192.168.0.1 login=root passwd=password123 missing_as_off=on power_wait=60", "<fencedevices> <fencedevice agent=\"fence_hpblade\" cmd_prompt=\"c7000oa>\" ipaddr=\"hpbladeaddr\" ipport=\"13456\" login=\"root\" missing_as_off=\"on\" name=\"hpbladetest1\" passwd=\"password123\" passwd_script=\"hpbladepwscr\" power_wait=\"60\"/> </fencedevices>" ]
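Before relying on the device for cluster fencing, administrators often test it directly from the command line. The following invocation is a sketch that uses the standard fence-agent options; the address, credentials, and plug number are placeholders taken from the example above:
fence_hpblade --ip=192.168.0.1 --username=root --password=password123 --command-prompt='c7000oa>' --action=status --plug=1
A successful run reports the power status of the specified blade, which confirms that the connection parameters used in cluster.conf are usable.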
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-hpblade-ca
Chapter 3. The pcs Command Line Interface
Chapter 3. The pcs Command Line Interface The pcs command line interface controls and configures corosync and Pacemaker by providing an interface to the corosync.conf and cib.xml files. The general format of the pcs command is as follows. 3.1. The pcs Commands The pcs commands are as follows. cluster Configure cluster options and nodes. For information on the pcs cluster command, see Chapter 4, Cluster Creation and Administration . resource Create and manage cluster resources. For information on the pcs resource command, see Chapter 6, Configuring Cluster Resources , Chapter 8, Managing Cluster Resources , and Chapter 9, Advanced Configuration . stonith Configure fence devices for use with Pacemaker. For information on the pcs stonith command, see Chapter 5, Fencing: Configuring STONITH . constraint Manage resource constraints. For information on the pcs constraint command, see Chapter 7, Resource Constraints . property Set Pacemaker properties. For information on setting properties with the pcs property command, see Chapter 12, Pacemaker Cluster Properties . status View current cluster and resource status. For information on the pcs status command, see Section 3.5, "Displaying Status" . config Display the complete cluster configuration in user-readable form. For information on the pcs config command, see Section 3.6, "Displaying the Full Cluster Configuration" .
[ "pcs [-f file ] [-h] [ commands ]" ]
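A few common invocations, shown here as illustrative examples rather than an exhaustive reference, give a sense of how these subcommands combine with the general format:
pcs status
pcs config
pcs property set stonith-enabled=true
pcs -f testfile.cib resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24
The first two display the current status and the full configuration, the third sets a Pacemaker property, and the last uses the -f option to make a change against a saved CIB file (testfile.cib is a placeholder) instead of the live cluster.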
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-pcscommand-haar
Managing access and permissions
Managing access and permissions Red Hat Quay 3.13 Managing access and permissions Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/managing_access_and_permissions/index
11.4. Preparation for IBM Power Systems servers
11.4. Preparation for IBM Power Systems servers Important Ensure that the real-base boot parameter is set to c00000 , otherwise you might see errors such as: IBM Power Systems servers offer many options for partitioning, virtual or native devices, and consoles. If you are using a non-partitioned system, you do not need any pre-installation setup. For systems using the HVSI serial console, hook up your console to the T2 serial port. If using a partitioned system the steps to create the partition and start the installation are largely the same. You should create the partition at the HMC and assign some CPU and memory resources, as well as SCSI and Ethernet resources, which can be either virtual or native. The HMC create partition wizard steps you through the creation. For more information on creating the partition, refer to the Partitioning for Linux with an HMC PDF in the IBM Systems Hardware Information Center at: http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/topic/iphbi_p5/iphbibook.pdf If you are using virtual SCSI resources, rather than native SCSI, you must configure a 'link' to the virtual SCSI serving partition, and then configure the virtual SCSI serving partition itself. You create a 'link' between the virtual SCSI client and server slots using the HMC. You can configure a virtual SCSI server on either Virtual I/O Server (VIOS) or IBM i, depending on which model and options you have. If you are installing using Intel iSCSI Remote Boot, all attached iSCSI storage devices must be disabled. Otherwise, the installation will succeed but the installed system will not boot. For more information on using virtual devices, see the IBM Redbooks publication Virtualizing an Infrastructure with System p and Linux at: http://publib-b.boulder.ibm.com/abstracts/sg247499.html Once you have your system configured, you need to Activate from the HMC or power it on. Depending on what type of install you are doing, you may need to configure SMS to correctly boot the system into the installation program.
[ "DEFAULT CATCH!, exception-handler=fff00300" ]
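If you see the DEFAULT CATCH! error above, one common remedy, shown here only as a sketch (consult your system's firmware documentation for the authoritative steps), is to check and set real-base from the Open Firmware prompt before booting the installer:
printenv real-base
setenv real-base c00000
After setting the variable, reboot the system and retry the installation.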
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch11s04
Chapter 4. General Updates
Chapter 4. General Updates In-place upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 An in-place upgrade offers a way of upgrading a system to a new major release of Red Hat Enterprise Linux by replacing the existing operating system. To perform an in-place upgrade, use the Preupgrade Assistant , a utility that checks the system for upgrade issues before running the actual upgrade, and that also provides additional scripts for the Red Hat Upgrade Tool . When you have solved all the problems reported by the Preupgrade Assistant , use the Red Hat Upgrade Tool to upgrade the system. For details regarding procedures and supported scenarios, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Migration_Planning_Guide/chap-Red_Hat_Enterprise_Linux-Migration_Planning_Guide-Upgrading.html and https://access.redhat.com/solutions/637583 . Note that the Preupgrade Assistant and the Red Hat Upgrade Tool are available in the Red Hat Enterprise Linux 6 Extras channel, see https://access.redhat.com/support/policy/updates/extras . (BZ#1432080) cloud-init moved to the Base channel As of Red Hat Enterprise Linux 7.4, the cloud-init package and its dependencies have been moved from the Red Hat Common channel to the Base channel. Cloud-init is a tool that handles early initialization of a system using metadata provided by the environment. It is typically used to configure servers booting in a cloud environment, such as OpenStack or Amazon Web Services. Note that the cloud-init package has not been updated since the latest version provided through the Red Hat Common channel. (BZ#1427280)
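As a rough sketch of the workflow described above (the package names and the repository URL placeholder are assumptions; always follow the linked upgrade documentation for the exact, supported steps):
yum install preupgrade-assistant preupgrade-assistant-contents redhat-upgrade-tool
preupg
redhat-upgrade-tool --network 7.0 --instrepo <repository_url>
Run preupg first, resolve every reported issue, and only then start the upgrade with the Red Hat Upgrade Tool.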
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/new_features_general_updates
Part II. Managing IP Networking
Part II. Managing IP Networking This documentation part provides detailed instruction on how to configure and manage networking in Red Hat Enterprise Linux.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/part-managing_ip_networking
Chapter 90. TgzArtifact schema reference
Chapter 90. TgzArtifact schema reference Used in: Plugin Property Description url URL of the artifact which will be downloaded. AMQ Streams does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required for jar , zip , tgz and other artifacts. Not applicable to the maven artifact type. string sha512sum SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. Not applicable to the maven artifact type. string insecure By default, connections using TLS are verified to check they are secure. The server certificate used must be valid, trusted, and contain the server name. By setting this option to true , all TLS verification is disabled and the artifact will be downloaded, even when the server is considered insecure. boolean type Must be tgz . string
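For illustration, a tgz artifact is typically referenced from the plugins list of a KafkaConnect build specification. The following fragment is a hedged sketch: the connector name, image, URL, and checksum are placeholders, and the rest of the KafkaConnect resource is abbreviated:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ... other KafkaConnect configuration ...
  build:
    output:
      type: docker
      image: my-registry.example.com/my-org/my-connect:latest
    plugins:
      - name: my-connector
        artifacts:
          - type: tgz
            url: https://example.com/plugins/my-connector.tar.gz
            sha512sum: 589d9f... # placeholder checksum; omit only if you accept unverified downloads
Supplying sha512sum ensures the downloaded archive is verified during the container build, as described above.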
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-tgzartifact-reference
D.20. Status View
D.20. Status View To open Teiid Designer's Status View , click the main menu's Window > Show View > Other... and then click the Teiid Designer > Status view in the dialog. The Status View provides a quick overview status of the selected project. A sample Status view for a project is shown below: Figure D.32. Status View The status view is broken down into common project areas: Source Connections - all Source Connections are fully defined. Sources - Source Models exist. XML Schema - XML Schemas exist. Views - View Models exist. VDBs - VDBs exist and are deployable. Model Validation (Status) - all Models pass validation. Test - all defined VDBs pass validation. The status of each area is denoted by an icon: A green check indicates OK, a red x indicates errors and a warning icon indicates potential problems. The project can be changed by selecting the Change Project button.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/status_view
Chapter 9. Message delivery
Chapter 9. Message delivery 9.1. Sending messages To send a message, override the on_sendable event handler and call the Sender.send() method. The sendable event fires when the Sender has enough credit to send at least one message. Example: Sending messages class ExampleHandler(MessagingHandler): def on_start(self, event): conn = event.container.connect("amqp://example.com") sender = event.container.create_sender(conn, "jobs") def on_sendable(self, event): message = Message("job-content") event.sender.send(message) For more information, see the send.py example . 9.2. Tracking sent messages When a message is sent, the sender can keep a reference to the delivery object representing the transfer. After the message is delivered, the receiver accepts or rejects it. The sender is notified of the outcome for each delivery. To monitor the outcome of a sent message, override the on_accepted and on_rejected event handlers and map the delivery state update to the delivery returned from send() . Example: Tracking sent messages def on_sendable(self, event): message = Message(self.message_body) delivery = event.sender.send(message) def on_accepted(self, event): print("Delivery", event.delivery , "is accepted") def on_rejected(self, event): print("Delivery", event.delivery , "is rejected") 9.3. Receiving messages To receive a message, create a receiver and override the on_message event handler. Example: Receiving messages class ExampleHandler(MessagingHandler): def on_start(self, event): conn = event.container.connect("amqp://example.com") receiver = event.container.create_receiver(conn, "jobs") def on_message(self, event): print("Received message", event.message , "from", event.receiver ) For more information, see the receive.py example . 9.4. Acknowledging received messages To explicitly accept or reject a delivery, use the Delivery.update() method with the ACCEPTED or REJECTED state in the on_message event handler. Example: Acknowledging received messages def on_message(self, event): try: process_message(event.message) event.delivery.update(ACCEPTED) except: event.delivery.update(REJECTED) By default, if you do not explicitly acknowledge a delivery, then the library accepts it after on_message returns. To disable this behavior, set the auto_accept receiver option to false.
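In the Python client this is typically done on the handler rather than on the receiver object: the MessagingHandler constructor accepts an auto_accept argument. A minimal sketch, with imports omitted to match the examples above:
class ExampleHandler(MessagingHandler):
    def __init__(self):
        super(ExampleHandler, self).__init__(auto_accept=False)

    def on_message(self, event):
        # The library no longer accepts the delivery automatically after
        # on_message returns, so acknowledge it explicitly.
        process_message(event.message)
        event.delivery.update(ACCEPTED)
With auto_accept disabled, a delivery remains unacknowledged until your code calls event.delivery.update(), as shown in the example above.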
[ "class ExampleHandler(MessagingHandler): def on_start(self, event): conn = event.container.connect(\"amqp://example.com\") sender = event.container.create_sender(conn, \"jobs\") def on_sendable(self, event): message = Message(\"job-content\") event.sender.send(message)", "def on_sendable(self, event): message = Message(self.message_body) delivery = event.sender.send(message) def on_accepted(self, event): print(\"Delivery\", event.delivery , \"is accepted\") def on_rejected(self, event): print(\"Delivery\", event.delivery , \"is rejected\")", "class ExampleHandler(MessagingHandler): def on_start(self, event): conn = event.container.connect(\"amqp://example.com\") receiver = event.container.create_receiver(conn, \"jobs\") def on_message(self, event): print(\"Received message\", event.message , \"from\", event.receiver )", "def on_message(self, event): try: process_message(event.message) event.delivery.update(ACCEPTED) except: event.delivery.update(REJECTED)" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_python_client/message_delivery
Chapter 3. Configuration fields
Chapter 3. Configuration fields This section describes the both required and optional configuration fields when deploying Red Hat Quay. 3.1. Required configuration fields The fields required to configure Red Hat Quay are covered in the following sections: General required fields Storage for images Database for metadata Redis for build logs and user events Tag expiration options 3.2. Automation options The following sections describe the available automation options for Red Hat Quay deployments: Pre-configuring Red Hat Quay for automation Using the API to create the first user 3.3. Optional configuration fields Optional fields for Red Hat Quay can be found in the following sections: Basic configuration SSL LDAP Repository mirroring Quota management Security scanner Helm Action log Build logs Dockerfile build OAuth Configuring nested repositories Adding other OCI media types to Quay Mail User Recaptcha ACI JWT App tokens Miscellaneous User interface v2 IPv6 configuration field Legacy options 3.4. General required fields The following table describes the required configuration fields for a Red Hat Quay deployment: Table 3.1. General required fields Field Type Description AUTHENTICATION_TYPE (Required) String The authentication engine to use for credential authentication. Values: One of Database , LDAP , JWT , Keystone , OIDC Default: Database PREFERRED_URL_SCHEME (Required) String The URL scheme to use when accessing Red Hat Quay. Values: One of http , https Default: http SERVER_HOSTNAME (Required) String The URL at which Red Hat Quay is accessible, without the scheme. Example: quay-server.example.com DATABASE_SECRET_KEY (Required) String Key used to encrypt sensitive fields within the database. This value should never be changed once set, otherwise all reliant fields, for example, repository mirror username and password configurations, are invalidated. This value is set automatically by the Red Hat Quay Operator for Operator-based deployments. For standalone deployments, administrators can provide their own key using Open SSL or a similar tool. Key length should not exceed 63 characters. SECRET_KEY (Required) String Key used to encrypt the session cookie and the CSRF token needed for correct interpretation of the user session. The value should not be changed when set. Should be persistent across all Red Hat Quay instances. If not persistent across all instances, login failures and other errors related to session persistence might occur. SETUP_COMPLETE (Required) Boolean This is an artifact left over from earlier versions of the software and currently it must be specified with a value of true . 3.5. Database configuration This section describes the database configuration fields available for Red Hat Quay deployments. 3.5.1. Database URI With Red Hat Quay, connection to the database is configured by using the required DB_URI field. The following table describes the DB_URI configuration field: Table 3.2. Database URI Field Type Description DB_URI (Required) String The URI for accessing the database, including any credentials. Example DB_URI field: postgresql://quayuser:[email protected]:5432/quay 3.5.2. Database connection arguments Optional connection arguments are configured by the DB_CONNECTION_ARGS parameter. Some of the key-value pairs defined under DB_CONNECTION_ARGS are generic, while others are database specific. The following table describes database connection arguments: Table 3.3. 
Database connection arguments Field Type Description DB_CONNECTION_ARGS Object Optional connection arguments for the database, such as timeouts and SSL/TLS. .autorollback Boolean Whether to use auto-rollback connections. Should always be true .threadlocals Boolean Whether to use thread-local connections. Should always be true 3.5.2.1. PostgreSQL SSL/TLS connection arguments With SSL/TLS, configuration depends on the database you are deploying. The following example shows a PostgreSQL SSL/TLS configuration: DB_CONNECTION_ARGS: sslmode: verify-ca sslrootcert: /path/to/cacert The sslmode option determines whether, or with what priority, a secure SSL/TLS TCP/IP connection will be negotiated with the server. There are six modes: Table 3.4. SSL/TLS options Mode Description disable Your configuration only tries non-SSL/TLS connections. allow Your configuration first tries a non-SSL/TLS connection. Upon failure, tries an SSL/TLS connection. prefer (Default) Your configuration first tries an SSL/TLS connection. Upon failure, tries a non-SSL/TLS connection. require Your configuration only tries an SSL/TLS connection. If a root CA file is present, it verifies the certificate in the same way as if verify-ca was specified. verify-ca Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted certificate authority (CA). verify-full Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted CA and that the requested server hostname matches that in the certificate. For more information on the valid arguments for PostgreSQL, see Database Connection Control Functions . 3.5.2.2. MySQL SSL/TLS connection arguments The following example shows a sample MySQL SSL/TLS configuration: DB_CONNECTION_ARGS: ssl: ca: /path/to/cacert Information on the valid connection arguments for MySQL is available at Connecting to the Server Using URI-Like Strings or Key-Value Pairs . 3.6. Image storage This section details the image storage features and configuration fields that are available with Red Hat Quay. 3.6.1. Image storage features The following table describes the image storage features for Red Hat Quay: Table 3.5. Storage config features Field Type Description FEATURE_REPO_MIRROR Boolean If set to true, enables repository mirroring. Default: false FEATURE_PROXY_STORAGE Boolean Whether to proxy all direct download URLs in storage through NGINX. Default: false FEATURE_STORAGE_REPLICATION Boolean Whether to automatically replicate between storage engines. Default: false 3.6.2. Image storage configuration fields The following table describes the image storage configuration fields for Red Hat Quay: Table 3.6. Storage config fields Field Type Description DISTRIBUTED_STORAGE_CONFIG (Required) Object Configuration for storage engine(s) to use in Red Hat Quay. Each key represents a unique identifier for a storage engine. The value consists of a tuple of (key, value) forming an object describing the storage engine parameters. Default: [] DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS (Required) Array of string The list of storage engine(s) (by ID in DISTRIBUTED_STORAGE_CONFIG ) whose images should be fully replicated, by default, to all other storage engines. DISTRIBUTED_STORAGE_PREFERENCE (Required) Array of string The preferred storage engine(s) (by ID in DISTRIBUTED_STORAGE_CONFIG ) to use. A preferred engine means it is first checked for pulling and images are pushed to it.
Default: false MAXIMUM_LAYER_SIZE String Maximum allowed size of an image layer. Pattern : ^[0-9]+(G|M)USD Example : 100G Default: 20G 3.6.3. Local storage The following YAML shows a sample configuration using local storage: DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default 3.6.4. OpenShift Container Storage/NooBaa The following YAML shows a sample configuration using an OpenShift Container Storage/NooBaa instance: DISTRIBUTED_STORAGE_CONFIG: rhocsStorage: - RHOCSStorage - access_key: access_key_here secret_key: secret_key_here bucket_name: quay-datastore-9b2108a3-29f5-43f2-a9d5-2872174f9a56 hostname: s3.openshift-storage.svc.cluster.local is_secure: 'true' port: '443' storage_path: /datastorage/registry maximum_chunk_size_mb: 100 1 server_side_assembly: true 2 1 Defines the maximum chunk size, in MB, for the final copy. Has no effect if server_side_assembly is set to false . 2 Optional. Whether Red Hat Quay should try and use server side assembly and the final chunked copy instead of client assembly. Defaults to true . 3.6.5. Ceph Object Gateway/RadosGW storage The following YAML shows a sample configuration using Ceph/RadosGW. Note RadosGW is an on-premises S3-compatible storage solution. Note that this differs from general AWS S3Storage , which is specifically designed for use with Amazon Web Services S3. This means that RadosGW implements the S3 API and requires credentials like access_key , secret_key , and bucket_name . For more information about Ceph Object Gateway and the S3 API, see Ceph Object Gateway and the S3 API . RadosGW with general s3 access DISTRIBUTED_STORAGE_CONFIG: radosGWStorage: 1 - RadosGWStorage - access_key: <access_key_here> bucket_name: <bucket_name_here> hostname: <hostname_here> is_secure: true port: '443' secret_key: <secret_key_here> storage_path: /datastorage/registry maximum_chunk_size_mb: 100 2 server_side_assembly: true 3 1 Used for general s3 access. Note that general s3 access is not strictly limited to Amazon Web Services (AWS) s3, and can be used with RadosGW or other storage services. For an example of general s3 access using the AWS S3 driver, see "AWS S3 storage". 2 Optional. Defines the maximum chunk size in MB for the final copy. Has no effect if server_side_assembly is set to false . 3 Optional. Whether Red Hat Quay should try and use server side assembly and the final chunked copy instead of client assembly. Defaults to true . 3.6.6. AWS S3 storage The following YAML shows a sample configuration using AWS S3 storage. # ... DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage 1 - host: s3.us-east-2.amazonaws.com s3_access_key: ABCDEFGHIJKLMN s3_secret_key: OL3ABCDEFGHIJKLMN s3_bucket: quay_bucket s3_region: <region> 2 storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default # ... 1 The S3Storage storage driver should only be used for AWS S3 buckets. Note that this differs from general S3 access, where the RadosGW driver or other storage services can be used. For an example, see "Example B: Using RadosGW with general S3 access". 2 Optional. The Amazon Web Services region. Defaults to us-east-1 . 3.6.6.1. AWS STS S3 storage The following YAML shows an example configuration for using Amazon Web Services (AWS) Security Token Service (STS) with Red Hat Quay on OpenShift Container Platform configurations. # ... 
DISTRIBUTED_STORAGE_CONFIG: default: - STSS3Storage - sts_role_arn: <role_arn> 1 s3_bucket: <s3_bucket_name> storage_path: <storage_path> sts_user_access_key: <s3_user_access_key> 2 sts_user_secret_key: <s3_user_secret_key> 3 s3_region: <region> 4 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default # ... 1 The unique Amazon Resource Name (ARN). 2 The generated AWS S3 user access key. 3 The generated AWS S3 user secret key. 4 Optional. The Amazon Web Services region. Defaults to us-east-1 . 3.6.7. Google Cloud Storage The following YAML shows a sample configuration using Google Cloud Storage: DISTRIBUTED_STORAGE_CONFIG: googleCloudStorage: - GoogleCloudStorage - access_key: GOOGQIMFB3ABCDEFGHIJKLMN bucket_name: quay-bucket secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN storage_path: /datastorage/registry boto_timeout: 120 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - googleCloudStorage 1 Optional. The time, in seconds, until a timeout exception is thrown when attempting to read from a connection. The default is 60 seconds. Also encompasses the time, in seconds, until a timeout exception is thrown when attempting to make a connection. The default is 60 seconds. 3.6.8. Azure Storage The following YAML shows a sample configuration using Azure Storage: DISTRIBUTED_STORAGE_CONFIG: azureStorage: - AzureStorage - azure_account_name: azure_account_name_here azure_container: azure_container_here storage_path: /datastorage/registry azure_account_key: azure_account_key_here sas_token: some/path/ endpoint_url: https://[account-name].blob.core.usgovcloudapi.net 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - azureStorage 1 The endpoint_url parameter for Azure storage is optional and can be used with Microsoft Azure Government (MAG) endpoints. If left blank, the endpoint_url will connect to the normal Azure region. As of Red Hat Quay 3.7, you must use the Primary endpoint of your MAG Blob service. Using the Secondary endpoint of your MAG Blob service will result in the following error: AuthenticationErrorDetail:Cannot find the claimed account when trying to GetProperties for the account whusc8-secondary . 3.6.9. Swift storage The following YAML shows a sample configuration using Swift storage: DISTRIBUTED_STORAGE_CONFIG: swiftStorage: - SwiftStorage - swift_user: swift_user_here swift_password: swift_password_here swift_container: swift_container_here auth_url: https://example.org/swift/v1/quay auth_version: 3 os_options: tenant_id: <osp_tenant_id_here> user_domain_name: <osp_domain_name_here> ca_cert_path: /conf/stack/swift.cert" storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - swiftStorage 3.6.10. Nutanix object storage The following YAML shows a sample configuration using Nutanix object storage. DISTRIBUTED_STORAGE_CONFIG: nutanixStorage: #storage config name - RadosGWStorage #actual driver - access_key: access_key_here #parameters secret_key: secret_key_here bucket_name: bucket_name_here hostname: hostname_here is_secure: 'true' port: '443' storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: #must contain name of the storage config - nutanixStorage 3.6.11. IBM Cloud object storage The following YAML shows a sample configuration using IBM Cloud object storage. 
DISTRIBUTED_STORAGE_CONFIG: default: - IBMCloudStorage #actual driver - access_key: <access_key_here> #parameters secret_key: <secret_key_here> bucket_name: <bucket_name_here> hostname: <hostname_here> is_secure: 'true' port: '443' storage_path: /datastorage/registry maximum_chunk_size_mb: 100mb 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - default DISTRIBUTED_STORAGE_PREFERENCE: - default 1 Optional. Recommended to be set to 100mb . 3.6.12. NetApp ONTAP S3 object storage The following YAML shows a sample configuration using NetApp ONTAP S3. DISTRIBUTED_STORAGE_CONFIG: local_us: - RadosGWStorage - access_key: <access_key> bucket_name: <bucket_name> hostname: <host_url_address> is_secure: true port: <port> secret_key: <secret_key> storage_path: /datastorage/registry signature_version: v4 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - local_us DISTRIBUTED_STORAGE_PREFERENCE: - local_us 3.7. Redis configuration fields This section details the configuration fields available for Redis deployments. 3.7.1. Build logs The following build logs configuration fields are available for Redis deployments: Table 3.7. Build logs configuration Field Type Description BUILDLOGS_REDIS (Required) Object Redis connection details for build logs caching. .host (Required) String The hostname at which Redis is accessible. Example: quay-server.example.com .port (Required) Number The port at which Redis is accessible. Example: 6379 .password String The password to connect to the Redis instance. Example: strongpassword .ssl (Optional) Boolean Whether to enable TLS communication between Redis and Quay. Defaults to false. 3.7.2. User events The following user event fields are available for Redis deployments: Table 3.8. User events config Field Type Description USER_EVENTS_REDIS (Required) Object Redis connection details for user event handling. .host (Required) String The hostname at which Redis is accessible. Example: quay-server.example.com .port (Required) Number The port at which Redis is accessible. Example: 6379 .password String The password to connect to the Redis instance. Example: strongpassword .ssl Boolean Whether to enable TLS communication between Redis and Quay. Defaults to false. .ssl_keyfile (Optional) String The name of the key database file, which houses the client certificate to be used. Example: ssl_keyfile: /path/to/server/privatekey.pem .ssl_certfile (Optional) String Used for specifying the file path of the SSL certificate. Example: ssl_certfile: /path/to/server/certificate.pem .ssl_cert_reqs (Optional) String Used to specify the level of certificate validation to be performed during the SSL/TLS handshake. Example: ssl_cert_reqs: CERT_REQUIRED .ssl_ca_certs (Optional) String Used to specify the path to a file containing a list of trusted Certificate Authority (CA) certificates. Example: ssl_ca_certs: /path/to/ca_certs.pem .ssl_ca_data (Optional) String Used to specify a string containing the trusted CA certificates in PEM format. Example: ssl_ca_data: <certificate> .ssl_check_hostname (Optional) Boolean Used when setting up an SSL/TLS connection to a server. It specifies whether the client should check that the hostname in the server's SSL/TLS certificate matches the hostname of the server it is connecting to. Example: ssl_check_hostname: true 3.7.3. 
Example Redis configuration The following YAML shows a sample configuration using Redis with optional SSL/TLS fields: BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 ssl: true USER_EVENTS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 ssl: true ssl_*: <path_location_or_certificate> Note If your deployment uses Azure Cache for Redis and ssl is set to true , the port defaults to 6380 . 3.8. ModelCache configuration options The following options are available on Red Hat Quay for configuring ModelCache. 3.8.1. Memcache configuration option Memcache is the default ModelCache configuration option. With Memcache, no additional configuration is necessary. 3.8.2. Single Redis configuration option The following configuration is for a single Redis instance with optional read-only replicas: DATA_MODEL_CACHE_CONFIG: engine: redis redis_config: primary: host: <host> port: <port> password: <password if ssl is true> ssl: <true | false > replica: host: <host> port: <port> password: <password if ssl is true> ssl: <true | false > 3.8.3. Clustered Redis configuration option Use the following configuration for a clustered Redis instance: DATA_MODEL_CACHE_CONFIG: engine: rediscluster redis_config: startup_nodes: - host: <cluster-host> port: <port> password: <password if ssl: true> read_from_replicas: <true|false> skip_full_coverage_check: <true | false> ssl: <true | false > 3.9. Tag expiration configuration fields The following tag expiration configuration fields are available with Red Hat Quay: Table 3.9. Tag expiration configuration fields Field Type Description FEATURE_GARBAGE_COLLECTION Boolean Whether garbage collection of repositories is enabled. Default: True TAG_EXPIRATION_OPTIONS (Required) Array of string If enabled, the options that users can select for expiration of tags in their namespace. Pattern: ^[0-9]+(y|w|m|d|h|s)USD DEFAULT_TAG_EXPIRATION (Required) String The default, configurable tag expiration time for time machine. Pattern: ^[0-9]+(w|m|d|h|s)USD Default: 2w FEATURE_CHANGE_TAG_EXPIRATION Boolean Whether users and organizations are allowed to change the tag expiration for tags in their namespace. Default: True FEATURE_AUTO_PRUNE Boolean When set to True , enables functionality related to the auto-pruning of tags. Default: False NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES Integer The interval, in minutes, that defines the frequency to re-run notifications for expiring images. Default: 300 DEFAULT_NAMESPACE_AUTOPRUNE_POLICY Object The default organization-wide auto-prune policy. .method: number_of_tags Object The option specifying the number of tags to keep. .value: <integer> Integer When used with method: number_of_tags , denotes the number of tags to keep. For example, to keep two tags, specify 2 . .creation_date Object The option specifying the duration of which to keep tags. .value: <integer> Integer When used with creation_date , denotes how long to keep tags. Can be set to seconds ( s ), days ( d ), months ( m ), weeks ( w ), or years ( y ). Must include a valid integer. For example, to keep tags for one year, specify 1y . AUTO_PRUNING_DEFAULT_POLICY_POLL_PERIOD Integer The period in which the auto-pruner worker runs at the registry level. By default, it is set to run one time per day (one time per 24 hours). Value must be in seconds. 3.9.1. Example tag expiration configuration The following YAML example shows you a sample tag expiration configuration. # ... 
DEFAULT_TAG_EXPIRATION: 2w TAG_EXPIRATION_OPTIONS: - 0s - 1d - 1w - 2w - 4w - 3y # ... 3.9.2. Registry-wide auto-prune policies examples The following YAML examples show you registry-wide auto-pruning examples by both number of tags and creation date. Example registry auto-prune policy by number of tags # ... DEFAULT_NAMESPACE_AUTOPRUNE_POLICY: method: number_of_tags value: 10 1 # ... 1 In this scenario, ten tags remain. Example registry auto-prune policy by creation date # ... DEFAULT_NAMESPACE_AUTOPRUNE_POLICY: method: creation_date value: 1y # ... 3.10. Quota management configuration fields Table 3.10. Quota management configuration Field Type Description FEATURE_QUOTA_MANAGEMENT Boolean Enables configuration, caching, and validation for quota management feature. DEFAULT_SYSTEM_REJECT_QUOTA_BYTES String Enables system default quota reject byte allowance for all organizations. By default, no limit is set. QUOTA_BACKFILL Boolean Enables the quota backfill worker to calculate the size of pre-existing blobs. Default : True QUOTA_TOTAL_DELAY_SECONDS String The time delay for starting the quota backfill. Rolling deployments can cause incorrect totals. This field must be set to a time longer than it takes for the rolling deployment to complete. Default : 1800 PERMANENTLY_DELETE_TAGS Boolean Enables functionality related to the removal of tags from the time machine window. Default : False RESET_CHILD_MANIFEST_EXPIRATION Boolean Resets the expirations of temporary tags targeting the child manifests. With this feature set to True , child manifests are immediately garbage collected. Default : False 3.10.1. Example quota management configuration The following YAML is the suggested configuration when enabling quota management. Quota management YAML configuration FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 RESET_CHILD_MANIFEST_EXPIRATION: true 3.11. Proxy cache configuration fields Table 3.11. Proxy configuration Field Type Description FEATURE_PROXY_CACHE Boolean Enables Red Hat Quay to act as a pull through cache for upstream registries. Default : false 3.12. Robot account configuration fields Table 3.12. Robot account configuration fields Field Type Description ROBOTS_DISALLOW Boolean When set to true , robot accounts are prevented from all interactions, as well as from being created Default : False 3.13. Pre-configuring Red Hat Quay for automation Red Hat Quay supports several configuration options that enable automation. Users can configure these options before deployment to reduce the need for interaction with the user interface. 3.13.1. Allowing the API to create the first user To create the first user, users need to set the FEATURE_USER_INITIALIZE parameter to true and call the /api/v1/user/initialize API. Unlike all other registry API calls that require an OAuth token generated by an OAuth application in an existing organization, the API endpoint does not require authentication. Users can use the API to create a user such as quayadmin after deploying Red Hat Quay, provided no other users have been created. For more information, see Using the API to create the first user . 3.13.2. Enabling general API access Users should set the BROWSER_API_CALLS_XHR_ONLY configuration option to false to allow general access to the Red Hat Quay registry API. 3.13.3. Adding a superuser After deploying Red Hat Quay, users can create a user and give the first user administrator privileges with full permissions. 
Users can configure full permissions in advance by using the SUPER_USER configuration object. For example: # ... SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true SUPER_USERS: - quayadmin # ... 3.13.4. Restricting user creation After you have configured a superuser, you can restrict the ability to create new users to the superuser group by setting FEATURE_USER_CREATION to false . For example: # ... FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false # ... 3.13.5. Enabling new functionality in Red Hat Quay 3.12 To use new Red Hat Quay 3.12 functions, enable some or all of the following features: # ... FEATURE_UI_V2: true FEATURE_UI_V2_REPO_SETTINGS: true FEATURE_AUTO_PRUNE: true ROBOTS_DISALLOW: false # ... 3.13.6. Suggested configuration for automation The following config.yaml parameters are suggested for automation: # ... FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false # ... 3.13.7. Deploying the Red Hat Quay Operator using the initial configuration Use the following procedure to deploy Red Hat Quay on OpenShift Container Platform using the initial configuration. Prerequisites You have installed the oc CLI. Procedure Create a secret using the configuration file: $ oc create secret generic -n quay-enterprise --from-file config.yaml=./config.yaml init-config-bundle-secret Create a quayregistry.yaml file. Identify the unmanaged components and reference the created secret, for example: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: init-config-bundle-secret Deploy the Red Hat Quay registry: $ oc create -n quay-enterprise -f quayregistry.yaml Next steps Using the API to create the first user 3.13.8. Using the API to create the first user Use the following procedure to create the first user in your Red Hat Quay organization. Prerequisites The config option FEATURE_USER_INITIALIZE must be set to true . No users can already exist in the database. Procedure This procedure requests an OAuth token by specifying "access_token": true . Open your Red Hat Quay configuration file and update the following configuration fields: FEATURE_USER_INITIALIZE: true SUPER_USERS: - quayadmin Stop the Red Hat Quay service by entering the following command: $ sudo podman stop quay Start the Red Hat Quay service by entering the following command: $ sudo podman run -d -p 80:8080 -p 443:8443 --name=quay -v $QUAY/config:/conf/stack:Z -v $QUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv} Run the following curl command to generate a new user with a username, password, email, and access token: $ curl -X POST -k http://quay-server.example.com/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ "username": "quayadmin", "password":"quaypass12345", "email": "[email protected]", "access_token": true}' If successful, the command returns an object with the username, email, and encrypted password.
For example: {"access_token":"6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED", "email":"[email protected]","encrypted_password":"1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW","username":"quayadmin"} # gitleaks:allow If a user already exists in the database, an error is returned: {"message":"Cannot initialize user in a non-empty database"} If your password is not at least eight characters or contains whitespace, an error is returned: {"message":"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace."} Log in to your Red Hat Quay deployment by entering the following command: USD sudo podman login -u quayadmin -p quaypass12345 http://quay-server.example.com --tls-verify=false Example output Login Succeeded! 3.13.8.1. Using the OAuth token After invoking the API, you can call out the rest of the Red Hat Quay API by specifying the returned OAuth code. Prerequisites You have invoked the /api/v1/user/initialize API, and passed in the username, password, and email address. Procedure Obtain the list of current users by entering the following command: USD curl -X GET -k -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/superuser/users/ Example output: { "users": [ { "kind": "user", "name": "quayadmin", "username": "quayadmin", "email": "[email protected]", "verified": true, "avatar": { "name": "quayadmin", "hash": "3e82e9cbf62d25dec0ed1b4c66ca7c5d47ab9f1f271958298dea856fb26adc4c", "color": "#e7ba52", "kind": "user" }, "super_user": true, "enabled": true } ] } In this instance, the details for the quayadmin user are returned as it is the only user that has been created so far. 3.13.8.2. Using the API to create an organization The following procedure details how to use the API to create a Red Hat Quay organization. Prerequisites You have invoked the /api/v1/user/initialize API, and passed in the username, password, and email address. You have called out the rest of the Red Hat Quay API by specifying the returned OAuth code. Procedure To create an organization, use a POST call to api/v1/organization/ endpoint: USD curl -X POST -k --header 'Content-Type: application/json' -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/ --data '{"name": "testorg", "email": "[email protected]"}' Example output: "Created" You can retrieve the details of the organization you created by entering the following command: USD curl -X GET -k --header 'Content-Type: application/json' -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://min-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg Example output: { "name": "testorg", "email": "[email protected]", "avatar": { "name": "testorg", "hash": "5f113632ad532fc78215c9258a4fb60606d1fa386c91b141116a1317bf9c53c8", "color": "#a55194", "kind": "user" }, "is_admin": true, "is_member": true, "teams": { "owners": { "name": "owners", "description": "", "role": "admin", "avatar": { "name": "owners", "hash": "6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90", "color": "#c7c7c7", "kind": "team" }, "can_view": true, "repo_count": 0, "member_count": 1, "is_synced": false } }, "ordered_teams": [ "owners" ], "invoice_email": false, "invoice_email_address": null, "tag_expiration_s": 1209600, "is_free_account": true } 3.14. Basic configuration fields Table 3.13. 
Basic configuration Field Type Description REGISTRY_TITLE String If specified, the long-form title for the registry. Displayed in frontend of your Red Hat Quay deployment, for example, at the sign in page of your organization. Should not exceed 35 characters. Default: Red Hat Quay REGISTRY_TITLE_SHORT String If specified, the short-form title for the registry. Title is displayed on various pages of your organization, for example, as the title of the tutorial on your organization's Tutorial page. Default: Red Hat Quay CONTACT_INFO Array of String If specified, contact information to display on the contact page. If only a single piece of contact information is specified, the contact footer will link directly. [0] String Adds a link to send an e-mail. Pattern: ^mailto:(.)+USD Example: mailto:[email protected] [1] String Adds a link to visit an IRC chat room. Pattern: ^irc://(.)+USD Example: irc://chat.freenode.net:6665/quay [2] String Adds a link to call a phone number. Pattern: ^tel:(.)+USD Example: tel:+1-888-930-3475 [3] String Adds a link to a defined URL. Pattern: ^http(s)?://(.)+USD Example: https://twitter.com/quayio 3.15. SSL configuration fields Table 3.14. SSL configuration Field Type Description PREFERRED_URL_SCHEME String One of http or https . Note that users only set their PREFERRED_URL_SCHEME to http when there is no TLS encryption in the communication path from the client to Quay. Users must set their PREFERRED_URL_SCHEME`to `https when using a TLS-terminating load balancer, a reverse proxy (for example, Nginx), or when using Quay with custom SSL certificates directly. In most cases, the PREFERRED_URL_SCHEME should be https . Default: http SERVER_HOSTNAME (Required) String The URL at which Red Hat Quay is accessible, without the scheme Example: quay-server.example.com SSL_CIPHERS Array of String If specified, the nginx-defined list of SSL ciphers to enabled and disabled Example: [ ECDHE-RSA-AES128-GCM-SHA256 , ECDHE-ECDSA-AES128-GCM-SHA256 , ECDHE-RSA-AES256-GCM-SHA384 , ECDHE-ECDSA-AES256-GCM-SHA384 , DHE-RSA-AES128-GCM-SHA256 , DHE-DSS-AES128-GCM-SHA256 , kEDH+AESGCM , ECDHE-RSA-AES128-SHA256 , ECDHE-ECDSA-AES128-SHA256 , ECDHE-RSA-AES128-SHA , ECDHE-ECDSA-AES128-SHA , ECDHE-RSA-AES256-SHA384 , ECDHE-ECDSA-AES256-SHA384 , ECDHE-RSA-AES256-SHA , ECDHE-ECDSA-AES256-SHA , DHE-RSA-AES128-SHA256 , DHE-RSA-AES128-SHA , DHE-DSS-AES128-SHA256 , DHE-RSA-AES256-SHA256 , DHE-DSS-AES256-SHA , DHE-DSS-AES256-SHA , AES128-GCM-SHA256 , AES256-GCM-SHA384 , AES128-SHA256 , AES256-SHA256 , AES128-SHA , AES256-SHA , AES , !3DES" , !aNULL , !eNULL , !EXPORT , DES , !RC4 , MD5 , !PSK , !aECDH , !EDH-DSS-DES-CBC3-SHA , !EDH-RSA-DES-CBC3-SHA , !KRB5-DES-CBC3-SHA ] SSL_PROTOCOLS Array of String If specified, nginx is configured to enabled a list of SSL protocols defined in the list. Removing an SSL protocol from the list disables the protocol during Red Hat Quay startup. Example: ['TLSv1','TLSv1.1','TLSv1.2', `TLSv1.3 ]` SESSION_COOKIE_SECURE Boolean Whether the secure property should be set on session cookies Default: False Recommendation: Set to True for all installations using SSL 3.15.1. Configuring SSL Copy the certificate file and primary key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively: Edit the config.yaml file and specify that you want Quay to handle TLS: config.yaml ... SERVER_HOSTNAME: quay-server.example.com ... PREFERRED_URL_SCHEME: https ... Stop the Quay container and restart the registry 3.16. 
Adding TLS Certificates to the Red Hat Quay Container To add custom TLS certificates to Red Hat Quay, create a new directory named extra_ca_certs/ beneath the Red Hat Quay config directory. Copy any required site-specific TLS certificates to this new directory. 3.16.1. Add TLS certificates to Red Hat Quay View certificate to be added to the container Create certs directory and copy certificate there Obtain the Quay container's CONTAINER ID with podman ps : Restart the container with that ID: Examine the certificate copied into the container namespace: 3.17. LDAP configuration fields Table 3.15. LDAP configuration Field Type Description AUTHENTICATION_TYPE (Required) String Must be set to LDAP . FEATURE_TEAM_SYNCING Boolean Whether to allow for team membership to be synced from a backing group in the authentication engine (OIDC, LDAP, or Keystone). Default: true FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP Boolean If enabled, non-superusers can setup team syncrhonization. Default: false LDAP_ADMIN_DN String The admin DN for LDAP authentication. LDAP_ADMIN_PASSWD String The admin password for LDAP authentication. LDAP_ALLOW_INSECURE_FALLBACK Boolean Whether or not to allow SSL insecure fallback for LDAP authentication. LDAP_BASE_DN Array of String The base DN for LDAP authentication. LDAP_EMAIL_ATTR String The email attribute for LDAP authentication. LDAP_UID_ATTR String The uid attribute for LDAP authentication. LDAP_URI String The LDAP URI. LDAP_USER_FILTER String The user filter for LDAP authentication. LDAP_USER_RDN Array of String The user RDN for LDAP authentication. LDAP_SECONDARY_USER_RDNS Array of String Provide Secondary User Relative DNs if there are multiple Organizational Units where user objects are located. TEAM_RESYNC_STALE_TIME String If team syncing is enabled for a team, how often to check its membership and resync if necessary. Pattern: ^[0-9]+(w|m|d|h|s)USD Example: 2h Default: 30m LDAP_SUPERUSER_FILTER String Subset of the LDAP_USER_FILTER configuration field. When configured, allows Red Hat Quay administrators the ability to configure Lightweight Directory Access Protocol (LDAP) users as superusers when Red Hat Quay uses LDAP as its authentication provider. With this field, administrators can add or remove superusers without having to update the Red Hat Quay configuration file and restart their deployment. This field requires that your AUTHENTICATION_TYPE is set to LDAP . GLOBAL_READONLY_SUPER_USERS String When set, grants users of this list read access to all repositories, regardless of whether they are public repositories. Only works for those superusers defined with the LDAP_SUPERUSER_FILTER configuration field. LDAP_RESTRICTED_USER_FILTER String Subset of the LDAP_USER_FILTER configuration field. When configured, allows Red Hat Quay administrators the ability to configure Lightweight Directory Access Protocol (LDAP) users as restricted users when Red Hat Quay uses LDAP as its authentication provider. This field requires that your AUTHENTICATION_TYPE is set to LDAP . FEATURE_RESTRICTED_USERS Boolean When set to True with LDAP_RESTRICTED_USER_FILTER active, only the listed users in the defined LDAP group are restricted. Default: False LDAP_TIMEOUT Integer Specifies the time limit, in seconds, for LDAP operations. This limits the amount of time an LDAP search, bind, or other operation can take. Similar to the -l option in ldapsearch , it sets a client-side operation timeout. 
Default: 10 LDAP_NETWORK_TIMEOUT Integer Specifies the time limit, in seconds, for establishing a connection to the LDAP server. This is the maximum time Red Hat Quay waits for a response during network operations, similar to the -o nettimeout option in ldapsearch . Default: 10 3.17.1. LDAP configuration references Use the following references to update your config.yaml file with the desired LDAP settings. 3.17.1.1. Basic LDAP configuration Use the following reference for a basic LDAP configuration. --- AUTHENTICATION_TYPE: LDAP 1 --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com 2 LDAP_ADMIN_PASSWD: ABC123 3 LDAP_ALLOW_INSECURE_FALLBACK: false 4 LDAP_BASE_DN: 5 - dc=example - dc=com LDAP_EMAIL_ATTR: mail 6 LDAP_UID_ATTR: uid 7 LDAP_URI: ldap://<example_url>.com 8 LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) 9 LDAP_USER_RDN: 10 - ou=people LDAP_SECONDARY_USER_RDNS: 11 - ou=<example_organization_unit_one> - ou=<example_organization_unit_two> - ou=<example_organization_unit_three> - ou=<example_organization_unit_four> 1 Required. Must be set to LDAP . 2 Required. The admin DN for LDAP authentication. 3 Required. The admin password for LDAP authentication. 4 Required. Whether to allow SSL/TLS insecure fallback for LDAP authentication. 5 Required. The base DN for LDAP authentication. 6 Required. The email attribute for LDAP authentication. 7 Required. The UID attribute for LDAP authentication. 8 Required. The LDAP URI. 9 Required. The user filter for LDAP authentication. 10 Required. The user RDN for LDAP authentication. 11 Optional. Secondary User Relative DNs if there are multiple Organizational Units where user objects are located. 3.17.1.2. LDAP restricted user configuration Use the following reference for an LDAP restricted user configuration. # ... AUTHENTICATION_TYPE: LDAP # ... FEATURE_RESTRICTED_USERS: true 1 # ... LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) 2 LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com # ... 1 Must be set to true when configuring an LDAP restricted user. 2 Configures specified users as restricted users. 3.17.1.3. LDAP superuser configuration reference Use the following reference for an LDAP superuser configuration. # ... AUTHENTICATION_TYPE: LDAP # ... LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_SUPERUSER_FILTER: (<filterField>=<value>) 1 LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com # ... 1 Configures specified users as superusers. 3.18. Mirroring configuration fields Table 3.16. 
Mirroring configuration Field Type Description FEATURE_REPO_MIRROR Boolean Enable or disable repository mirroring Default: false REPO_MIRROR_INTERVAL Number The number of seconds between checking for repository mirror candidates Default: 30 REPO_MIRROR_SERVER_HOSTNAME String Replaces the SERVER_HOSTNAME as the destination for mirroring. Default: None Example : openshift-quay-service REPO_MIRROR_TLS_VERIFY Boolean Require HTTPS and verify certificates of Quay registry during mirror. Default: false REPO_MIRROR_ROLLBACK Boolean When set to true , the repository rolls back after a failed mirror attempt. Default : false 3.19. Security scanner configuration fields Table 3.17. Security scanner configuration Field Type Description FEATURE_SECURITY_SCANNER Boolean Enable or disable the security scanner Default: false FEATURE_SECURITY_NOTIFICATIONS Boolean If the security scanner is enabled, turn on or turn off security notifications Default: false SECURITY_SCANNER_V4_REINDEX_THRESHOLD String This parameter is used to determine the minimum time, in seconds, to wait before re-indexing a manifest that has either previously failed or has changed states since the last indexing. The data is calculated from the last_indexed datetime in the manifestsecuritystatus table. This parameter is used to avoid trying to re-index every failed manifest on every indexing run. The default time to re-index is 300 seconds. SECURITY_SCANNER_V4_ENDPOINT String The endpoint for the V4 security scanner Pattern: ^http(s)?://(.)+USD Example: http://192.168.99.101:6060 SECURITY_SCANNER_V4_PSK String The generated pre-shared key (PSK) for Clair SECURITY_SCANNER_ENDPOINT String The endpoint for the V2 security scanner Pattern: ^http(s)?://(.)+USD Example: http://192.168.99.100:6060 SECURITY_SCANNER_INDEXING_INTERVAL Integer This parameter is used to determine the number of seconds between indexing intervals in the security scanner. When indexing is triggered, Red Hat Quay will query its database for manifests that must be indexed by Clair. These include manifests that have not yet been indexed and manifests that previously failed indexing. Default: 30 FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX Boolean Whether to allow sending notifications about vulnerabilities for new pushes. Default : True SECURITY_SCANNER_V4_MANIFEST_CLEANUP Boolean Whether the Red Hat Quay garbage collector removes manifests that are not referenced by other tags or manifests. Default : True NOTIFICATION_MIN_SEVERITY_ON_NEW_INDEX String Set minimal security level for new notifications on detected vulnerabilities. Avoids creation of large number of notifications after first index. If not defined, defaults to High . Available options include Critical , High , Medium , Low , Negligible , and Unknown . SECURITY_SCANNER_V4_INDEX_MAX_LAYER_SIZE String The maximum layer size allowed for indexing. If the layer size exceeds the configured size, the Red Hat Quay UI returns the following message: The manifest for this tag has layer(s) that are too large to index by the Quay Security Scanner . The default is 8G , and the maximum recommended is 10G . Accepted values are B , K , M , T , and G . Default : 8G 3.19.1. Re-indexing with Clair v4 When Clair v4 indexes a manifest, the result should be deterministic. For example, the same manifest should produce the same index report. This is true until the scanners are changed, as using different scanners will produce different information relating to a specific manifest to be returned in the report. 
Because of this, Clair v4 exposes a state representation of the indexing engine ( /indexer/api/v1/index_state ) to determine whether the scanner configuration has been changed. Red Hat Quay leverages this index state by saving it to the index report when parsing to Quay's database. If this state has changed since the manifest was previously scanned, Red Hat Quay will attempt to re-index that manifest during the periodic indexing process. By default this parameter is set to 30 seconds. Users might decrease the time if they want the indexing process to run more frequently, for example, if they did not want to wait 30 seconds to see security scan results in the UI after pushing a new tag. Users can also change the parameter if they want more control over the request pattern to Clair and the pattern of database operations being performed on the Red Hat Quay database. 3.19.2. Example security scanner configuration The following YAML is the suggested configuration when enabling the security scanner feature. Security scanner YAML configuration FEATURE_SECURITY_NOTIFICATIONS: true FEATURE_SECURITY_SCANNER: true FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX: true ... SECURITY_SCANNER_INDEXING_INTERVAL: 30 SECURITY_SCANNER_V4_MANIFEST_CLEANUP: true SECURITY_SCANNER_V4_ENDPOINT: http://quay-server.example.com:8081 SECURITY_SCANNER_V4_PSK: MTU5YzA4Y2ZkNzJoMQ== SERVER_HOSTNAME: quay-server.example.com SECURITY_SCANNER_V4_INDEX_MAX_LAYER_SIZE: 8G 1 ... 1 Recommended maximum is 10G . 3.20. Helm configuration fields Table 3.18. Helm configuration fields Field Type Description FEATURE_GENERAL_OCI_SUPPORT Boolean Enable support for OCI artifacts. Default: True The following Open Container Initiative (OCI) artifact types are built into Red Hat Quay by default and are enabled through the FEATURE_GENERAL_OCI_SUPPORT configuration field: Field Media Type Supported content types Helm application/vnd.cncf.helm.config.v1+json application/tar+gzip , application/vnd.cncf.helm.chart.content.v1.tar+gzip Cosign application/vnd.oci.image.config.v1+json application/vnd.dev.cosign.simplesigning.v1+json , application/vnd.dsse.envelope.v1+json SPDX application/vnd.oci.image.config.v1+json text/spdx , text/spdx+xml , text/spdx+json Syft application/vnd.oci.image.config.v1+json application/vnd.syft+json CycloneDX application/vnd.oci.image.config.v1+json application/vnd.cyclonedx , application/vnd.cyclonedx+xml , application/vnd.cyclonedx+json In-toto application/vnd.oci.image.config.v1+json application/vnd.in-toto+json Unknown application/vnd.cncf.openpolicyagent.policy.layer.v1+rego application/vnd.cncf.openpolicyagent.policy.layer.v1+rego , application/vnd.cncf.openpolicyagent.data.layer.v1+json 3.20.1. Configuring Helm The following YAML is the example configuration when enabling Helm. Helm YAML configuration FEATURE_GENERAL_OCI_SUPPORT: true 3.21. Open Container Initiative configuration fields Table 3.19. Additional OCI artifact configuration field Field Type Description FEATURE_REFERRERS_API Boolean Enables OCI 1.1's referrers API. Example OCI referrers enablement YAML # ... FEATURE_REFERRERS_API: True # ... 3.22. Unknown media types Table 3.20. Unknown media types configuration field Field Type Description IGNORE_UNKNOWN_MEDIATYPES Boolean When enabled, allows a container registry platform to disregard specific restrictions on supported artifact types and accept any unrecognized or unknown media types. Default: false 3.22.1. 
Configuring unknown media types The following YAML is the example configuration when enabling unknown or unrecognized media types. Unknown media types YAML configuration IGNORE_UNKNOWN_MEDIATYPES: true 3.23. Action log configuration fields 3.23.1. Action log storage configuration Table 3.21. Action log storage configuration Field Type Description FEATURE_LOG_EXPORT Boolean Whether to allow exporting of action logs. Default: True LOGS_MODEL String Specifies the preferred method for handling log data. Values: One of database , transition_reads_both_writes_es , elasticsearch , splunk Default: database LOGS_MODEL_CONFIG Object Logs model config for action logs. ALLOW_WITHOUT_STRICT_LOGGING Boolean When set to True , if the external log system like Splunk or ElasticSearch is intermittently unavailable, allows users to push images normally. Events are logged to the stdout instead. Overrides ALLOW_PULLS_WITHOUT_STRICT_LOGGING if set. Default: False 3.23.1.1. Elasticsearch configuration fields The following fields are available when configuring Elasticsearch for Red Hat Quay. LOGS_MODEL_CONFIG [object]: Logs model config for action logs. elasticsearch_config [object]: Elasticsearch cluster configuration. access_key [string]: Elasticsearch user (or IAM key for AWS ES). Example : some_string host [string]: Elasticsearch cluster endpoint. Example : host.elasticsearch.example index_prefix [string]: Elasticsearch's index prefix. Example : logentry_ index_settings [object]: Elasticsearch's index settings use_ssl [boolean]: Use ssl for Elasticsearch. Defaults to True . Example : True secret_key [string]: Elasticsearch password (or IAM secret for AWS ES). Example : some_secret_string aws_region [string]: Amazon web service region. Example : us-east-1 port [number]: Elasticsearch cluster endpoint port. Example : 1234 kinesis_stream_config [object]: AWS Kinesis Stream configuration. aws_secret_key [string]: AWS secret key. Example : some_secret_key stream_name [string]: Kinesis stream to send action logs to. Example : logentry-kinesis-stream aws_access_key [string]: AWS access key. Example : some_access_key retries [number]: Max number of attempts made on a single request. Example : 5 read_timeout [number]: Number of seconds before timeout when reading from a connection. Example : 5 max_pool_connections [number]: The maximum number of connections to keep in a connection pool. Example : 10 aws_region [string]: AWS region. Example : us-east-1 connect_timeout [number]: Number of seconds before timeout when attempting to make a connection. Example : 5 producer [string]: Logs producer if logging to Elasticsearch. enum : kafka, elasticsearch, kinesis_stream Example : kafka kafka_config [object]: Kafka cluster configuration. topic [string]: Kafka topic to publish log entries to. Example : logentry bootstrap_servers [array]: List of Kafka brokers to bootstrap the client from. max_block_seconds [number]: Max number of seconds to block during a send() , either because the buffer is full or metadata unavailable. Example : 10 3.23.1.2. Splunk configuration fields The following fields are available when configuring Splunk for Red Hat Quay. producer [string]: splunk . Use when configuring Splunk. splunk_config [object]: Logs model configuration for Splunk action logs or the Splunk cluster configuration. host [string]: Splunk cluster endpoint. port [integer]: Splunk management cluster endpoint port. bearer_token [string]: The bearer token for Splunk. 
verify_ssl [boolean]: Enable ( True ) or disable ( False ) TLS/SSL verification for HTTPS connections. index_prefix [string]: Splunk's index prefix. ssl_ca_path [string]: The relative container path to a single .pem file containing a certificate authority (CA) for SSL validation. Example Splunk configuration # ... LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk splunk_config: host: http://<user_name>.remote.csb port: 8089 bearer_token: <bearer_token> url_scheme: <http/https> verify_ssl: False index_prefix: <splunk_log_index_name> ssl_ca_path: <location_to_ssl-ca-cert.pem> # ... 3.23.1.3. Splunk HEC configuration fields The following fields are available when configuring Splunk HTTP Event Collector (HEC) for Red Hat Quay. producer [string]: splunk_hec . Use when configuring Splunk HEC. splunk_hec_config [object]: Logs model configuration for Splunk HTTP event collector action logs configuration. host [string]: Splunk cluster endpoint. port [integer]: Splunk management cluster endpoint port. hec_token [string]: HEC token for Splunk. url_scheme [string]: The URL scheme for access the Splunk service. If Splunk is behind SSL/TLS, must be https . verify_ssl [boolean]: Enable ( true ) or disable ( false ) SSL/TLS verification for HTTPS connections. index [string]: The Splunk index to use. splunk_host [string]: The host name to log this event. splunk_sourcetype [string]: The name of the Splunk sourcetype to use. # ... LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk_hec splunk_hec_config: 1 host: prd-p-aaaaaq.splunkcloud.com 2 port: 8088 3 hec_token: 12345678-1234-1234-1234-1234567890ab 4 url_scheme: https 5 verify_ssl: False 6 index: quay 7 splunk_host: quay-dev 8 splunk_sourcetype: quay_logs 9 # ... 3.23.2. Action log rotation and archiving configuration Table 3.22. Action log rotation and archiving configuration Field Type Description FEATURE_ACTION_LOG_ROTATION Boolean Enabling log rotation and archival will move all logs older than 30 days to storage. Default: false ACTION_LOG_ARCHIVE_LOCATION String If action log archiving is enabled, the storage engine in which to place the archived data. Example: : s3_us_east ACTION_LOG_ARCHIVE_PATH String If action log archiving is enabled, the path in storage in which to place the archived data. Example: archives/actionlogs ACTION_LOG_ROTATION_THRESHOLD String The time interval after which to rotate logs. Example: 30d 3.23.3. Action log audit configuration Table 3.23. Audit logs configuration field Field Type Description ACTION_LOG_AUDIT_LOGINS Boolean When set to True , tracks advanced events such as logging into, and out of, the UI, and logging in using Docker for regular users, robot accounts, and for application-specific token accounts. Default: True 3.24. Build logs configuration fields Table 3.24. Build logs configuration fields Field Type Description FEATURE_READER_BUILD_LOGS Boolean If set to true, build logs can be read by those with read access to the repository, rather than only write access or admin access. Default: False LOG_ARCHIVE_LOCATION String The storage location, defined in DISTRIBUTED_STORAGE_CONFIG , in which to place the archived build logs. Example: s3_us_east LOG_ARCHIVE_PATH String The path under the configured storage engine in which to place the archived build logs in .JSON format. Example: archives/buildlogs 3.25. Dockerfile build triggers fields Table 3.25. Dockerfile build support Field Type Description FEATURE_BUILD_SUPPORT Boolean Whether to support Dockerfile build. 
Default: False SUCCESSIVE_TRIGGER_FAILURE_DISABLE_THRESHOLD Number If not set to None , the number of successive failures that can occur before a build trigger is automatically disabled. Default: 100 SUCCESSIVE_TRIGGER_INTERNAL_ERROR_DISABLE_THRESHOLD Number If not set to None , the number of successive internal errors that can occur before a build trigger is automatically disabled Default: 5 3.25.1. GitHub build triggers Table 3.26. GitHub build triggers Field Type Description FEATURE_GITHUB_BUILD Boolean Whether to support GitHub build triggers. Default: False GITHUB_TRIGGER_CONFIG Object Configuration for using GitHub Enterprise for build triggers. .GITHUB_ENDPOINT (Required) String The endpoint for GitHub Enterprise. Example: https://github.com/ .API_ENDPOINT String The endpoint of the GitHub Enterprise API to use. Must be overridden for github.com . Example : https://api.github.com/ .CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance; this cannot be shared with GITHUB_LOGIN_CONFIG . .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. 3.25.2. BitBucket build triggers Table 3.27. BitBucket build triggers Field Type Description FEATURE_BITBUCKET_BUILD Boolean Whether to support Bitbucket build triggers. Default: False BITBUCKET_TRIGGER_CONFIG Object Configuration for using BitBucket for build triggers. .CONSUMER_KEY (Required) String The registered consumer key (client ID) for this Red Hat Quay instance. .CONSUMER_SECRET (Required) String The registered consumer secret (client secret) for this Red Hat Quay instance. 3.25.3. GitLab build triggers Table 3.28. GitLab build triggers Field Type Description FEATURE_GITLAB_BUILD Boolean Whether to support GitLab build triggers. Default: False GITLAB_TRIGGER_CONFIG Object Configuration for using Gitlab for build triggers. .GITLAB_ENDPOINT (Required) String The endpoint at which Gitlab Enterprise is running. .CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance. .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. 3.26. Build manager configuration fields Table 3.29. Build manager configuration fields Field Type Description ALLOWED_WORKER_COUNT String Defines how many Build Workers are instantiated per Red Hat Quay pod. Typically set to 1 . ORCHESTRATOR_PREFIX String Defines a unique prefix to be added to all Redis keys. This is useful to isolate Orchestrator values from other Redis keys. REDIS_HOST Object The hostname for your Redis service. REDIS_PASSWORD String The password to authenticate into your Redis service. REDIS_SSL Boolean Defines whether or not your Redis connection uses SSL/TLS. REDIS_SKIP_KEYSPACE_EVENT_SETUP Boolean By default, Red Hat Quay does not set up the keyspace events required for key events at runtime. To do so, set REDIS_SKIP_KEYSPACE_EVENT_SETUP to false . EXECUTOR String Starts a definition of an Executor of this type. Valid values are kubernetes and ec2 . BUILDER_NAMESPACE String Kubernetes namespace where Red Hat Quay Builds will take place. K8S_API_SERVER Object Hostname for API Server of the OpenShift Container Platform cluster where Builds will take place. K8S_API_TLS_CA Object The filepath in the Quay container of the Build cluster's CA certificate for the Quay application to trust when making API calls. KUBERNETES_DISTRIBUTION String Indicates which type of Kubernetes is being used. Valid values are openshift and k8s . 
CONTAINER_ * Object Define the resource requests and limits for each build pod. NODE_SELECTOR_ * Object Defines the node selector label name-value pair where build Pods should be scheduled. CONTAINER_RUNTIME Object Specifies whether the Builder should run docker or podman . Customers using Red Hat's quay-builder image should set this to podman . SERVICE_ACCOUNT_NAME/SERVICE_ACCOUNT_TOKEN Object Defines the Service Account name or token that will be used by build pods. QUAY_USERNAME/QUAY_PASSWORD Object Defines the registry credentials needed to pull the Red Hat Quay build worker image that is specified in the WORKER_IMAGE field. Customers should provide a Red Hat Service Account credential as defined in the section "Creating Registry Service Accounts" against registry.redhat.io in the article at https://access.redhat.com/RegistryAuthentication . WORKER_IMAGE Object Image reference for the Red Hat Quay Builder image. registry.redhat.io/quay/quay-builder WORKER_TAG Object Tag for the Builder image desired. The latest version is 3.12. BUILDER_VM_CONTAINER_IMAGE Object The full reference to the container image holding the internal VM needed to run each Red Hat Quay Build. ( registry.redhat.io/quay/quay-builder-qemu-rhcos:3.12 ). SETUP_TIME String Specifies the number of seconds at which a Build times out if it has not yet registered itself with the Build Manager. Defaults at 500 seconds. Builds that time out are attempted to be restarted three times. If the Build does not register itself after three attempts it is considered failed. MINIMUM_RETRY_THRESHOLD String This setting is used with multiple Executors. It indicates how many retries are attempted to start a Build before a different Executor is chosen. Setting to 0 means there are no restrictions on how many tries the build job needs to have. This value should be kept intentionally small (three or less) to ensure failovers happen quickly during infrastructure failures. You must specify a value for this setting. For example, Kubernetes is set as the first executor and EC2 as the second executor. If you want the last attempt to run a job to always be executed on EC2 and not Kubernetes, you can set the Kubernetes executor's MINIMUM_RETRY_THRESHOLD to 1 and EC2's MINIMUM_RETRY_THRESHOLD to 0 (defaults to 0 if not set). In this case, the Kubernetes' MINIMUM_RETRY_THRESHOLD retries_remaining(1) would evaluate to False , therefore falling back to the second executor configured. SSH_AUTHORIZED_KEYS Object List of SSH keys to bootstrap in the ignition config. This allows other keys to be used to SSH into the EC2 instance or QEMU virtual machine (VM). 3.27. OAuth configuration fields Table 3.30. OAuth fields Field Type Description DIRECT_OAUTH_CLIENTID_WHITELIST Array of String A list of client IDs for Quay-managed applications that are allowed to perform direct OAuth approval without user approval. FEATURE_ASSIGN_OAUTH_TOKEN Boolean Allows organization administrators to assign OAuth tokens to other users. 3.27.1. GitHub OAuth configuration fields Table 3.31. GitHub OAuth fields Field Type Description FEATURE_GITHUB_LOGIN Boolean Whether GitHub login is supported **Default: False GITHUB_LOGIN_CONFIG Object Configuration for using GitHub (Enterprise) as an external login provider. .ALLOWED_ORGANIZATIONS Array of String The names of the GitHub (Enterprise) organizations whitelisted to work with the ORG_RESTRICT option. .API_ENDPOINT String The endpoint of the GitHub (Enterprise) API to use. 
Must be overridden for github.com Example: https://api.github.com/ .CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance; cannot be shared with GITHUB_TRIGGER_CONFIG . Example: 0e8dbe15c4c7630b6780 .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. Example: e4a58ddd3d7408b7aec109e85564a0d153d3e846 .GITHUB_ENDPOINT (Required) String The endpoint for GitHub (Enterprise). Example : https://github.com/ .ORG_RESTRICT Boolean If true, only users within the organization whitelist can log in using this provider. 3.27.2. Google OAuth configuration fields Table 3.32. Google OAuth fields Field Type Description FEATURE_GOOGLE_LOGIN Boolean Whether Google login is supported. Default: False GOOGLE_LOGIN_CONFIG Object Configuration for using Google for external authentication. .CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance. Example: 0e8dbe15c4c7630b6780 .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. Example: e4a58ddd3d7408b7aec109e85564a0d153d3e846 3.28. OIDC configuration fields Table 3.33. OIDC fields Field Type Description <string>_LOGIN_CONFIG (Required) String The parent key that holds the OIDC configuration settings. Typically the name of the OIDC provider, for example, AZURE_LOGIN_CONFIG , however any arbitrary string is accepted. .CLIENT_ID (Required) String The registered client ID for this Red Hat Quay instance. Example: 0e8dbe15c4c7630b6780 .CLIENT_SECRET (Required) String The registered client secret for this Red Hat Quay instance. Example: e4a58ddd3d7408b7aec109e85564a0d153d3e846 .DEBUGLOG Boolean Whether to enable debugging. .LOGIN_BINDING_FIELD String Used when the internal authorization is set to LDAP. Red Hat Quay reads this parameter and tries to search through the LDAP tree for the user with this username. If it exists, it automatically creates a link to that LDAP account. .LOGIN_SCOPES Object Adds additional scopes that Red Hat Quay uses to communicate with the OIDC provider. .OIDC_ENDPOINT_CUSTOM_PARAMS String Support for custom query parameters on OIDC endpoints. The following endpoints are supported: authorization_endpoint , token_endpoint , and user_endpoint . .OIDC_ISSUER String Allows the user to define the issuer to verify. For example, JWT tokens contain a parameter known as iss which defines who issued the token. By default, this is read from the .well-known/openid-configuration endpoint, which is exposed by every OIDC provider. If this verification fails, there is no login. .OIDC_SERVER (Required) String The address of the OIDC server that is being used for authentication. Example: https://sts.windows.net/6c878... / .PREFERRED_USERNAME_CLAIM_NAME String Sets the preferred username to a parameter from the token. .SERVICE_ICON String Changes the icon on the login screen. .SERVICE_NAME (Required) String The name of the service that is being authenticated. Example: Microsoft Entra ID .VERIFIED_EMAIL_CLAIM_NAME String The name of the claim that is used to verify the email address of the user. .PREFERRED_GROUP_CLAIM_NAME String The key name within the OIDC token payload that holds information about the user's group memberships. .OIDC_DISABLE_USER_ENDPOINT Boolean Whether to allow or disable the /userinfo endpoint. If using Azure Entra ID, this field must be set to true because Azure obtains the user's information from the token instead of calling the /userinfo endpoint. Default: false 3.28.1.
OIDC configuration The following example shows a sample OIDC configuration. Example OIDC configuration AUTHENTICATION_TYPE: OIDC # ... AZURE_LOGIN_CONFIG: CLIENT_ID: <client_id> CLIENT_SECRET: <client_secret> OIDC_SERVER: <oidc_server_address_> DEBUGGING: true SERVICE_NAME: Microsoft Entra ID VERIFIED_EMAIL_CLAIM_NAME: <verified_email> OIDC_DISABLE_USER_ENDPOINT: true OIDC_ENDPOINT_CUSTOM_PARAMS": "authorization_endpoint": "some": "param", # ... 3.29. Nested repositories configuration fields Support for nested repository path names has been added under the FEATURE_EXTENDED_REPOSITORY_NAMES property. This optional configuration is added to the config.yaml by default. Enablement allows the use of / in repository names. Table 3.34. OCI and nested repositories configuration fields Field Type Description FEATURE_EXTENDED_REPOSITORY_NAMES Boolean Enable support for nested repositories Default: True OCI and nested repositories configuration example FEATURE_EXTENDED_REPOSITORY_NAMES: true 3.30. QuayIntegration configuration fields The following configuration fields are available for the QuayIntegration custom resource: Name Description Schema allowlistNamespaces (Optional) A list of namespaces to include. Array clusterID (Required) The ID associated with this cluster. String credentialsSecret.key (Required) The secret containing credentials to communicate with the Quay registry. Object denylistNamespaces (Optional) A list of namespaces to exclude. Array insecureRegistry (Optional) Whether to skip TLS verification to the Quay registry Boolean quayHostname (Required) The hostname of the Quay registry. String scheduledImageStreamImport (Optional) Whether to enable image stream importing. Boolean 3.31. Mail configuration fields Table 3.35. Mail configuration fields Field Type Description FEATURE_MAILING Boolean Whether emails are enabled Default: False MAIL_DEFAULT_SENDER String If specified, the e-mail address used as the from when Red Hat Quay sends e-mails. If none, defaults to [email protected] Example: [email protected] MAIL_PASSWORD String The SMTP password to use when sending e-mails MAIL_PORT Number The SMTP port to use. If not specified, defaults to 587. MAIL_SERVER String The SMTP server to use for sending e-mails. Only required if FEATURE_MAILING is set to true. Example: smtp.example.com MAIL_USERNAME String The SMTP username to use when sending e-mails MAIL_USE_TLS Boolean If specified, whether to use TLS for sending e-mails Default: True 3.32. User configuration fields Table 3.36. User configuration fields Field Type Description FEATURE_SUPER_USERS Boolean Whether superusers are supported Default: true FEATURE_USER_CREATION Boolean Whether users can be created (by non-superusers) Default: true FEATURE_USER_LAST_ACCESSED Boolean Whether to record the last time a user was accessed Default: true FEATURE_USER_LOG_ACCESS Boolean If set to true, users will have access to audit logs for their namespace Default: false FEATURE_USER_METADATA Boolean Whether to collect and support user metadata Default: false FEATURE_USERNAME_CONFIRMATION Boolean If set to true, users can confirm and modify their initial usernames when logging in via OpenID Connect (OIDC) or a non-database internal authentication provider like LDAP. 
Default: true FEATURE_USER_RENAME Boolean If set to true, users can rename their own namespace Default: false FEATURE_INVITE_ONLY_USER_CREATION Boolean Whether users being created must be invited by another user Default: false FRESH_LOGIN_TIMEOUT String The time after which a fresh login requires users to re-enter their password Example : 5m USERFILES_LOCATION String ID of the storage engine in which to place user-uploaded files Example : s3_us_east USERFILES_PATH String Path under storage in which to place user-uploaded files Example : userfiles USER_RECOVERY_TOKEN_LIFETIME String The length of time a token for recovering a user account is valid Pattern : ^[0-9]+(w|m|d|h|s)$ Default : 30m FEATURE_SUPERUSERS_FULL_ACCESS Boolean Grants superusers the ability to read, write, and delete content from other repositories in namespaces that they do not own or have explicit permissions for. Default: False FEATURE_SUPERUSERS_ORG_CREATION_ONLY Boolean Whether to only allow superusers to create organizations. Default: False FEATURE_RESTRICTED_USERS Boolean When set to True with RESTRICTED_USERS_WHITELIST : All normal users and superusers are restricted from creating organizations or content in their own namespace unless they are allowlisted via RESTRICTED_USERS_WHITELIST . Restricted users retain their normal permissions within organizations based on team memberships. Default: False RESTRICTED_USERS_WHITELIST String When set with FEATURE_RESTRICTED_USERS: true , specific users are excluded from the FEATURE_RESTRICTED_USERS setting. GLOBAL_READONLY_SUPER_USERS String When set, grants users of this list read access to all repositories, regardless of whether they are public repositories. Only works for those superusers defined with the SUPER_USERS configuration field. 3.32.1. User configuration fields references Use the following references to update your config.yaml file with the desired configuration field. 3.32.1.1. FEATURE_SUPERUSERS_FULL_ACCESS configuration reference --- SUPER_USERS: - quayadmin FEATURE_SUPERUSERS_FULL_ACCESS: True --- 3.32.1.2. GLOBAL_READONLY_SUPER_USERS configuration reference --- GLOBAL_READONLY_SUPER_USERS: - user1 --- 3.32.1.3. FEATURE_RESTRICTED_USERS configuration reference --- AUTHENTICATION_TYPE: Database --- --- FEATURE_RESTRICTED_USERS: true --- 3.32.1.4. RESTRICTED_USERS_WHITELIST configuration reference Prerequisites FEATURE_RESTRICTED_USERS is set to true in your config.yaml file. --- AUTHENTICATION_TYPE: Database --- --- FEATURE_RESTRICTED_USERS: true RESTRICTED_USERS_WHITELIST: - user1 --- Note When this field is set, whitelisted users can create organizations, or read or write content from the repository even if FEATURE_RESTRICTED_USERS is set to true . Other users, for example, user2 , user3 , and user4 are restricted from creating organizations, reading, or writing content. 3.33. Recaptcha configuration fields Table 3.37. Recaptcha configuration fields Field Type Description FEATURE_RECAPTCHA Boolean Whether Recaptcha is necessary for user login and recovery Default: False RECAPTCHA_SECRET_KEY String If recaptcha is enabled, the secret key for the Recaptcha service RECAPTCHA_SITE_KEY String If recaptcha is enabled, the site key for the Recaptcha service 3.34. ACI configuration fields Table 3.38.
ACI configuration fields Field Type Description FEATURE_ACI_CONVERSION Boolean Whether to enable conversion to ACIs Default: False GPG2_PRIVATE_KEY_FILENAME String The filename of the private key used to decrypt ACIs GPG2_PRIVATE_KEY_NAME String The name of the private key used to sign ACIs GPG2_PUBLIC_KEY_FILENAME String The filename of the public key used to encrypt ACIs 3.35. JWT configuration fields Table 3.39. JWT configuration fields Field Type Description JWT_AUTH_ISSUER String The endpoint for JWT users Pattern : ^http(s)?://(.)+$ Example : http://192.168.99.101:6060 JWT_GETUSER_ENDPOINT String The endpoint for JWT users Pattern : ^http(s)?://(.)+$ Example : http://192.168.99.101:6060 JWT_QUERY_ENDPOINT String The endpoint for JWT queries Pattern : ^http(s)?://(.)+$ Example : http://192.168.99.101:6060 JWT_VERIFY_ENDPOINT String The endpoint for JWT verification Pattern : ^http(s)?://(.)+$ Example : http://192.168.99.101:6060 3.36. App tokens configuration fields Table 3.40. App tokens configuration fields Field Type Description FEATURE_APP_SPECIFIC_TOKENS Boolean If enabled, users can create tokens for use by the Docker CLI Default: True APP_SPECIFIC_TOKEN_EXPIRATION String The expiration for external app tokens. Default: None Pattern: ^[0-9]+(w|m|d|h|s)$ EXPIRED_APP_SPECIFIC_TOKEN_GC String Duration of time expired external app tokens will remain before being garbage collected Default: 1d 3.37. Miscellaneous configuration fields Table 3.41. Miscellaneous configuration fields Field Type Description ALLOW_PULLS_WITHOUT_STRICT_LOGGING String If true, pulls will still succeed even if the pull audit log entry cannot be written . This is useful if the database is in a read-only state and you want pulls to continue during that time. Default: False AVATAR_KIND String The types of avatars to display, either generated inline (local) or Gravatar (gravatar) Values: local, gravatar BROWSER_API_CALLS_XHR_ONLY Boolean If enabled, only API calls marked as being made by an XHR will be allowed from browsers Default: True DEFAULT_NAMESPACE_MAXIMUM_BUILD_COUNT Number The default maximum number of builds that can be queued in a namespace. Default: None ENABLE_HEALTH_DEBUG_SECRET String If specified, a secret that can be given to health endpoints to see full debug info when not authenticated as a superuser EXTERNAL_TLS_TERMINATION Boolean Set to true if TLS is supported, but terminated at a layer before Quay. Set to false when Quay is running with its own SSL certificates and receiving TLS traffic directly. FRESH_LOGIN_TIMEOUT String The time after which a fresh login requires users to re-enter their password Example: 5m HEALTH_CHECKER String The configured health check Example: ('RDSAwareHealthCheck', {'access_key': 'foo', 'secret_key': 'bar'}) PROMETHEUS_NAMESPACE String The prefix applied to all exposed Prometheus metrics Default: quay PUBLIC_NAMESPACES Array of String If a namespace is defined in the public namespace list, then it will appear on all users' repository list pages, regardless of whether the user is a member of the namespace. Typically, this is used by an enterprise customer in configuring a set of "well-known" namespaces.
REGISTRY_STATE String The state of the registry Values: normal or read-only SEARCH_MAX_RESULT_PAGE_COUNT Number Maximum number of pages the user can paginate in search before they are limited Default: 10 SEARCH_RESULTS_PER_PAGE Number Number of results returned per page by search page Default: 10 V2_PAGINATION_SIZE Number The number of results returned per page in V2 registry APIs Default: 50 WEBHOOK_HOSTNAME_BLACKLIST Array of String The set of hostnames to disallow from webhooks when validating, beyond localhost CREATE_PRIVATE_REPO_ON_PUSH Boolean Whether new repositories created by push are set to private visibility Default: True CREATE_NAMESPACE_ON_PUSH Boolean Whether a new push to a non-existent organization creates it Default: False NON_RATE_LIMITED_NAMESPACES Array of String If rate limiting has been enabled using FEATURE_RATE_LIMITS , you can override it for specific namespaces that require unlimited access. FEATURE_UI_V2 Boolean When set, allows users to try the beta UI environment. Default: True FEATURE_REQUIRE_TEAM_INVITE Boolean Whether to require invitations when adding a user to a team Default: True FEATURE_REQUIRE_ENCRYPTED_BASIC_AUTH Boolean Whether non-encrypted passwords (as opposed to encrypted tokens) can be used for basic auth Default: False FEATURE_RATE_LIMITS Boolean Whether to enable rate limits on API and registry endpoints. Setting FEATURE_RATE_LIMITS to true causes nginx to limit certain API calls to 30 per second. If that feature is not set, API calls are limited to 300 per second (effectively unlimited). Default: False FEATURE_FIPS Boolean If set to true, Red Hat Quay will run using FIPS-compliant hash functions Default: False FEATURE_AGGREGATED_LOG_COUNT_RETRIEVAL Boolean Whether to allow retrieval of aggregated log counts Default: True FEATURE_ANONYMOUS_ACCESS Boolean Whether to allow anonymous users to browse and pull public repositories Default: True FEATURE_DIRECT_LOGIN Boolean Whether users can directly log in to the UI Default: True FEATURE_LIBRARY_SUPPORT Boolean Whether to allow for "namespace-less" repositories when pulling and pushing from Docker Default: True FEATURE_PARTIAL_USER_AUTOCOMPLETE Boolean If set to true, autocompletion will apply to partial usernames. Default: True FEATURE_PERMANENT_SESSIONS Boolean Whether sessions are permanent Default: True FEATURE_PUBLIC_CATALOG Boolean If set to true, the _catalog endpoint returns public repositories. Otherwise, only private repositories can be returned. Default: False 3.38. Legacy configuration fields The following fields are deprecated or obsolete. Table 3.42. Legacy configuration fields Field Type Description FEATURE_BLACKLISTED_EMAILS Boolean If set to true, no new User accounts may be created if their email domain is blacklisted BLACKLISTED_EMAIL_DOMAINS Array of String The list of email-address domains that is used if FEATURE_BLACKLISTED_EMAILS is set to true Example: "example.com", "example.org" BLACKLIST_V2_SPEC String The Docker CLI versions to which Red Hat Quay will respond that V2 is unsupported Example : <1.8.0 Default: <1.6.0 DOCUMENTATION_ROOT String Root URL for documentation links. This field is useful when Red Hat Quay is configured for disconnected environments to set an alternative, or allowlisted, documentation link.
SECURITY_SCANNER_V4_NAMESPACE_WHITELIST String The namespaces for which the security scanner should be enabled FEATURE_RESTRICTED_V1_PUSH Boolean If set to true, only namespaces listed in V1_PUSH_WHITELIST support V1 push Default: False V1_PUSH_WHITELIST Array of String The array of namespace names that support V1 push if FEATURE_RESTRICTED_V1_PUSH is set to true FEATURE_HELM_OCI_SUPPORT Boolean Enable support for Helm artifacts. Default: False ALLOWED_OCI_ARTIFACT_TYPES Object The set of allowed OCI artifact MIME types and the associated layer types. 3.39. User interface v2 configuration fields Table 3.43. User interface v2 configuration fields Field Type Description FEATURE_UI_V2 Boolean When set, allows users to try the beta UI environment. Default: False FEATURE_UI_V2_REPO_SETTINGS Boolean When set to True , enables repository settings in the Red Hat Quay v2 UI. Default: False 3.39.1. v2 user interface configuration With FEATURE_UI_V2 enabled, you can toggle between the current version of the user interface and the new version of the user interface. Important This UI is currently in beta and subject to change. In its current state, users can only create, view, and delete organizations, repositories, and image tags. When running Red Hat Quay in the old UI, timed-out sessions would require that the user input their password again in the pop-up window. With the new UI, users are returned to the main page and required to input their username and password credentials. This is a known issue and will be fixed in a future version of the new UI. There is a discrepancy in how image manifest sizes are reported between the legacy UI and the new UI. In the legacy UI, image manifests were reported in mebibytes. In the new UI, Red Hat Quay uses the standard definition of megabyte (MB) to report image manifest sizes. Procedure In your deployment's config.yaml file, add the FEATURE_UI_V2 parameter and set it to true , for example: --- FEATURE_TEAM_SYNCING: false FEATURE_UI_V2: true FEATURE_USER_CREATION: true --- Log in to your Red Hat Quay deployment. In the navigation pane of your Red Hat Quay deployment, you are given the option to toggle between Current UI and New UI . Click the toggle button to set it to new UI, and then click Use Beta Environment . 3.40. IPv6 configuration field Table 3.44. IPv6 configuration field Field Type Description FEATURE_LISTEN_IP_VERSION String Enables IPv4, IPv6, or dual-stack protocol family. This configuration field must be properly set, otherwise Red Hat Quay fails to start. Default: IPv4 Additional configurations: IPv6 , dual-stack 3.41. Branding configuration fields Table 3.45. Branding configuration fields Field Type Description BRANDING Object Custom branding for logos and URLs in the Red Hat Quay UI. .logo (Required) String Main logo image URL. The header logo defaults to 205x30 PX. The form logo on the Red Hat Quay sign in screen of the web UI defaults to 356.5x39.7 PX. Example: /static/img/quay-horizontal-color.svg .footer_img String Logo for UI footer. Defaults to 144x34 PX. Example: /static/img/RedHat.svg .footer_url String Link for footer image. Example: https://redhat.com 3.41.1. Example configuration for Red Hat Quay branding Branding config.yaml example BRANDING: logo: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg footer_img: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg footer_url: https://opensourceworld.org/ 3.42.
Session timeout configuration field The following configuration field relies on the Flask API configuration field of the same name. Table 3.46. Session logout configuration field Field Type Description PERMANENT_SESSION_LIFETIME Integer A timedelta which is used to set the expiration date of a permanent session. The default is 31 days, which makes a permanent session survive for roughly one month. Default: 2678400 3.42.1. Example session timeout configuration The following YAML is the suggested configuration when enabling session lifetime. Important Altering session lifetime is not recommended. Administrators should be aware of the allotted time when setting a session timeout. If you set the timeout too short, it might interrupt your workflow. Session timeout YAML configuration PERMANENT_SESSION_LIFETIME: 3000
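The build trigger tables in section 3.25 list the GitHub, BitBucket, and GitLab fields individually but do not show them combined. The following minimal sketch assembles a GitHub build trigger configuration from only the fields named in Table 3.26; the endpoint and credential values are placeholders, not tested values. Example GitHub build trigger configuration
FEATURE_GITHUB_BUILD: true
GITHUB_TRIGGER_CONFIG:
  GITHUB_ENDPOINT: https://github.com/
  API_ENDPOINT: https://api.github.com/
  CLIENT_ID: <registered_client_id>
  CLIENT_SECRET: <registered_client_secret>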
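The mail fields in section 3.31 are also listed without a combined example. The following is a minimal sketch built from the fields in Table 3.35 only; the server, port, and credential values are placeholders and should be replaced with the details of your SMTP service. Example mail configuration
FEATURE_MAILING: true
MAIL_SERVER: smtp.example.com
MAIL_PORT: 587
MAIL_USE_TLS: true
MAIL_USERNAME: <smtp_username>
MAIL_PASSWORD: <smtp_password>
MAIL_DEFAULT_SENDER: <default_sender_address>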
[ "DB_CONNECTION_ARGS: sslmode: verify-ca sslrootcert: /path/to/cacert", "DB_CONNECTION_ARGS: ssl: ca: /path/to/cacert", "DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default", "DISTRIBUTED_STORAGE_CONFIG: rhocsStorage: - RHOCSStorage - access_key: access_key_here secret_key: secret_key_here bucket_name: quay-datastore-9b2108a3-29f5-43f2-a9d5-2872174f9a56 hostname: s3.openshift-storage.svc.cluster.local is_secure: 'true' port: '443' storage_path: /datastorage/registry maximum_chunk_size_mb: 100 1 server_side_assembly: true 2", "DISTRIBUTED_STORAGE_CONFIG: radosGWStorage: 1 - RadosGWStorage - access_key: <access_key_here> bucket_name: <bucket_name_here> hostname: <hostname_here> is_secure: true port: '443' secret_key: <secret_key_here> storage_path: /datastorage/registry maximum_chunk_size_mb: 100 2 server_side_assembly: true 3", "DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage 1 - host: s3.us-east-2.amazonaws.com s3_access_key: ABCDEFGHIJKLMN s3_secret_key: OL3ABCDEFGHIJKLMN s3_bucket: quay_bucket s3_region: <region> 2 storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default", "DISTRIBUTED_STORAGE_CONFIG: default: - STSS3Storage - sts_role_arn: <role_arn> 1 s3_bucket: <s3_bucket_name> storage_path: <storage_path> sts_user_access_key: <s3_user_access_key> 2 sts_user_secret_key: <s3_user_secret_key> 3 s3_region: <region> 4 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default", "DISTRIBUTED_STORAGE_CONFIG: googleCloudStorage: - GoogleCloudStorage - access_key: GOOGQIMFB3ABCDEFGHIJKLMN bucket_name: quay-bucket secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN storage_path: /datastorage/registry boto_timeout: 120 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - googleCloudStorage", "DISTRIBUTED_STORAGE_CONFIG: azureStorage: - AzureStorage - azure_account_name: azure_account_name_here azure_container: azure_container_here storage_path: /datastorage/registry azure_account_key: azure_account_key_here sas_token: some/path/ endpoint_url: https://[account-name].blob.core.usgovcloudapi.net 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - azureStorage", "DISTRIBUTED_STORAGE_CONFIG: swiftStorage: - SwiftStorage - swift_user: swift_user_here swift_password: swift_password_here swift_container: swift_container_here auth_url: https://example.org/swift/v1/quay auth_version: 3 os_options: tenant_id: <osp_tenant_id_here> user_domain_name: <osp_domain_name_here> ca_cert_path: /conf/stack/swift.cert\" storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - swiftStorage", "DISTRIBUTED_STORAGE_CONFIG: nutanixStorage: #storage config name - RadosGWStorage #actual driver - access_key: access_key_here #parameters secret_key: secret_key_here bucket_name: bucket_name_here hostname: hostname_here is_secure: 'true' port: '443' storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: #must contain name of the storage config - nutanixStorage", "DISTRIBUTED_STORAGE_CONFIG: default: - IBMCloudStorage #actual driver - access_key: <access_key_here> #parameters secret_key: <secret_key_here> bucket_name: <bucket_name_here> hostname: <hostname_here> is_secure: 'true' port: '443' storage_path: /datastorage/registry maximum_chunk_size_mb: 100mb 1 
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - default DISTRIBUTED_STORAGE_PREFERENCE: - default", "DISTRIBUTED_STORAGE_CONFIG: local_us: - RadosGWStorage - access_key: <access_key> bucket_name: <bucket_name> hostname: <host_url_address> is_secure: true port: <port> secret_key: <secret_key> storage_path: /datastorage/registry signature_version: v4 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - local_us DISTRIBUTED_STORAGE_PREFERENCE: - local_us", "BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 ssl: true USER_EVENTS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 ssl: true ssl_*: <path_location_or_certificate>", "DATA_MODEL_CACHE_CONFIG: engine: redis redis_config: primary: host: <host> port: <port> password: <password if ssl is true> ssl: <true | false > replica: host: <host> port: <port> password: <password if ssl is true> ssl: <true | false >", "DATA_MODEL_CACHE_CONFIG: engine: rediscluster redis_config: startup_nodes: - host: <cluster-host> port: <port> password: <password if ssl: true> read_from_replicas: <true|false> skip_full_coverage_check: <true | false> ssl: <true | false >", "DEFAULT_TAG_EXPIRATION: 2w TAG_EXPIRATION_OPTIONS: - 0s - 1d - 1w - 2w - 4w - 3y", "DEFAULT_NAMESPACE_AUTOPRUNE_POLICY: method: number_of_tags value: 10 1", "DEFAULT_NAMESPACE_AUTOPRUNE_POLICY: method: creation_date value: 1y", "**Default:** `False`", "FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 RESET_CHILD_MANIFEST_EXPIRATION: true", "SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true SUPER_USERS: - quayadmin", "FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false", "FEATURE_UI_V2: true FEATURE_UI_V2_REPO_SETTINGS: true FEATURE_AUTO_PRUNE: true ROBOTS_DISALLOW: false", "FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false", "oc create secret generic -n quay-enterprise --from-file config.yaml=./config.yaml init-config-bundle-secret", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: init-config-bundle-secret", "oc create -n quay-enterprise -f quayregistry.yaml", "FEATURE_USER_INITIALIZE: true SUPER_USERS: - quayadmin", "sudo podman stop quay", "sudo podman run -d -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}", "curl -X POST -k http://quay-server.example.com/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ \"username\": \"quayadmin\", \"password\":\"quaypass12345\", \"email\": \"[email protected]\", \"access_token\": true}'", "{\"access_token\":\"6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED\", \"email\":\"[email protected]\",\"encrypted_password\":\"1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW\",\"username\":\"quayadmin\"} # gitleaks:allow", "{\"message\":\"Cannot initialize user in a non-empty database\"}", "{\"message\":\"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace.\"}", "sudo podman login -u quayadmin -p quaypass12345 http://quay-server.example.com --tls-verify=false", "Login Succeeded!", "curl -X GET -k -H \"Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED\" 
https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/superuser/users/", "{ \"users\": [ { \"kind\": \"user\", \"name\": \"quayadmin\", \"username\": \"quayadmin\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": { \"name\": \"quayadmin\", \"hash\": \"3e82e9cbf62d25dec0ed1b4c66ca7c5d47ab9f1f271958298dea856fb26adc4c\", \"color\": \"#e7ba52\", \"kind\": \"user\" }, \"super_user\": true, \"enabled\": true } ] }", "curl -X POST -k --header 'Content-Type: application/json' -H \"Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED\" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/ --data '{\"name\": \"testorg\", \"email\": \"[email protected]\"}'", "\"Created\"", "curl -X GET -k --header 'Content-Type: application/json' -H \"Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED\" https://min-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg", "{ \"name\": \"testorg\", \"email\": \"[email protected]\", \"avatar\": { \"name\": \"testorg\", \"hash\": \"5f113632ad532fc78215c9258a4fb60606d1fa386c91b141116a1317bf9c53c8\", \"color\": \"#a55194\", \"kind\": \"user\" }, \"is_admin\": true, \"is_member\": true, \"teams\": { \"owners\": { \"name\": \"owners\", \"description\": \"\", \"role\": \"admin\", \"avatar\": { \"name\": \"owners\", \"hash\": \"6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90\", \"color\": \"#c7c7c7\", \"kind\": \"team\" }, \"can_view\": true, \"repo_count\": 0, \"member_count\": 1, \"is_synced\": false } }, \"ordered_teams\": [ \"owners\" ], \"invoice_email\": false, \"invoice_email_address\": null, \"tag_expiration_s\": 1209600, \"is_free_account\": true }", "cp ~/ssl.cert USDQUAY/config cp ~/ssl.key USDQUAY/config cd USDQUAY/config", "SERVER_HOSTNAME: quay-server.example.com PREFERRED_URL_SCHEME: https", "cat storage.crt -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV [...] 
-----END CERTIFICATE-----", "mkdir -p quay/config/extra_ca_certs cp storage.crt quay/config/extra_ca_certs/ tree quay/config/ ├── config.yaml ├── extra_ca_certs │ ├── storage.crt", "sudo podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS 5a3e82c4a75f <registry>/<repo>/quay:v3.12.8 \"/sbin/my_init\" 24 hours ago Up 18 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp grave_keller", "sudo podman restart 5a3e82c4a75f", "sudo podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV", "--- AUTHENTICATION_TYPE: LDAP 1 --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com 2 LDAP_ADMIN_PASSWD: ABC123 3 LDAP_ALLOW_INSECURE_FALLBACK: false 4 LDAP_BASE_DN: 5 - dc=example - dc=com LDAP_EMAIL_ATTR: mail 6 LDAP_UID_ATTR: uid 7 LDAP_URI: ldap://<example_url>.com 8 LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) 9 LDAP_USER_RDN: 10 - ou=people LDAP_SECONDARY_USER_RDNS: 11 - ou=<example_organization_unit_one> - ou=<example_organization_unit_two> - ou=<example_organization_unit_three> - ou=<example_organization_unit_four>", "AUTHENTICATION_TYPE: LDAP FEATURE_RESTRICTED_USERS: true 1 LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) 2 LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com", "AUTHENTICATION_TYPE: LDAP LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_SUPERUSER_FILTER: (<filterField>=<value>) 1 LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com", "FEATURE_SECURITY_NOTIFICATIONS: true FEATURE_SECURITY_SCANNER: true FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX: true SECURITY_SCANNER_INDEXING_INTERVAL: 30 SECURITY_SCANNER_V4_MANIFEST_CLEANUP: true SECURITY_SCANNER_V4_ENDPOINT: http://quay-server.example.com:8081 SECURITY_SCANNER_V4_PSK: MTU5YzA4Y2ZkNzJoMQ== SERVER_HOSTNAME: quay-server.example.com SECURITY_SCANNER_V4_INDEX_MAX_LAYER_SIZE: 8G 1", "FEATURE_GENERAL_OCI_SUPPORT: true", "FEATURE_REFERRERS_API: True", "IGNORE_UNKNOWN_MEDIATYPES: true", "LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk splunk_config: host: http://<user_name>.remote.csb port: 8089 bearer_token: <bearer_token> url_scheme: <http/https> verify_ssl: False index_prefix: <splunk_log_index_name> ssl_ca_path: <location_to_ssl-ca-cert.pem>", "LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk_hec splunk_hec_config: 1 host: prd-p-aaaaaq.splunkcloud.com 2 port: 8088 3 hec_token: 12345678-1234-1234-1234-1234567890ab 4 url_scheme: https 5 verify_ssl: False 6 index: quay 7 splunk_host: quay-dev 8 splunk_sourcetype: quay_logs 9", 
"AUTHENTICATION_TYPE: OIDC AZURE_LOGIN_CONFIG: CLIENT_ID: <client_id> CLIENT_SECRET: <client_secret> OIDC_SERVER: <oidc_server_address_> DEBUGGING: true SERVICE_NAME: Microsoft Entra ID VERIFIED_EMAIL_CLAIM_NAME: <verified_email> OIDC_DISABLE_USER_ENDPOINT: true OIDC_ENDPOINT_CUSTOM_PARAMS\": \"authorization_endpoint\": \"some\": \"param\",", "FEATURE_EXTENDED_REPOSITORY_NAMES: true", "--- SUPER_USERS: - quayadmin FEATURE_SUPERUSERS_FULL_ACCESS: True ---", "--- GLOBAL_READONLY_SUPER_USERS: - user1 ---", "--- AUTHENTICATION_TYPE: Database --- --- FEATURE_RESTRICTED_USERS: true ---", "--- AUTHENTICATION_TYPE: Database --- --- FEATURE_RESTRICTED_USERS: true RESTRICTED_USERS_WHITELIST: - user1 ---", "--- FEATURE_TEAM_SYNCING: false FEATURE_UI_V2: true FEATURE_USER_CREATION: true ---", "BRANDING: logo: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg footer_img: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg footer_url: https://opensourceworld.org/", "PERMANENT_SESSION_LIFETIME: 3000" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/configure_red_hat_quay/config-fields-intro
Chapter 6. Migrating virtual machines from VMware vSphere
Chapter 6. Migrating virtual machines from VMware vSphere 6.1. Adding a VMware vSphere source provider You can migrate VMware vSphere VMs from VMware vCenter or from a VMWare ESX/ESXi server. In MTV versions 2.6 and later, you can migrate directly from an ESX/ESXi server, without going through vCenter, by specifying the SDK endpoint to that of an ESX/ESXi server. Important EMS enforcement is disabled for migrations with VMware vSphere source providers in order to enable migrations from versions of vSphere that are supported by Migration Toolkit for Virtualization but do not comply with the 2023 FIPS requirements. Therefore, users should consider whether migrations from vSphere source providers risk their compliance with FIPS. Supported versions of vSphere are specified in Software compatibility guidelines . Note If you input any value of maximum transmission unit (MTU) besides the default value in your migration network, you must also input the same value in the OpenShift transfer network that you use. For more information about the OpenShift transfer network, see Creating a migration plan . Prerequisites It is strongly recommended to create a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters. A VDDK image accelerates migration and reduces the risk of a plan failing. If you are not using VDDK and a plan fails, then please retry with VDDK installed. For more information, see Creating a VDDK image . Warning Virtual machine (VM) migrations do not work without VDDK when a VM is backed by VMware vSAN. Procedure In the Red Hat OpenShift web console, click Migration Providers for virtualization . Click Create Provider . Click vSphere . Specify the following fields: Provider details Provider resource name : Name of the source provider. Endpoint type : Select the vSphere provider endpoint type. Options: vCenter or ESXi . You can migrate virtual machines from vCenter, an ESX/ESXi server that is not managed by vCenter, or from an ESX/ESXi server that is managed by vCenter but does not go through vCenter. URL : URL of the SDK endpoint of the vCenter on which the source VM is mounted. Ensure that the URL includes the sdk path, usually /sdk . For example, https://vCenter-host-example.com/sdk . If a certificate for FQDN is specified, the value of this field needs to match the FQDN in the certificate. VDDK init image : VDDKInitImage path. It is strongly recommended to create a VDDK init image to accelerate migrations. For more information, see Creating a VDDK image . Provider credentials Username : vCenter user or ESXi user. For example, [email protected] . Password : vCenter user password or ESXi user password. Choose one of the following options for validating CA certificates: Use a custom CA certificate : Migrate after validating a custom CA certificate. Use the system CA certificate : Migrate after validating the system CA certificate. Skip certificate validation : Migrate without validating a CA certificate. To use a custom CA certificate, leave the Skip certificate validation switch toggled to left, and either drag the CA certificate to the text box or browse for it and click Select . To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty. To skip certificate validation, toggle the Skip certificate validation switch to the right. Optional: Ask MTV to fetch a custom CA certificate from the provider's API endpoint URL. Click Fetch certificate from URL . 
The Verify certificate window opens. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm . If not, click Cancel , and then, enter the correct certificate information manually. Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint. Click Create provider to add and save the provider. The provider appears in the list of providers. Note It might take a few minutes for the provider to have the status Ready . Optional: Add access to the UI of the provider: On the Providers page, click the provider. The Provider details page opens. Click the Edit icon under External UI web link . Enter the link and click Save . Note If you do not enter a link, MTV attempts to calculate the correct link. If MTV succeeds, the hyperlink of the field points to the calculated link. If MTV does not succeed, the field remains empty. 6.2. Selecting a migration network for a VMware source provider You can select a migration network in the Red Hat OpenShift web console for a source provider to reduce risk to the source environment and to improve performance. Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network. Note You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere. Note If you input any value of maximum transmission unit (MTU) besides the default value in your migration network, you must also input the same value in the OpenShift transfer network that you use. For more information about the OpenShift transfer network, see Creating a migration plan . Prerequisites The migration network must have sufficient throughput, minimum speed of 10 Gbps, for disk transfer. The migration network must be accessible to the OpenShift Virtualization nodes through the default gateway. Note The source virtual disks are copied by a pod that is connected to the pod network of the target namespace. The migration network should have jumbo frames enabled. Procedure In the Red Hat OpenShift web console, click Migration Providers for virtualization . Click the host number in the Hosts column beside a provider to view a list of hosts. Select one or more hosts and click Select migration network . Specify the following fields: Network : Network name ESXi host admin username : For example, root ESXi host admin password : Password Click Save . Verify that the status of each host is Ready . If a host status is not Ready , the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes. 6.3. Adding an OpenShift Virtualization destination provider You can use a Red Hat OpenShift Virtualization provider as both a source provider and destination provider. Specifically, the host cluster that is automatically added as a OpenShift Virtualization provider can be used as both a source provider and a destination provider. You can also add another OpenShift Virtualization destination provider to the Red Hat OpenShift web console in addition to the default OpenShift Virtualization destination provider, which is the cluster where you installed MTV. 
You can migrate VMs from the cluster that MTV is deployed on to another cluster, or from a remote cluster to the cluster that MTV is deployed on. Prerequisites You must have an OpenShift Virtualization service account token with cluster-admin privileges. Procedure In the Red Hat OpenShift web console, click Migration Providers for virtualization . Click Create Provider . Click OpenShift Virtualization . Specify the following fields: Provider resource name : Name of the source provider URL : URL of the endpoint of the API server Service account bearer token : Token for a service account with cluster-admin privileges If both URL and Service account bearer token are left blank, the local OpenShift cluster is used. Choose one of the following options for validating CA certificates: Use a custom CA certificate : Migrate after validating a custom CA certificate. Use the system CA certificate : Migrate after validating the system CA certificate. Skip certificate validation : Migrate without validating a CA certificate. To use a custom CA certificate, leave the Skip certificate validation switch toggled to left, and either drag the CA certificate to the text box or browse for it and click Select . To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty. To skip certificate validation, toggle the Skip certificate validation switch to the right. Optional: Ask MTV to fetch a custom CA certificate from the provider's API endpoint URL. Click Fetch certificate from URL . The Verify certificate window opens. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm . If not, click Cancel , and then, enter the correct certificate information manually. Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint. Click Create provider to add and save the provider. The provider appears in the list of providers. 6.4. Selecting a migration network for an OpenShift Virtualization provider You can select a default migration network for an OpenShift Virtualization provider in the Red Hat OpenShift web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured. If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer. Note You can override the default migration network of the provider by selecting a different network when you create a migration plan. Procedure In the Red Hat OpenShift web console, click Migration > Providers for virtualization . Click the OpenShift Virtualization provider whose migration network you want to change. When the Providers detail page opens: Click the Networks tab. Click Set default transfer network . Select a default transfer network from the list and click Save . 6.5. Creating a migration plan Use the Red Hat OpenShift web console to create a migration plan. Specify the source provider, the virtual machines (VMs) you want to migrate, and other plan details. Warning Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration. This prevents concurrent disk access to the storage the guest points to. 
Important A plan cannot contain more than 500 VMs or 500 disks. Procedure In the Red Hat OpenShift web console, click Plans for virtualization and then click Create Plan . The Create migration plan wizard opens to the Select source provider interface. Select the source provider of the VMs you want to migrate. The Select virtual machines interface opens. Select the VMs you want to migrate and click Next . The Create migration plan pane opens. It displays the source provider's name and suggestions for a target provider and namespace, a network map, and a storage map. Enter the Plan name . To change the Target provider , the Target namespace , or elements of the Network map or the Storage map , select an item from the relevant list. To add either a Network map or a Storage map , click the + sign and add a mapping. Click Create migration plan . MTV validates the migration plan, and the Plan details page opens, indicating whether the plan is ready for use or contains an error. The details of the plan are listed, and you can edit the items you filled in on the page. If you make any changes, MTV validates the plan again. Check the following items in the Settings section of the page: Warm migration : By default, all migrations are cold migrations. For a warm migration, click the Edit icon and select Warm migration . Transfer Network : The network used to transfer the VMs to OpenShift Virtualization. By default, this is the default transfer network of the provider. Verify that the transfer network is in the selected target namespace. To edit the transfer network, click the Edit icon, choose a different transfer network from the list in the window that opens, and click Save . You can configure an OpenShift network in the OpenShift web console by clicking Networking > NetworkAttachmentDefinitions ; a sample definition is sketched at the end of this chapter. To learn more about the different types of networks OpenShift supports, see Additional Networks in OpenShift Container Platform . If you want to adjust the maximum transmission unit (MTU) of the OpenShift transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider . Target namespace : Destination namespace to be used by all the migrated VMs. By default, this is the current or active namespace. To edit the namespace, click the Edit icon, choose a different target namespace from the list in the window that opens, and click Save . Preserve static IPs : By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP linked to the interface name in the guest VM lose their IP. To avoid this, click the Edit icon to Preserve static IPs and toggle the Whether to preserve the static IPs switch in the window that opens. Then click Save . MTV then issues a warning message about any VMs for which vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere in order for the vNIC properties to be reported to MTV. Disk decryption passphrases : For disks encrypted using Linux Unified Key Setup (LUKS). To enter a list of decryption passphrases for LUKS-encrypted devices, in the Settings section, click the Edit icon to Disk decryption passphrases , enter the passphrases, and then click Save . You do not need to enter the passphrases in a specific order. For each LUKS-encrypted device, MTV tries each passphrase until one unlocks the device. Root device : Applies to multi-boot VM migrations only.
By default, MTV uses the first bootable device detected as the root device. To specify a different root device, in the Settings section, click the Edit icon to Root device and choose a device from the list of commonly-used options, or enter a device in the text box. MTV uses the following format for disk location: /dev/sd<disk_identifier><disk_partition> . For example, if the second disk is the root device and the operating system is on the disk's second partition, the format would be: /dev/sdb2 . After you enter the boot device, click Save . If the conversion fails because the boot device provided is incorrect, it is possible to get the correct information by checking the conversion pod logs. Important When you migrate a VMware 7 VM to an OpenShift 4.13+ platform that uses CentOS 7.9, the name of the network interfaces changes and the static IP configuration for the VM no longer works. 6.6. Running a migration plan You can run a migration plan and view its progress in the Red Hat OpenShift web console. Prerequisites Valid migration plan. Procedure In the Red Hat OpenShift web console, click Migration Plans for virtualization . The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan. Click Start beside a migration plan to start the migration. Click Start in the confirmation window that opens. The plan's Status changes to Running , and the migration's progress is displayed. Warm migration only: The precopy stage starts. Click Cutover to complete the migration. Optional: Click the links in the migration's Status to see its overall status and the status of each VM: The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled. The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data: The name of the VM The start and end times of the migration The amount of data copied A progress pipeline for the VM's migration Warning vMotion, including svMotion, and relocation must be disabled for VMs that are being imported to avoid data corruption. Optional: To view your migration's logs, either as it is running or after it is completed, perform the following actions: Click the Virtual Machines tab. Click the arrow ( > ) to the left of the virtual machine whose migration progress you want to check. The VM's details are displayed. In the Pods section, in the Pod links column, click the Logs link. The Logs tab opens. Note Logs are not always available. The following are common reasons for logs not being available: The migration is from OpenShift Virtualization to OpenShift Virtualization. In this case, virt-v2v is not involved, so no pod is required. No pod was created. The pod was deleted. The migration failed before running the pod. To see the raw logs, click the Raw link. To download the logs, click the Download link. 6.7. Migration plan options On the Plans for virtualization page of the Red Hat OpenShift web console, you can click the Options menu beside a migration plan to access the following options: Edit Plan : Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options: All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs. 
The plan's mapping on the Mappings tab. The hooks listed on the Hooks tab. Start migration : Active only if relevant. Restart migration : Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan. Cutover : Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options: Set cutover : Set the date and time for a cutover. Remove cutover : Cancel a scheduled cutover. Active only if relevant. Duplicate Plan : Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks: Migrate VMs to a different namespace. Edit an archived migration plan. Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready. Archive Plan : Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted. Note Archive Plan is irreversible. However, you can duplicate an archived plan. Delete Plan : Permanently remove a migration plan. You cannot delete a running migration plan. Note Delete Plan is irreversible. Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan first before deleting it. 6.8. Canceling a migration You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the Red Hat OpenShift web console. Procedure In the Red Hat OpenShift web console, click Plans for virtualization . Click the name of a running migration plan to view the migration details. Select one or more VMs and click Cancel . Click Yes, cancel to confirm the cancellation. In the Migration details by VM list, the status of the canceled VMs is Canceled . The unmigrated and the migrated virtual machines are not affected. You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
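A transfer network other than the pod network, as referenced in Creating a migration plan, is defined by a NetworkAttachmentDefinition in the target namespace. The following is a minimal sketch only, assuming a Linux bridge named br1 on the worker nodes and the whereabouts IPAM plugin; the bridge name, address range, and MTU are illustrative and must match your environment, and the MTU must stay consistent with the VMware migration network as noted earlier.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: migration-network
  namespace: <target_namespace>
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "migration-network",
      "type": "bridge",
      "bridge": "br1",
      "mtu": 9000,
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.5.0/24"
      }
    }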
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.7/html/installing_and_using_the_migration_toolkit_for_virtualization/migrating-vmware
Chapter 3. Event-driven APIs
Chapter 3. Event-driven APIs Many of the APIs provided with AMQ Clients are asynchronous, event-driven APIs. These include the C++, JavaScript, Python, and Ruby APIs. These APIs work by executing application event-handling functions in response to network activity. The library monitors network I/O and fires events. The event handlers run sequentially on the main library thread. Because the event handlers run on the main library thread, the handler code must not contain any long-running blocking operations. Blocking in an event handler blocks all library execution. If you need to execute a long blocking operation, you must call it on a separate thread. The event-driven APIs include cross-thread communication facilities to support coordination between the library thread and application threads. Avoid blocking in event handlers Long-running blocking calls in event handlers stop all library execution, preventing the library from handling other events and performing periodic tasks. Always start long-running blocking procedures in a separate application thread.
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/amq_clients_overview/event_driven_apis
Preface
Preface Red Hat Fuse is a lightweight, flexible integration platform that enables rapid integration across the extended enterprise, on-premise or in the cloud. Based on Apache Camel, Fuse leverages pattern-based integration, a rich connector catalog, and extensive data transformation capabilities to enable users to integrate anything.
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/installing_on_apache_karaf/pr01
Preface
Preface Cost management helps you monitor and analyze your OpenShift Container Platform and Public cloud costs to improve the management of your business. It is based on the upstream project Koku. To get started, learn about the following topics: What you can do with cost management and why your organization might want to use it How to set up and configure cost management How to adjust your settings after setup How to use cost management You can use cost management to track cost and usage data for your Amazon Web Services (AWS), Microsoft Azure, Google Cloud, Oracle Cloud, and OpenShift Container Platform environments.
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/getting_started_with_cost_management/pr01
F.3. Running Additional Programs at Boot Time
F.3. Running Additional Programs at Boot Time The /etc/rc.d/rc.local script is executed by the init command at boot time or when changing runlevels. Adding commands to the bottom of this script is an easy way to perform necessary tasks like starting special services or initializing devices without writing complex initialization scripts in the /etc/rc.d/init.d/ directory and creating symbolic links. The /etc/rc.serial script is used if serial ports must be set up at boot time. This script runs setserial commands to configure the system's serial ports. Refer to the setserial man page for more information.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-boot-init-shutdown-run-boot
Chapter 50. Virtualization
Chapter 50. Virtualization USB 3.0 support for KVM guests USB 3.0 host adapter (xHCI) emulation for KVM guests remains a Technology Preview in Red Hat Enterprise Linux 7. (BZ#1103193) Select Intel network adapters now support SR-IOV as a guest on Hyper-V In this update for Red Hat Enterprise Linux guest virtual machines running on Hyper-V, a new PCI passthrough driver adds the ability to use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters supported by the ixgbevf driver. This ability is enabled when the following conditions are met: SR-IOV support is enabled for the network interface controller (NIC) SR-IOV support is enabled for the virtual NIC SR-IOV support is enabled for the virtual switch The virtual function (VF) from the NIC is attached to the virtual machine. The feature is currently supported with Microsoft Windows Server 2016. (BZ#1348508) No-IOMMU mode for VFIO drivers As a Technology Preview, this update adds No-IOMMU mode for virtual function I/O (VFIO) drivers. The No-IOMMU mode provides the user with full user-space I/O (UIO) access to a direct memory access (DMA)-capable device without a I/O memory management unit (IOMMU). Note that in addition to not being supported, using this mode is not secure due to the lack of I/O management provided by IOMMU. (BZ# 1299662 ) virt-v2v can now use vmx configuration files to convert VMware guests As a Technology Preview, the virt-v2v utility now includes the vmx input mode, which enables the user to convert a guest virtual machine from a VMware vmx configuration file. Note that to do this, you also need access to the corresponding VMware storage, for example by mounting the storage using NFS. It is also possible to access the storage using SSH, by adding the -it ssh parameter. (BZ# 1441197 , BZ# 1523767 ) virt-v2v can convert Debian and Ubuntu guests As a technology preview, the virt-v2v utility can now convert Debian and Ubuntu guest virtual machines. Note that the following problems currently occur when performing this conversion: virt-v2v cannot change the default kernel in the GRUB2 configuration, and the kernel configured in the guest is not changed during the conversion, even if a more optimal version of the kernel is available on the guest. After converting a Debian or Ubuntu VMware guest to KVM, the name of the guest's network interface may change, and thus requires manual configuration. (BZ# 1387213 ) Virtio devices can now use vIOMMU As a Technology Preview, this update enables virtio devices to use virtual Input/Output Memory Management Unit (vIOMMU). This guarantees the security of Direct Memory Access (DMA) by allowing the device to DMA only to permitted addresses. However, note that only guest virtual machines using Red Hat Enterprise Linux 7.4 or later are able to use this feature. (BZ# 1283251 , BZ#1464891) virt-v2v converts VMWare guests faster and more reliably As a Technology Preview, the virt-v2v utility can now use the VMWare Virtual Disk Development Kit (VDDK) to import a VMWare guest virtual machine to a KVM guest. This enables virt-v2v to connect directly to the VMWare ESXi hypervisor, which improves the speed and reliability of the conversion. Note that this conversion import method requires the external nbdkit utility and its VDDK plug-in. (BZ#1477912) Open Virtual Machine Firmware The Open Virtual Machine Firmware (OVMF) is available as a Technology Preview in Red Hat Enterprise Linux 7. OVMF is a UEFI secure boot environment for AMD64 and Intel 64 guests. 
However, OVMF is not bootable with virtualization components available in RHEL 7. Note that OVMF is fully supported in RHEL 8. (BZ#653382) GPU-based mediated devices now support the VNC console As a Technology Preview, the Virtual Network Computing (VNC) console is now available for use with GPU-based mediated devices, such as the NVIDIA vGPU technology. As a result, it is now possible to use these mediated devices for real-time rendering of a virtual machine's graphical output. (BZ# 1475770 , BZ#1470154, BZ#1555246) Azure M416v2 as a host for RHEL 7 guests As a Technology Preview, the Azure M416v2 instance type can now be used as a host for virtual machines that use RHEL 7.6 and later as the guest operating systems. (BZ#1661654)
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/technology_previews_virtualization
7.167. openssh
7.167. openssh 7.167.1. RHSA-2013:0519 - Moderate: openssh security, bug fix and enhancement update Updated openssh packages that fix one security issue, multiple bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. OpenSSH is OpenBSD's Secure Shell (SSH) protocol implementation. These packages include the core files necessary for the OpenSSH client and server. Security Fix CVE-2012-5536 Due to the way the pam_ssh_agent_auth PAM module was built in Red Hat Enterprise Linux 6, the glibc's error() function was called rather than the intended error() function in pam_ssh_agent_auth to report errors. As these two functions expect different arguments, it was possible for an attacker to cause an application using pam_ssh_agent_auth to crash, disclose portions of its memory or, potentially, execute arbitrary code. Note Note that the pam_ssh_agent_auth module is not used in Red Hat Enterprise Linux 6 by default. Bug Fixes BZ# 821641 All possible options for the new RequiredAuthentications directive were not documented in the sshd_config man page. This update improves the man page to document all the possible options. BZ# 826720 When stopping one instance of the SSH daemon (sshd), the sshd init script (/etc/rc.d/init.d/sshd) stopped all sshd processes regardless of the PID of the processes. This update improves the init script so that it only kills processes with the relevant PID. As a result, the init script now works more reliably in a multi-instance environment. BZ# 836650 Due to a regression, the ssh-copy-id command returned an exit status code of zero even if there was an error in copying the key to a remote host. With this update, a patch has been applied and ssh-copy-id now returns a non-zero exit code if there is an error in copying the SSH certificate to a remote host. BZ#836655 When SELinux was disabled on the system, no on-disk policy was installed, a user account was used for a connection, and no "~/.ssh" configuration was present in that user's home directory, the SSH client terminated unexpectedly with a segmentation fault when attempting to connect to another system. A patch has been provided to address this issue and the crashes no longer occur in the described scenario. BZ# 857760 The "HOWTO" document /usr/share/doc/openssh-ldap-5.3p1/HOWTO.ldap-keys incorrectly documented the use of the AuthorizedKeysCommand directive. This update corrects the document. Enhancements BZ#782912 When attempting to enable SSH for use with a Common Access Card (CAC), the ssh-agent utility read all the certificates in the card even though only the ID certificate was needed. Consequently, if a user entered their PIN incorrectly, then the CAC was locked, as a match for the PIN was attempted against all three certificates. With this update, ssh-add does not try the same PIN for every certificate if the PIN fails for the first one. As a result, the CAC will not be disabled if a user enters their PIN incorrectly. BZ#860809 This update adds a "netcat mode" to SSH. The "ssh -W host:port ..." command connects standard input and output (stdio) on a client to a single port on a server. As a result, SSH can be used to route connections via intermediate servers. 
BZ# 869903 Due to a bug, arguments for the RequiredAuthentications2 directive were not stored in a Match block. Consequently, parsing of the config file was not in accordance with the man sshd_config documentation. This update fixes the bug and users can now use the required authentication feature to specify a list of authentication methods as expected according to the man page. All users of openssh are advised to upgrade to these updated packages, which fix these issues and add these enhancements. After installing this update, the OpenSSH server daemon (sshd) will be restarted automatically.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/openssh
Chapter 8. Fixed issues
Chapter 8. Fixed issues The issues fixed in Streams for Apache Kafka 2.7 on OpenShift. For details of the issues fixed in Kafka 3.7.0, refer to the Kafka 3.7.0 Release Notes. Table 8.1. Fixed issues Issue Number Description ENTMQST-5839 OAuth issue fix: fallbackUsernamePrefix had no effect ENTMQST-5820 MM2 connector task fails for Oauth enabled clusters due to unwanted quotes in the resulting sasl.jaas.config values. ENTMQST-5754 Avoid unnecessary patching when resources are being deleted ENTMQST-5753 Producing with different embedded formats across multiple HTTP requests isn't honoured ENTMQST-5656 Cruise Control not restarted when API secret changes ENTMQST-5603 UseKRaft feature gate promotion to beta ENTMQST-5583 Unidirectional Topic Operator seems to cause disruption when upgrading from BTO ENTMQST-5582 Cruise control topics should use proper configuration to make smooth rolling updates possible and ensure availability ENTMQST-5581 Unidirectional Topic Operator needs to use log levels in a better way ENTMQST-5546 KafkaRoller is strugling to transition controller-only nodes to mixed nodes ENTMQST-5540 Fix handling of connector state for MirrorMaker 2 connectors ENTMQST-5511 KRaft node rolling, liveness and readiness ENTMQST-5504 Add support for Kafka and Strimzi upgrades when KRaft is enabled ENTMQST-5492 Fix handling of advertised listeners for controller nodes ENTMQST-5387 Promote the StableConnectIdentities feature gate to GA ENTMQST-5383 Support for Tiered Storage with custom "bring-your-own" plugins ENTMQST-5360 Additional tasks for Unidirectional Topic Operator (2.7.0 Edition) ENTMQST-5292 Deal with issues around Cluster ID validation when recovering from existing PVs / PVCs ENTMQST-4194 Topic Operator allows user to set forbidden settings ENTMQST-4164 Persistent error on periodic reconciliation of internal topics ENTMQST-4087 Topic Operator fails when doing bulk topic deletion ENTMQST-3970 The internal Kafka Connect topics are recreated with invalid configuration ENTMQST-3994 ZooKeeper to KRaft migration ENTMQST-3974 Topic Operator fixes ENTMQST-3886 The state store, topic-store, may have migrated to another instance Table 8.2. Fixed common vulnerabilities and exposures (CVEs) Issue Number Description ENTMQST-5886 CVE-2023-43642 flaw was found in SnappyInputStream in snappy-java ENTMQST-5885 CVE-2023-52428 Nimbus JOSE+JWT before 9.37.2 ENTMQST-5884 CVE-2022-4899 vulnerability was found in zstd v1.4.10 ENTMQST-5883 CVE-2021-24032 flaw was found in zstd ENTMQST-5882 CVE-2024-23944 Apache ZooKeeper: Information disclosure in persistent watcher handling ENTMQST-5881 CVE-2021-3520 a flaw in lz4 ENTMQST-5835 CVE-2024-29025 netty-codec-http: Allocation of Resources Without Limits or Throttling ENTMQST-5646 CVE-2024-1023 vert.x: io.vertx/vertx-core: memory leak due to the use of Netty FastThreadLocal data structures in Vertx ENTMQST-5667 CVE-2024-1300 vertx-core: io.vertx:vertx-core: memory leak when a TCP server is configured with TLS and SNI support
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/release_notes_for_streams_for_apache_kafka_2.7_on_openshift/fixed-issues-str
3.10. The SHOW Statement
3.10. The SHOW Statement The SHOW statement can be used to see a variety of information. The SHOW statement is not yet a language feature of JBoss Data Virtualization and is handled only in the JDBC client. SHOW PLAN SHOW PLAN returns a resultset with a CLOB column PLAN_TEXT, an xml column PLAN_XML, and a CLOB column DEBUG_LOG with a row containing the values from the previously executed query. If SHOWPLAN is OFF or no plan is available, no rows are returned. If SHOWPLAN is not set to DEBUG, then DEBUG_LOG will return a null value. SHOW ANNOTATIONS SHOW ANNOTATIONS returns a resultset with string columns CATEGORY, PRIORITY, ANNOTATION, RESOLUTION and a row for each annotation on the previously executed query. If SHOWPLAN is OFF or no plan is available, no rows are returned. SHOW <property> SHOW <property> is the inverse of SET and shows the property value for the property supplied. It returns a resultset with a single string column with a name matching the property key. SHOW ALL SHOW ALL returns a resultset with a NAME string column and a VALUE string column with a row entry for every property value.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/the_show_statement1
Chapter 6. Joining RHEL systems to an Active Directory by using RHEL system roles
Chapter 6. Joining RHEL systems to an Active Directory by using RHEL system roles If your organization uses Microsoft Active Directory (AD) to centrally manage users, groups, and other resources, you can join your Red Hat Enterprise Linux (RHEL) host to this AD. For example, AD users can then log into RHEL and you can make services on the RHEL host available for authenticated AD users. By using the ad_integration RHEL system role, you can automate the integration of a Red Hat Enterprise Linux system into an Active Directory (AD) domain. Note The ad_integration role is for deployments using direct AD integration without an Identity Management (IdM) environment. For IdM environments, use the ansible-freeipa roles. 6.1. Joining RHEL to an Active Directory domain by using the ad_integration RHEL system role You can use the ad_integration RHEL system role to automate the process of joining RHEL to an Active Directory (AD) domain. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed node uses a DNS server that can resolve AD DNS entries. Credentials of an AD account which has permissions to join computers to the domain. Ensure that the required ports are open: Ports required for direct integration of RHEL systems into AD using SSSD Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: usr: administrator pwd: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Active Directory integration hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Join an Active Directory ansible.builtin.include_role: name: rhel-system-roles.ad_integration vars: ad_integration_user: "{{ usr }}" ad_integration_password: "{{ pwd }}" ad_integration_realm: "ad.example.com" ad_integration_allow_rc4_crypto: false ad_integration_timesync_source: "time_server.ad.example.com" The settings specified in the example playbook include the following: ad_integration_allow_rc4_crypto: <true|false> Configures whether the role activates the AD-SUPPORT crypto policy on the managed node. By default, RHEL does not support the weak RC4 encryption but, if Kerberos in your AD still requires RC4, you can enable this encryption type by setting ad_integration_allow_rc4_crypto: true . Omit this variable or set it to false if Kerberos uses AES encryption. ad_integration_timesync_source: <time_server> Specifies the NTP server to use for time synchronization. Kerberos requires a synchronized time among AD domain controllers and domain members to prevent replay attacks. If you omit this variable, the ad_integration role does not utilize the timesync RHEL system role to configure time synchronization on the managed node. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook: Verification Check if AD users, such as administrator , are available locally on the managed node: Additional resources /usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file /usr/share/doc/rhel-system-roles/ad_integration/ directory Ansible vault
[ "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "usr: administrator pwd: <password>", "--- - name: Active Directory integration hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Join an Active Directory ansible.builtin.include_role: name: rhel-system-roles.ad_integration vars: ad_integration_user: \"{{ usr }}\" ad_integration_password: \"{{ pwd }}\" ad_integration_realm: \"ad.example.com\" ad_integration_allow_rc4_crypto: false ad_integration_timesync_source: \"time_server.ad.example.com\"", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'getent passwd [email protected]' [email protected]:*:1450400500:1450400513:Administrator:/home/[email protected]:/bin/bash" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automating_system_administration_by_using_rhel_system_roles/integrating-rhel-systems-into-ad-directly-with-ansible-using-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles
Chapter 3. ClusterRole [rbac.authorization.k8s.io/v1]
Chapter 3. ClusterRole [rbac.authorization.k8s.io/v1] Description ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding. Type object 3.1. Specification Property Type Description aggregationRule object AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. rules array Rules holds all the PolicyRules for this ClusterRole rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. 3.1.1. .aggregationRule Description AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole Type object Property Type Description clusterRoleSelectors array (LabelSelector) ClusterRoleSelectors holds a list of selectors which will be used to find ClusterRoles and create the rules. If any of the selectors match, then the ClusterRole's permissions will be added 3.1.2. .rules Description Rules holds all the PolicyRules for this ClusterRole Type array 3.1.3. .rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups. nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. '*' represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs. 3.2. API endpoints The following API endpoints are available: /apis/rbac.authorization.k8s.io/v1/clusterroles DELETE : delete collection of ClusterRole GET : list or watch objects of kind ClusterRole POST : create a ClusterRole /apis/rbac.authorization.k8s.io/v1/watch/clusterroles GET : watch individual changes to a list of ClusterRole. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/rbac.authorization.k8s.io/v1/clusterroles/{name} DELETE : delete a ClusterRole GET : read the specified ClusterRole PATCH : partially update the specified ClusterRole PUT : replace the specified ClusterRole /apis/rbac.authorization.k8s.io/v1/watch/clusterroles/{name} GET : watch changes to an object of kind ClusterRole. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /apis/rbac.authorization.k8s.io/v1/clusterroles HTTP method DELETE Description delete collection of ClusterRole Table 3.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ClusterRole Table 3.3. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterRole Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body ClusterRole schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 202 - Accepted ClusterRole schema 401 - Unauthorized Empty 3.2.2. /apis/rbac.authorization.k8s.io/v1/watch/clusterroles HTTP method GET Description watch individual changes to a list of ClusterRole. deprecated: use the 'watch' parameter with a list operation instead. Table 3.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/rbac.authorization.k8s.io/v1/clusterroles/{name} Table 3.8. Global path parameters Parameter Type Description name string name of the ClusterRole HTTP method DELETE Description delete a ClusterRole Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed Table 3.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterRole Table 3.11. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterRole Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterRole Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.15. Body parameters Parameter Type Description body ClusterRole schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty 3.2.4. /apis/rbac.authorization.k8s.io/v1/watch/clusterroles/{name} Table 3.17. 
Global path parameters Parameter Type Description name string name of the ClusterRole HTTP method GET Description watch changes to an object of kind ClusterRole. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
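As a rough illustration of this schema, the sketch below uses the Kubernetes Python client, assumed to be installed and authenticated through a kubeconfig, to create a ClusterRole containing a single PolicyRule; the role name example-pod-reader and the chosen verbs are illustrative assumptions. The call corresponds to the POST operation on /apis/rbac.authorization.k8s.io/v1/clusterroles listed above.

from kubernetes import client, config


def create_pod_reader_cluster_role():
    # Load credentials from the local kubeconfig; inside a pod,
    # config.load_incluster_config() would be used instead.
    config.load_kube_config()

    # One PolicyRule: read-only access to pods in the core API group ("").
    rule = client.V1PolicyRule(
        api_groups=[""],
        resources=["pods"],
        verbs=["get", "list", "watch"],
    )
    cluster_role = client.V1ClusterRole(
        api_version="rbac.authorization.k8s.io/v1",
        kind="ClusterRole",
        metadata=client.V1ObjectMeta(name="example-pod-reader"),
        rules=[rule],
    )
    # Issues POST /apis/rbac.authorization.k8s.io/v1/clusterroles.
    return client.RbacAuthorizationV1Api().create_cluster_role(body=cluster_role)


if __name__ == "__main__":
    role = create_pod_reader_cluster_role()
    print(role.metadata.name)

Instead of explicit rules, an aggregationRule could be supplied (for example through the client's V1AggregationRule type with clusterRoleSelectors), in which case the permissions of the matching ClusterRoles are combined into this role and its rules field is managed for you.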
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/rbac_apis/clusterrole-rbac-authorization-k8s-io-v1
Chapter 31. Additional resources
Chapter 31. Additional resources Managing and monitoring KIE Server Packaging and deploying a Red Hat Process Automation Manager project
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/additional_resources_2
Chapter 2. Configuring the monitoring stack
Chapter 2. Configuring the monitoring stack The OpenShift Container Platform 4 installation program provides only a low number of configuration options before installation. Configuring most OpenShift Container Platform framework components, including the cluster monitoring stack, happens post-installation. This section explains what configuration is supported, shows how to configure the monitoring stack, and demonstrates several common configuration scenarios. 2.1. Prerequisites The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources. 2.2. Maintenance and support for monitoring The supported way of configuring OpenShift Container Platform Monitoring is by configuring it using the options described in this document. Do not use other configurations, as they are unsupported. Configuration paradigms might change across Prometheus releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this section, your changes will disappear because the cluster-monitoring-operator reconciles any differences. The Operator resets everything to the defined state by default and by design. 2.2.1. Support considerations for monitoring The following modifications are explicitly not supported: Creating additional ServiceMonitor , PodMonitor , and PrometheusRule objects in the openshift-* and kube-* projects. Modifying any resources or objects deployed in the openshift-monitoring or openshift-user-workload-monitoring projects. The resources created by the OpenShift Container Platform monitoring stack are not meant to be used by any other resources, as there are no guarantees about their backward compatibility. Note The Alertmanager configuration is deployed as a secret resource in the openshift-monitoring project. To configure additional routes for Alertmanager, you need to decode, modify, and then encode that secret. This procedure is a supported exception to the preceding statement. Modifying resources of the stack. The OpenShift Container Platform monitoring stack ensures its resources are always in the state it expects them to be. If they are modified, the stack will reset them. Deploying user-defined workloads to openshift-* , and kube-* projects. These projects are reserved for Red Hat provided components and they should not be used for user-defined workloads. Modifying the monitoring stack Grafana instance. Installing custom Prometheus instances on OpenShift Container Platform. A custom instance is a Prometheus custom resource (CR) managed by the Prometheus Operator. Enabling symptom based monitoring by using the Probe custom resource definition (CRD) in Prometheus Operator. Modifying Alertmanager configurations by using the AlertmanagerConfig CRD in Prometheus Operator. Note Backward compatibility for metrics, recording rules, or alerting rules is not guaranteed. 2.2.2. Support policy for monitoring Operators Monitoring Operators ensure that OpenShift Container Platform monitoring resources function as designed and tested. If Cluster Version Operator (CVO) control of an Operator is overridden, the Operator does not respond to configuration changes, reconcile the intended state of cluster objects, or receive updates. 
While overriding CVO control for an Operator can be helpful during debugging, this is unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. Overriding the Cluster Version Operator The spec.overrides parameter can be added to the configuration for the CVO to allow administrators to provide a list of overrides to the behavior of the CVO for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state and prevents the monitoring stack from being reconciled to its intended state. This impacts the reliability features built into Operators and prevents updates from being received. Reported issues must be reproduced after removing any overrides for support to proceed. 2.3. Preparing to configure the monitoring stack You can configure the monitoring stack by creating and updating monitoring config maps. 2.3.1. Creating a cluster monitoring config map To configure core OpenShift Container Platform monitoring components, you must create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project. Note When you save your changes to the cluster-monitoring-config ConfigMap object, some or all of the pods in the openshift-monitoring project might be redeployed. It can sometimes take a while for these components to redeploy. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure Check whether the cluster-monitoring-config ConfigMap object exists: USD oc -n openshift-monitoring get configmap cluster-monitoring-config If the ConfigMap object does not exist: Create the following YAML manifest. In this example the file is called cluster-monitoring-config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | Apply the configuration to create the ConfigMap object: USD oc apply -f cluster-monitoring-config.yaml 2.3.2. Creating a user-defined workload monitoring config map To configure the components that monitor user-defined projects, you must create the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. Note When you save your changes to the user-workload-monitoring-config ConfigMap object, some or all of the pods in the openshift-user-workload-monitoring project might be redeployed. It can sometimes take a while for these components to redeploy. You can create and configure the config map before you first enable monitoring for user-defined projects, to prevent having to redeploy the pods often. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure Check whether the user-workload-monitoring-config ConfigMap object exists: USD oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config If the user-workload-monitoring-config ConfigMap object does not exist: Create the following YAML manifest. 
In this example the file is called user-workload-monitoring-config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | Apply the configuration to create the ConfigMap object: USD oc apply -f user-workload-monitoring-config.yaml Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Additional resources Enabling monitoring for user-defined projects 2.4. Configuring the monitoring stack In OpenShift Container Platform 4.10, you can configure the monitoring stack using the cluster-monitoring-config or user-workload-monitoring-config ConfigMap objects. Config maps configure the Cluster Monitoring Operator (CMO), which in turn configures the components of the stack. Prerequisites If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the ConfigMap object. To configure core OpenShift Container Platform monitoring components : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add your configuration under data/config.yaml as a key-value pair <component_name>: <component_configuration> : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: <configuration_for_the_component> Substitute <component> and <configuration_for_the_component> accordingly. The following example ConfigMap object configures a persistent volume claim (PVC) for Prometheus. This relates to the Prometheus instance that monitors core OpenShift Container Platform components only: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: 1 volumeClaimTemplate: spec: storageClassName: fast volumeMode: Filesystem resources: requests: storage: 40Gi 1 Defines the Prometheus component and the subsequent lines define its configuration. To configure components that monitor user-defined projects : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add your configuration under data/config.yaml as a key-value pair <component_name>: <component_configuration> : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: <configuration_for_the_component> Substitute <component> and <configuration_for_the_component> accordingly. The following example ConfigMap object configures a data retention period and minimum container resource requests for Prometheus. 
This relates to the Prometheus instance that monitors user-defined projects only: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: 1 retention: 24h 2 resources: requests: cpu: 200m 3 memory: 2Gi 4 1 Defines the Prometheus component and the subsequent lines define its configuration. 2 Configures a twenty-four hour data retention period for the Prometheus instance that monitors user-defined projects. 3 Defines a minimum resource request of 200 millicores for the Prometheus container. 4 Defines a minimum pod resource request of 2 GiB of memory for the Prometheus container. Note The Prometheus config map component is called prometheusK8s in the cluster-monitoring-config ConfigMap object and prometheus in the user-workload-monitoring-config ConfigMap object. Save the file to apply the changes to the ConfigMap object. The pods affected by the new configuration are restarted automatically. Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Warning When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps Enabling monitoring for user-defined projects 2.5. Configurable monitoring components This table shows the monitoring components you can configure and the keys used to specify the components in the cluster-monitoring-config and user-workload-monitoring-config ConfigMap objects: Table 2.1. Configurable monitoring components Component cluster-monitoring-config config map key user-workload-monitoring-config config map key Prometheus Operator prometheusOperator prometheusOperator Prometheus prometheusK8s prometheus Alertmanager alertmanagerMain kube-state-metrics kubeStateMetrics openshift-state-metrics openshiftStateMetrics Grafana grafana Telemeter Client telemeterClient Prometheus Adapter k8sPrometheusAdapter Thanos Querier thanosQuerier Thanos Ruler thanosRuler Note The Prometheus key is called prometheusK8s in the cluster-monitoring-config ConfigMap object and prometheus in the user-workload-monitoring-config ConfigMap object. 2.6. Using node selectors to move monitoring components By using the nodeSelector constraint with labeled nodes, you can move any of the monitoring stack components to specific nodes. By doing so, you can control the placement and distribution of the monitoring components across a cluster. By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and segregate workloads based on specific requirements or policies. 2.6.1. How node selectors work with other constraints If you move monitoring components by using node selector constraints, be aware that other constraints to control pod scheduling might exist for a cluster: Topology spread constraints might be in place to control pod placement. Hard anti-affinity rules are in place for Prometheus, Thanos Querier, Alertmanager, and other monitoring components to ensure that multiple pods for these components are always spread across different nodes and are therefore always highly available. 
When scheduling pods onto nodes, the pod scheduler tries to satisfy all existing constraints when determining pod placement. That is, all constraints compound when the pod scheduler determines which pods will be placed on which nodes. Therefore, if you configure a node selector constraint but existing constraints cannot all be satisfied, the pod scheduler cannot match all constraints and will not schedule a pod for placement onto a node. To maintain resilience and high availability for monitoring components, ensure that enough nodes are available and match all constraints when you configure a node selector constraint to move a component. Additional resources Understanding how to update labels on nodes Placing pods on specific nodes using node selectors Placing pods relative to other pods using affinity and anti-affinity rules Controlling pod placement by using pod topology spread constraints Kubernetes documentation about node selectors 2.6.2. Moving monitoring components to different nodes To specify the nodes in your cluster on which monitoring stack components will run, configure the nodeSelector constraint in the component's ConfigMap object to match labels assigned to the nodes. Note You cannot add a node selector constraint directly to an existing scheduled pod. Prerequisites If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: USD oc label nodes <node-name> <node-label> Edit the ConfigMap object: To move a component that monitors core OpenShift Container Platform projects : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Specify the node labels for the nodeSelector constraint for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 nodeSelector: <node-label-1> 2 <node-label-2> 3 <...> 1 Substitute <component> with the appropriate monitoring stack component name. 2 Substitute <node-label-1> with the label you added to the node. 3 Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels. Note If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations. 
To move a component that monitors user-defined projects : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Specify the node labels for the nodeSelector constraint for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 nodeSelector: <node-label-1> 2 <node-label-2> 3 <...> 1 Substitute <component> with the appropriate monitoring stack component name. 2 Substitute <node-label-1> with the label you added to the node. 3 Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels. Note If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations. Save the file to apply the changes. The components specified in the new configuration are moved to the new nodes automatically. Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Warning When you save changes to a monitoring config map, the pods and other resources in the project might be redeployed. The running monitoring processes in that project might also restart. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps Enabling monitoring for user-defined projects 2.7. Assigning tolerations to monitoring components You can assign tolerations to any of the monitoring stack components to enable moving them to tainted nodes. Prerequisites If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the ConfigMap object: To assign tolerations to a component that monitors core OpenShift Container Platform projects : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Specify tolerations for the component: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification> Substitute <component> and <toleration_specification> accordingly. For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1 . This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. 
The following example configures the alertmanagerMain component to tolerate the example taint: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" To assign tolerations to a component that monitors user-defined projects : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Specify tolerations for the component: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification> Substitute <component> and <toleration_specification> accordingly. For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1 . This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the thanosRuler component to tolerate the example taint: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" Save the file to apply the changes. The new component placement configuration is applied automatically. Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Warning When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps Enabling monitoring for user-defined projects See the OpenShift Container Platform documentation on taints and tolerations See the Kubernetes documentation on taints and tolerations 2.8. Configuring a dedicated service monitor You can configure OpenShift Container Platform core platform monitoring to use dedicated service monitors to collect metrics for the resource metrics pipeline. When enabled, a dedicated service monitor exposes two additional metrics from the kubelet endpoint and sets the value of the honorTimestamps field to true. By enabling a dedicated service monitor, you can improve the consistency of Prometheus Adapter-based CPU usage measurements used by, for example, the oc adm top pod command or the Horizontal Pod Autoscaler. 2.8.1. Enabling a dedicated service monitor You can configure core platform monitoring to use a dedicated service monitor by configuring the dedicatedServiceMonitors key in the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. 
Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add an enabled: true key-value pair as shown in the following sample: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | k8sPrometheusAdapter: dedicatedServiceMonitors: enabled: true 1 1 Set the value of the enabled field to true to deploy a dedicated service monitor that exposes the kubelet /metrics/resource endpoint. Save the file to apply the changes automatically. Warning When you save changes to a cluster-monitoring-config config map, the pods and other resources in the openshift-monitoring project might be redeployed. The running monitoring processes in that project might also restart. 2.9. Configuring persistent storage Running cluster monitoring with persistent storage means that your metrics are stored to a persistent volume (PV) and can survive a pod being restarted or recreated. This is ideal if you require your metrics or alerting data to be guarded from data loss. For production environments, it is highly recommended to configure persistent storage. Because of the high IO demands, it is advantageous to use local storage. Note See Recommended configurable storage technology . 2.9.1. Persistent storage prerequisites Dedicate sufficient local persistent storage to ensure that the disk does not become full. How much storage you need depends on the number of pods. For information on system requirements for persistent storage, see Prometheus database storage requirements . Verify that you have a persistent volume (PV) ready to be claimed by the persistent volume claim (PVC), one PV for each replica. Because Prometheus and Alertmanager both have two replicas, you need four PVs to support the entire monitoring stack. The PVs are available from the Local Storage Operator, but not if you have enabled dynamically provisioned storage. Use Filesystem as the storage type value for the volumeMode parameter when you configure the persistent volume. Configure local persistent storage. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: Block in the LocalVolume object. Prometheus cannot use raw block volumes. Important Prometheus does not support file systems that are not POSIX compliant. For example, some NFS file system implementations are not POSIX compliant. If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant. 2.9.2. Configuring a local persistent volume claim For monitoring components to use a persistent volume (PV), you must configure a persistent volume claim (PVC). Prerequisites If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). 
Procedure Edit the ConfigMap object: To configure a PVC for a component that monitors core OpenShift Container Platform projects : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add your PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: volumeClaimTemplate: spec: storageClassName: <storage_class> resources: requests: storage: <amount_of_storage> See the Kubernetes documentation on PersistentVolumeClaims for information on how to specify volumeClaimTemplate . The following example configures a PVC that claims local persistent storage for the Prometheus instance that monitors core OpenShift Container Platform components: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 40Gi In the above example, the storage class created by the Local Storage Operator is called local-storage . The following example configures a PVC that claims local persistent storage for Alertmanager: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 10Gi To configure a PVC for a component that monitors user-defined projects : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add your PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: volumeClaimTemplate: spec: storageClassName: <storage_class> resources: requests: storage: <amount_of_storage> See the Kubernetes documentation on PersistentVolumeClaims for information on how to specify volumeClaimTemplate . The following example configures a PVC that claims local persistent storage for the Prometheus instance that monitors user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 40Gi In the above example, the storage class created by the Local Storage Operator is called local-storage . The following example configures a PVC that claims local persistent storage for Thanos Ruler: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 10Gi Note Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates. Save the file to apply the changes. The pods affected by the new configuration are restarted automatically and the new storage configuration is applied. 
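After the pods restart, you can verify that the claims were created and bound. A quick check, assuming the namespaces used in the examples above:

$ oc -n openshift-monitoring get pvc
$ oc -n openshift-user-workload-monitoring get pvc

Each PVC should report a STATUS of Bound once a matching persistent volume has been claimed.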
Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Warning When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted. 2.9.3. Resizing a persistent storage volume OpenShift Container Platform does not support resizing an existing persistent storage volume used by StatefulSet resources, even if the underlying StorageClass resource used supports persistent volume sizing. Therefore, even if you update the storage field for an existing persistent volume claim (PVC) with a larger size, this setting will not be propagated to the associated persistent volume (PV). However, resizing a PV is still possible by using a manual process. If you want to resize a PV for a monitoring component such as Prometheus, Thanos Ruler, or Alertmanager, you can update the appropriate config map in which the component is configured. Then, patch the PVC, and delete and orphan the pods. Orphaning the pods recreates the StatefulSet resource immediately and automatically updates the size of the volumes mounted in the pods with the new PVC settings. No service disruption occurs during this process. Prerequisites You have installed the OpenShift CLI ( oc ). If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have configured at least one PVC for core OpenShift Container Platform monitoring components. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. You have configured at least one PVC for components that monitor user-defined projects. Procedure Edit the ConfigMap object: To resize a PVC for a component that monitors core OpenShift Container Platform projects : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a new storage size for the PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3 1 Specify the core monitoring component. 2 Specify the storage class. 3 Specify the new size for the storage volume. 
The following example configures a PVC that sets the local persistent storage to 100 gigabytes for the Prometheus instance that monitors core OpenShift Container Platform components: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 100Gi The following example configures a PVC that sets the local persistent storage for Alertmanager to 40 gigabytes: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 40Gi To resize a PVC for a component that monitors user-defined projects : Note You can resize the volumes for the Thanos Ruler and Prometheus instances that monitor user-defined projects. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Update the PVC configuration for the monitoring component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3 1 Specify the core monitoring component. 2 Specify the storage class. 3 Specify the new size for the storage volume. The following example configures the PVC size to 100 gigabytes for the Prometheus instance that monitors user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 100Gi The following example sets the PVC size to 20 gigabytes for Thanos Ruler: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 20Gi Note Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates. Save the file to apply the changes. The pods affected by the new configuration restart automatically. Warning When you save changes to a monitoring config map, the pods and other resources in the related project might be redeployed. The monitoring processes running in that project might also be restarted. Manually patch every PVC with the updated storage request. The following example resizes the storage size for the Prometheus component in the openshift-monitoring namespace to 100Gi: USD for p in USD(oc -n openshift-monitoring get pvc -l app.kubernetes.io/name=prometheus -o jsonpath='{range .items[*]}{.metadata.name} {end}'); do \ oc -n openshift-monitoring patch pvc/USD{p} --patch '{"spec": {"resources": {"requests": {"storage":"100Gi"}}}}'; \ done Delete the underlying StatefulSet with the --cascade=orphan parameter: USD oc delete statefulset -l app.kubernetes.io/name=prometheus --cascade=orphan 2.9.4. 
Modifying the retention time for Prometheus metrics data By default, the OpenShift Container Platform monitoring stack configures the retention time for Prometheus data to be 15 days. You can modify the retention time to change how soon the data is deleted. Prerequisites If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the ConfigMap object: To modify the retention time for the Prometheus instance that monitors core OpenShift Container Platform projects : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add your retention time configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time_specification> Substitute <time_specification> with a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). The following example sets the retention time to 24 hours for the Prometheus instance that monitors core OpenShift Container Platform components: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 24h To modify the retention time for the Prometheus instance that monitors user-defined projects : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add your retention time configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification> Substitute <time_specification> with a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). The following example sets the retention time to 24 hours for the Prometheus instance that monitors user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h Save the file to apply the changes. The pods affected by the new configuration are restarted automatically. Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Warning When changes are saved to a monitoring config map, the pods and other resources in the related project might be redeployed. The running monitoring processes in that project might also be restarted. 2.9.5. Modifying the retention time for Thanos Ruler metrics data By default, for user-defined projects, Thanos Ruler automatically retains metrics data for 24 hours. 
You can modify the retention time to change how long this data is retained by specifying a time value in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. Prerequisites You have installed the OpenShift CLI ( oc ). A cluster administrator has enabled monitoring for user-defined projects. You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. Warning Saving changes to a monitoring config map might restart monitoring processes and redeploy the pods and other resources in the related project. The running monitoring processes in that project might also restart. Procedure Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the retention time configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1 1 Specify the retention time in the following format: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s . The default is 24h . The following example sets the retention time to 10 days for Thanos Ruler data: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d Save the file to apply the changes. The pods affected by the new configuration automatically restart. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps. Enabling monitoring for user-defined projects Understanding persistent storage Optimizing storage 2.10. Configuring remote write storage You can configure remote write storage to enable Prometheus to send ingested metrics to remote systems for long-term storage. Doing so has no impact on how or for how long Prometheus stores metrics. Prerequisites If you are configuring core OpenShift Container Platform monitoring components: You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects: You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. You have set up authentication credentials for the remote write endpoint. Caution To reduce security risks, avoid sending metrics to an endpoint via unencrypted HTTP or without using authentication. 
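The basic authentication sample later in this procedure references a Secret object named remoteWriteAuth with user and password keys. The following is a minimal sketch of creating such a secret, assuming those names and placeholder credentials; for the prometheusK8s component the secret is read from the openshift-monitoring namespace:

$ oc -n openshift-monitoring create secret generic remoteWriteAuth \
    --from-literal=user=<remote_write_username> \
    --from-literal=password=<remote_write_password>

If you configure remote write for the Prometheus instance that monitors user-defined projects, create the equivalent secret in the openshift-user-workload-monitoring namespace instead.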
Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a remoteWrite: section under data/config.yaml/prometheusK8s . Add an endpoint URL and authentication credentials in this section: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write.endpoint" <endpoint_authentication_credentials> For endpoint_authentication_credentials substitute the credentials for the endpoint. Currently supported authentication methods are basic authentication ( basicAuth ) and client TLS ( tlsConfig ) authentication. The following example configures basic authentication: basicAuth: username: <usernameSecret> password: <passwordSecret> Substitute <usernameSecret> and <passwordSecret> accordingly. The following sample shows basic authentication configured with remoteWriteAuth for the name values and user and password for the key values. These values contain the endpoint authentication credentials: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write.endpoint" basicAuth: username: name: remoteWriteAuth key: user password: name: remoteWriteAuth key: password The following example configures client TLS authentication: tlsConfig: ca: <caSecret> cert: <certSecret> keySecret: <keySecret> Substitute <caSecret> , <certSecret> , and <keySecret> accordingly. The following sample shows a TLS authentication configuration using selfsigned-mtls-bundle for the name values and ca.crt for the ca key value, client.crt for the cert key value, and client.key for the keySecret key value: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write.endpoint" tlsConfig: ca: secret: name: selfsigned-mtls-bundle key: ca.crt cert: secret: name: selfsigned-mtls-bundle key: client.crt keySecret: name: selfsigned-mtls-bundle key: client.key Add write relabel configuration values after the authentication credentials: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write.endpoint" <endpoint_authentication_credentials> <write_relabel_configs> For <write_relabel_configs> substitute a list of write relabel configurations for metrics that you want to send to the remote endpoint. The following sample shows how to forward a single metric called my_metric : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: "https://remote-write.endpoint" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep See the Prometheus relabel_config documentation for information about write relabel configuration options. 
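Write relabel configurations can also exclude metrics instead of allow-listing them. The following sketch is an illustrative variation on the sample above; the metric name prefix is hypothetical, and the fragment slots into the same writeRelabelConfigs position:

writeRelabelConfigs:
- sourceLabels: [__name__]
  regex: 'my_noisy_metric.*'
  action: drop

With action: drop, any series whose metric name matches the regular expression is withheld from the remote endpoint, while all other metrics are forwarded unchanged.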
If required, configure remote write for the Prometheus instance that monitors user-defined projects by changing the name and namespace metadata values as follows: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write.endpoint" <endpoint_authentication_credentials> <write_relabel_configs> Note The Prometheus config map component is called prometheusK8s in the cluster-monitoring-config ConfigMap object and prometheus in the user-workload-monitoring-config ConfigMap object. Save the file to apply the changes to the ConfigMap object. The pods affected by the new configuration restart automatically. Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Warning Saving changes to a monitoring ConfigMap object might redeploy the pods and other resources in the related project. Saving changes might also restart the running monitoring processes in that project. Additional resources See Setting up remote write compatible endpoints for steps to create a remote write compatible endpoint (such as Thanos). See Tuning remote write settings for information about how to optimize remote write settings for different use cases. For information about additional optional fields, please refer to the API documentation. 2.11. Controlling the impact of unbound metrics attributes in user-defined projects Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values. Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space. Cluster administrators can use the following measures to control the impact of unbound metrics attributes in user-defined projects: Limit the number of samples that can be accepted per target scrape in user-defined projects Create alerts that fire when a scrape sample threshold is reached or when the target cannot be scraped Note Limiting scrape samples can help prevent the issues caused by adding many unbound attributes to labels. Developers can also prevent the underlying cause by limiting the number of unbound attributes that they define for metrics. Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. 2.11.1. Setting a scrape sample limit for user-defined projects You can limit the number of samples that can be accepted per target scrape in user-defined projects. Warning If you set a sample limit, no further sample data is ingested for that target scrape after the limit is reached. Prerequisites You have access to the cluster as a user with the cluster-admin role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have created the user-workload-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). 
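To choose a reasonable value for enforcedSampleLimit, you can first inspect how many samples your current targets return per scrape. For example, the following PromQL query, run from the OpenShift web console or any Prometheus query interface, lists the ten largest scrapes; the namespace selector is illustrative:

topk(10, scrape_samples_scraped{namespace="ns1"})

Set the limit comfortably above the observed values so that normal scrapes are not truncated.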
Procedure Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the enforcedSampleLimit configuration to data/config.yaml to limit the number of samples that can be accepted per target scrape in user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1 1 A value is required if this parameter is specified. This enforcedSampleLimit example limits the number of samples that can be accepted per target scrape in user-defined projects to 50,000. Save the file to apply the changes. The limit is applied automatically. Note Configurations applied to the user-workload-monitoring-config ConfigMap object are not activated unless a cluster administrator has enabled monitoring for user-defined projects. Warning When changes are saved to the user-workload-monitoring-config ConfigMap object, the pods and other resources in the openshift-user-workload-monitoring project might be redeployed. The running monitoring processes in that project might also be restarted. 2.11.2. Creating scrape sample alerts You can create alerts that notify you when: The target cannot be scraped or is not available for the specified for duration A scrape sample threshold is reached or is exceeded for the specified for duration Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have enabled monitoring for user-defined projects. You have created the user-workload-monitoring-config ConfigMap object. You have limited the number of samples that can be accepted per target scrape in user-defined projects, by using enforcedSampleLimit . You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called monitoring-stack-alerts.yaml : apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf "%.4g" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 8 expr: scrape_samples_scraped/50000 > 0.8 9 for: 10m 10 labels: severity: warning 11 1 Defines the name of the alerting rule. 2 Specifies the user-defined project where the alerting rule will be deployed. 3 The TargetDown alert will fire if the target cannot be scraped or is not available for the for duration. 4 The message that will be output when the TargetDown alert fires. 5 The conditions for the TargetDown alert must be true for this duration before the alert is fired. 6 Defines the severity for the TargetDown alert. 
7 The ApproachingEnforcedSamplesLimit alert will fire when the defined scrape sample threshold is reached or exceeded for the specified for duration. 8 The message that will be output when the ApproachingEnforcedSamplesLimit alert fires. 9 The threshold for the ApproachingEnforcedSamplesLimit alert. In this example the alert will fire when the number of samples per target scrape has exceeded 80% of the enforced sample limit of 50000 . The for duration must also have passed before the alert will fire. The <number> in the expression scrape_samples_scraped/<number> > <threshold> must match the enforcedSampleLimit value defined in the user-workload-monitoring-config ConfigMap object. 10 The conditions for the ApproachingEnforcedSamplesLimit alert must be true for this duration before the alert is fired. 11 Defines the severity for the ApproachingEnforcedSamplesLimit alert. Apply the configuration to the user-defined project: USD oc apply -f monitoring-stack-alerts.yaml Additional resources Creating a user-defined workload monitoring config map Enabling monitoring for user-defined projects See Determining why Prometheus is consuming a lot of disk space for steps to query which metrics have the highest number of scrape samples.
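To confirm that the alerting rule from the procedure above was created, you can list the resource in the target project. A quick check, assuming the example names used above:

$ oc -n ns1 get prometheusrule monitoring-stack-alerts

The command should list the PrometheusRule object; the alerts themselves become visible in the Alerting UI after Prometheus loads the rule.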
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "oc -n openshift-monitoring get configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |", "oc apply -f cluster-monitoring-config.yaml", "oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: |", "oc apply -f user-workload-monitoring-config.yaml", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: <configuration_for_the_component>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: 1 volumeClaimTemplate: spec: storageClassName: fast volumeMode: Filesystem resources: requests: storage: 40Gi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: <configuration_for_the_component>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: 1 retention: 24h 2 resources: requests: cpu: 200m 3 memory: 2Gi 4", "oc label nodes <node-name> <node-label>", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 nodeSelector: <node-label-1> 2 <node-label-2> 3 <...>", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 nodeSelector: <node-label-1> 2 <node-label-2> 3 <...>", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | k8sPrometheusAdapter: 
dedicatedServiceMonitors: enabled: true 1", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: volumeClaimTemplate: spec: storageClassName: <storage_class> resources: requests: storage: <amount_of_storage>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 40Gi", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 10Gi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: volumeClaimTemplate: spec: storageClassName: <storage_class> resources: requests: storage: <amount_of_storage>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 40Gi", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 10Gi", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 100Gi", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 40Gi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 100Gi", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 20Gi", "for p in USD(oc -n openshift-monitoring get pvc -l app.kubernetes.io/name=prometheus -o jsonpath='{range .items[*]}{.metadata.name} 
{end}'); do oc -n openshift-monitoring patch pvc/USD{p} --patch '{\"spec\": {\"resources\": {\"requests\": {\"storage\":\"100Gi\"}}}}'; done", "oc delete statefulset -l app.kubernetes.io/name=prometheus --cascade=orphan", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 24h", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" <endpoint_authentication_credentials>", "basicAuth: username: <usernameSecret> password: <passwordSecret>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" basicAuth: username: name: remoteWriteAuth key: user password: name: remoteWriteAuth key: password", "tlsConfig: ca: <caSecret> cert: <certSecret> keySecret: <keySecret>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" tlsConfig: ca: secret: name: selfsigned-mtls-bundle key: ca.crt cert: secret: name: selfsigned-mtls-bundle key: client.crt keySecret: name: selfsigned-mtls-bundle key: client.key", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" <endpoint_authentication_credentials> <write_relabel_configs>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write.endpoint\" <endpoint_authentication_credentials> <write_relabel_configs>", "oc -n openshift-user-workload-monitoring edit configmap 
user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf \"%.4g\" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 8 expr: scrape_samples_scraped/50000 > 0.8 9 for: 10m 10 labels: severity: warning 11", "oc apply -f monitoring-stack-alerts.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/monitoring/configuring-the-monitoring-stack
Chapter 6. Configuring the Shared File Systems service (manila) With the Shared File Systems service (manila), you can provision shared file systems that multiple compute instances, bare-metal nodes, or containers can consume. Cloud administrators create share types to prepare the share service and enable end users to create and manage shares. Prerequisites An end user requires at least one share type to use the Shared File Systems service. For back ends where driver_handles_share_servers=False , a cloud administrator configures the requisite networking in advance rather than dynamically in the shared file system back end. For a CephFS through NFS back end, a cloud administrator deploys Red Hat OpenStack Platform (RHOSP) director with isolated networks and environment arguments and a custom network_data file to create an isolated StorageNFS network for NFS exports. After deployment, before the overcloud is used, the administrator creates a corresponding Networking service (neutron) StorageNFS shared provider network that maps to the isolated StorageNFS network of the data center. For a Compute instance to connect to this shared provider network, the user must add an additional neutron port. 6.1. Configuring Shared File Systems service back ends When cloud administrators use Red Hat OpenStack Platform (RHOSP) director to deploy the Shared File Systems service (manila), they can choose one or more supported back ends, such as native CephFS, CephFS-NFS, NetApp, Dell EMC Unity, and others. For more information about native CephFS and CephFS-NFS, see Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . For a complete list of supported back-end appliances and drivers, see the Manila section of the Red Hat Knowledge Article, Component, Plug-In, and Driver Support in Red Hat OpenStack Platform . 6.1.1. Configuring multiple back ends A back end is a storage system or technology that is paired with the Shared File Systems service (manila) driver to export file systems. The Shared File Systems service requires at least one back end to operate. In many cases, one back end is sufficient. However, you can also use multiple back ends in a single Shared File Systems service installation. Important Red Hat OpenStack Platform (RHOSP) does not support multiple instances of the same back end to a Shared File Systems service deployment. For example, you cannot add two Red Hat Ceph Storage clusters as back ends within the same deployment. CephFS native and CephFS-NFS are considered one back end with different protocols. The scheduler for the Shared File Systems service determines the destination back end for share creation requests. A single back end in the Shared File Systems service can expose multiple storage pools. When you configure multiple back ends, the scheduler chooses one storage pool to create a resource from all the pools exposed by all configured back ends. This process is abstracted from the end user. End users see only the capabilities that are exposed by the cloud administrator. 6.1.2. Deploying multiple back ends By default, a standard Shared File Systems service (manila) deployment environment file has a single back end. Use the following example procedure to add multiple back ends to the Shared File Systems service and deploy an environment with a CephFS-NFS and a NetApp back end. Prerequisites At least two back ends. 
If a back end requires a custom container, you must use one from the Red Hat Ecosystem Catalog instead of the standard Shared File Systems service container. For example, if you want to use a Dell EMC Unity storage system back end with Ceph, choose the Dell EMC Unity container from the catalog. Procedure Create a storage customization YAML file. You can use this file to provide any values or overrides that suit your environment: Configure the storage customization YAML file to include any overrides, including enabling multiple back ends: Replace the values in angle brackets <> with the correct values for your YAML file. Specify the back-end templates by using the openstack overcloud deploy command. The example configuration enables the Shared File Systems service with a NetApp back end and a CephFS-NFS back end. Note Execute source ~/stackrc before issuing the openstack overcloud deploy command. Additional resources For more information about the ManilaEnabledShareProtocols parameter, see Section 6.1.4, "Overriding allowed NAS protocols" . For more information about the deployment command, see Director Installation and Usage . 6.1.3. Confirming deployment of multiple back ends Use the manila service-list command to verify that your back ends deployed successfully. If you use a health check on multiple back ends, a ping test returns a response even if one of the back ends is unresponsive, so this is not a reliable way to verify your deployment. Procedure Log in to the undercloud host as the stack user. Source the overcloudrc credentials file: Confirm the list of Shared File Systems service back ends: The status of each successfully deployed back end shows enabled and the state shows up . 6.1.4. Overriding allowed NAS protocols The Shared File Systems service can export shares in one of many network attached storage (NAS) protocols, such as NFS, CIFS, or CEPHFS. By default, the Shared File Systems service enables all of the NAS protocols supported by the back ends in a deployment. As a Red Hat OpenStack Platform (RHOSP) administrator, you can override the ManilaEnabledShareProtocols parameter and list only the protocols that you want to allow in your cloud. For example, if back ends in your deployment support both NFS and CIFS, you can override the default value and enable only one protocol. Procedure Log in to the undercloud host as the stack user. Source the overcloudrc credentials file: Create a storage customization YAML file. This file can be used to provide any values or overrides that suit your environment: Configure the ManilaEnabledShareProtocols parameter with the values that you want: Include the environment file that contains your new content in the openstack overcloud deploy command by using the -e option. Ensure that you include all other environment files that are relevant to your deployment. Note The deployment does not validate the settings. The NAS protocols that you assign must be supported by the back ends in your Shared File Systems service deployment. 6.1.5. Viewing back-end capabilities The scheduler component of the Shared File Systems service (manila) makes intelligent placement decisions based on several factors such as capacity, provisioning configuration, placement hints, and the capabilities that the back-end storage system driver detects and exposes. Procedure Run the following command to view the available capabilities: Related information To influence placement decisions, as an administrator, you can use share types and extra specifications. 
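For example, an administrator can steer scheduling toward one of the pools shown by manila pool-list by setting the share_backend_name extra specification on a share type. This is a minimal sketch, assuming a share type named default already exists and a back end named cephfs as in the sample output above:

$ manila type-key default set share_backend_name=cephfs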
For more information about share types, see Creating share types . 6.2. Creating share types Share types serve as hints to the Shared File Systems service scheduler to perform placement decisions. Red Hat OpenStack Platform (RHOSP) director configures the Shared File Systems service with a default share type named default, but does not create the share type. Important An end user requires at least one share type to use the Shared File Systems service. Procedure After you deploy the overcloud, run the following command as the cloud administrator to create a share type: The <spec_driver_handles_share_servers> parameter is a Boolean value: For CephFS through NFS or native CephFS, the value is false. For other back ends, the value can be true or false. Set <spec_driver_handles_share_servers> to match the value of the Manila<backend>DriverHandlesShareServers parameter. For example, if you use a NetApp back end, the parameter is called ManilaNetappDriverHandlesShareServers . Add specifications to the default share type or create additional share types to use with multiple configured back ends. For example, configure the default share type to select a CephFS back end and an additional share type that uses a NetApp driver_handles_share_servers=True back end: Note By default, share types are public, which means they are visible to and usable by all cloud projects. However, you can create private share types for use within specific projects. Additional resources For more information about how to make private share types or set additional share-type options, see the Security and Hardening Guide . 6.3. Comparing common capabilities of share types Share types define the common capabilities of shares. Review the common capabilities of share types to understand what you can do with your shares. Table 6.1. Capabilities of share types Capability Values Description driver_handles_share_servers true or false Grants permission to use share networks to create shares. snapshot_support true or false Grants permission to create snapshots of shares. create_share_from_snapshot_support true or false Grants permission to create clones of share snapshots. revert_to_snapshot_support true or false Grants permission to revert your shares to the most recent snapshot. mount_snapshot_support true or false Grants permission to export and mount your snapshots. replication_type dr Grants permission to create replicas for disaster recovery. Only one active export is allowed at a time. readable Grants permission to create read-only replicas. Only one writable, active export is allowed at a time. writable Grants permission to create read/write replicas. Any number of active exports are allowed at a time per share. availability_zones a list of one or more availability zones Grants permission to create shares only on the availability zones listed. 6.4. Planning networking for shared file systems Shared file systems are accessed over a network. It is important to plan the networking on your cloud to ensure that end user clients can connect their shares to workloads that run on Red Hat OpenStack Platform (RHOSP) virtual machines, bare-metal servers, and containers. Depending on the level of security and isolation required for end users, as an administrator, you can set the driver_handles_share_servers parameter to true or false. If you set the driver_handles_share_servers parameter to true, this enables the service to export shares to end user-defined share networks with the help of isolated share servers. 
When the driver_handles_share_servers parameter equals true, users can provision their workloads on self-service share networks. This ensures that their shares are exported by completely isolated NAS file servers on dedicated network segments. The share networks used by end users can be the same as the private project networks that they can create. As an administrator, you must ensure that the physical network to which you map these isolated networks extends to your storage infrastructure. You must also ensure that the network segmentation style by project networks is supported by the storage system used. Storage systems, such as NetApp ONTAP and Dell EMC PowerMax, Unity, and VNX, do not support virtual overlay segmentation styles such as GENEVE or VXLAN. As an alternative, you can terminate the overlay networking at top-of-rack switches and use a more primitive form of networking for your project networks, such as VLAN. Another alternative is to allow VLAN segments on shared provider networks or provide access to a pre-existing segmented network that is already connected to your storage system. If you set the driver_handles_share_servers parameter to false, users cannot create shares on their own share networks. Instead, they must connect their clients to the network configured by the cloud administrator. When the driver_handles_share_servers parameter equals false, director can create a dedicated shared storage network for you. For example, when you deploy the native CephFS back end with standard director templates, director creates a shared provider network called Storage . When you deploy CephFS through the NFS back end, the shared provider network is called StorageNFS . Your end users must connect their clients to the shared storage network to access their shares. Not all shared file system storage drivers support both modes of operation. Regardless of which mode you choose, the service ensures hard data path multi-tenancy isolation guarantees. If you want to offer hard network path multi-tenancy isolation guarantees to tenant workloads as part of a self-service model, you must deploy with back ends that support the driver_handles_share_servers driver mode. For information about network connectivity to the share, see Section 6.5, "Ensuring network connectivity to the share" 6.5. Ensuring network connectivity to the share Clients that need to connect to a file share must have network connectivity to one or more of the export locations for that share. There are many ways to configure networking with the Shared File Systems service, including using network plugins. When the driver_handles_share_servers parameter for a share type equals true, a cloud user can create a share network with the details of a network to which the compute instance attaches and then reference it when creating shares. When the driver_handles_share_servers parameter for a share type equals false, a cloud user must connect their compute instance to the shared storage network. For more information about how to configure and validate network connectivity to a shared network, see Section 7.5, "Connecting to a shared network to access shares" . 6.6. Changing the default quotas in the Shared File Systems service To prevent system capacities from being exhausted without notification, cloud administrators can configure quotas. Quotas are operational limits. The Shared File Systems service (manila) enforces some sensible limits by default. These limits are called default quotas. 
Cloud administrators can override default quotas so that individual projects have different consumption limits. 6.6.1. Updating quotas for projects, users, and share types As a cloud administrator, you can list the quotas for a project or user by using the manila quota-show command. You can update quotas for all users in a project, or a specific project user, or a share type used by the project users. You can update the following quotas for the target you choose: shares : Number of shares that you can create. snapshots : Number of snapshots that you can create. gigabytes : Total size in GB that you can allocate for all shares. snapshot-gigabytes : Total size in GB that you can allocate for all snapshots of shares. share-networks : Total number of share networks that you can create. share_groups : Total number of share groups that you can create. share_group_snapshots : Total number of share group snapshots that you can create. share-replicas : Total number of share replicas that you can create. replica-gigabytes : Total size in GB that you can allocate across all share replicas. Note You can only specify share-type quotas at the project level. You cannot set share-type quotas for specific project users. Important In the following procedures, enter the values carefully. The Shared File Systems service does not detect or report incorrect values. Procedure You can use the following commands to view quotas. If you include the --user option, you can view the quota for a specific user in the specified project. If you omit the --user option, you can view the quotas that apply to all users for the specified project. Similarly, if you include the optional --share-type , you can view the quota for a specific share type as it relates to the project. The --user and --share-type options are mutually exclusive. Example for a project: Example for a project user: Example for a project for a specific share type: Use the manila quota-update command to update the quotas. You can update quotas for all project users, a specific project user, or a share type in a project: Update quotas for all users in a project: Replace <id> with the project ID. This value must be the project ID, not the project name. Update quotas for a specific user in a project: Replace <id> with the project ID. This value must be the project ID, not the project name. Replace <user_id> with the user ID. The value must be the user ID, not the user name. Update quotas for all users who use a specific share type: Replace <id> with the project ID. This value must be the project ID, not the project name. Replace <share_type> with the name or ID of the share type that you want to apply the quota to. Verification The quota-update command does not produce any output. Use the quota-show command to verify that a quota was successfully updated. 6.6.2. Resetting quotas for projects, users, and share types You can remove quota overrides to return quotas to their default values. The target entity is restricted by the default quota that applies to all projects with no overrides. Procedure Use the manila quota-delete command to return quotas to default values. You can return quotas to default values for all project users, a specific project user, or a share type in a project: Reset project quotas: Replace <id> with the project ID. This value must be the project ID, not the project name. Reset quotas for a specific user: Replace <id> with the project ID. This value must be the project ID, not the project name. Replace <user_id> with the user ID. 
The value must be the user ID, not the user name. Reset quotas for a share type used by project users: Replace <id> with the project ID. This value must be the project ID, not the project name. Replace <share_type> with the name or ID of the share type for which the quota must be reset. Verification The quota-delete command does not produce any output. Use the quota-show command to verify whether a quota was successfully reset. List the default quotas for all projects. Default quotas apply to projects that have no overrides. 6.6.3. Updating the default quota values for Shared File Systems service projects As a cloud administrator, you can update default quotas that apply to all projects that do not already have quota overrides. Procedure View the usage statement of the manila quota-class-update command: Note The parameter <class_name> is a positional argument. It identifies the quota class for which the quotas are set. Set the value of this parameter to default . No other quota classes are supported. You can update the values for any of the following optional parameters: --shares <shares> Adds a new value for the shares quota. --snapshots <snapshots> Adds a new value for the snapshots quota. --gigabytes <gigabytes> Adds a new value for the gigabytes quota. --snapshot-gigabytes <snapshot_gigabytes> or --snapshot_gigabytes <snapshot_gigabytes> Adds a new value for the snapshot_gigabytes quota. --share-networks <share_networks> or --share_networks <share_networks> Adds a new value for the share_networks quota. --share-replicas <share_replicas> , --share_replicas <share_replicas> , or --replicas <share_replicas> Adds a new value for the share_replicas quota. --replica-gigabytes <replica_gigabytes> or --replica_gigabytes <replica_gigabytes> Adds a new value for the replica_gigabytes quota. Use the information from the usage statement to update the default quotas. The following example updates the default quotas for shares and gigabytes :
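A minimal sketch of that update, using the example values of 30 shares and 512 GB, followed by a check of the new default values with the manila quota-class-show command:
manila quota-class-update default --shares 30 --gigabytes 512
manila quota-class-show default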
[ "vi /home/stack/templates/storage_customizations.yaml", "parameter_defaults: ManilaEnabledShareProtocols: - NFS ManilaNetappLogin: '<login_name>' ManilaNetappPassword: '<password>' ManilaNetappServerHostname: '<netapp-hostname>' ManilaNetappVserver: '<netapp-vserver>' ManilaNetappDriverHandlesShareServers: 'false'", "[stack@undercloud ~]USD source ~/stackrc openstack overcloud deploy --timeout 100 --stack overcloud --templates /usr/share/openstack-tripleo-heat-templates -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -r /home/stack/templates/roles/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-netapp-config.yaml -e /home/stack/templates/storage_customizations.yaml", "source ~/overcloudrc", "manila service-list +----+--------+--------+------+---------+-------+----------------------------+ | Id | Binary | Host | Zone | Status | State | Updated_at | +----+--------+--------+------+---------+-------+----------------------------+ | 2 | manila-scheduler | hostgroup | nova | enabled | up | 2021-03-24T16:49:09.000000 | | 5 | manila-share | hostgroup@cephfs | nova | enabled | up | 2021-03-24T16:49:12.000000 | | 8 | manila-share | hostgroup@tripleo_netapp | nova | enabled | up | 2021-03-24T16:49:06.000000 |", "source ~/overcloudrc", "vi /home/stack/templates/storage_customizations.yaml", "parameter_defaults: ManilaEnabledShareProtocols: - NFS - CEPHFS", "openstack overcloud deploy -e /home/stack/templates/storage_customizations.yaml", "manila pool-list --detail +------------------------------------+----------------------------+ | Property | Value | +------------------------------------+----------------------------+ | name | hostgroup@cephfs#cephfs | | pool_name | cephfs | | total_capacity_gb | 1978 | | free_capacity_gb | 1812 | | driver_handles_share_servers | False | | snapshot_support | True | | create_share_from_snapshot_support | False | | revert_to_snapshot_support | False | | mount_snapshot_support | False | +------------------------------------+----------------------------+ +------------------------------------+-----------------------------------+ | Property | Value | +------------------------------------+-----------------------------------+ | name | hostgroup@tripleo_netapp#aggr1_n1 | | pool_name | aggr1_n1 | | total_capacity_gb | 6342.1 | | free_capacity_gb | 6161.99 | | driver_handles_share_servers | False | | mount_snapshot_support | False | | replication_type | None | | replication_domain | None | | sg_consistent_snapshot_support | host | | ipv4_support | True | | ipv6_support | False | +------------------------------------+-----------------------------------+ +------------------------------------+-----------------------------------+ | Property | Value | +------------------------------------+-----------------------------------+ | name | hostgroup@tripleo_netapp#aggr1_n2 | | pool_name | aggr1_n2 | | total_capacity_gb | 6342.1 | | free_capacity_gb | 6209.26 | | snapshot_support | True | | create_share_from_snapshot_support | True | | revert_to_snapshot_support | True | | driver_handles_share_servers | False | | mount_snapshot_support | False | | replication_type | None | | replication_domain | None | | sg_consistent_snapshot_support | host | | ipv4_support | True 
| | ipv6_support | False | +------------------------------------+-----------------------------------+", "manila type-create default <spec_driver_handles_share_servers>", "(overcloud) [stack@undercloud-0 ~]USD manila type-create default false --extra-specs share_backend_name='cephfs' (overcloud) [stack@undercloud-0 ~]USD manila type-create netapp true --extra-specs share_backend_name='tripleo_netapp'", "manila quota-show", "manila quota-show --project af2838436f3f4cf6896399dd97c4c050 +-----------------------+----------------------------------+ | Property | Value | +-----------------------+----------------------------------+ | gigabytes | 1000 | | id | af2838436f3f4cf6896399dd97c4c050 | | replica_gigabytes | 1000 | | share_group_snapshots | 50 | | share_groups | 49 | | share_networks | 10 | | share_replicas | 100 | | shares | 50 | | snapshot_gigabytes | 1000 | | snapshots | 50 | +-----------------------+----------------------------------+", "manila quota-show --project af2838436f3f4cf6896399dd97c4c050 --user 81ebb491dd0e4c2aae0775dd564e76d1 +-----------------------+----------------------------------+ | Property | Value | +-----------------------+----------------------------------+ | gigabytes | 500 | | id | af2838436f3f4cf6896399dd97c4c050 | | replica_gigabytes | 1000 | | share_group_snapshots | 50 | | share_groups | 49 | | share_networks | 10 | | share_replicas | 100 | | shares | 25 | | snapshot_gigabytes | 1000 | | snapshots | 50 | +-----------------------+----------------------------------+", "manila quota-show --project af2838436f3f4cf6896399dd97c4c050 --share-type dhss_false +---------------------+----------------------------------+ | Property | Value | +---------------------+----------------------------------+ | gigabytes | 1000 | | id | af2838436f3f4cf6896399dd97c4c050 | | replica_gigabytes | 1000 | | share_replicas | 100 | | shares | 15 | | snapshot_gigabytes | 1000 | | snapshots | 50 | +---------------------+----------------------------------+", "manila quota-update <id> [--shares <share_quota> --gigabytes <share_gigabytes_quota> ...]", "manila quota-update <id> --user <user_id> [--shares <new_share_quota> --gigabytes <new_share_gigabytes_quota> ...]", "manila quota-update <id> --share-type <share_type> [--shares <new_share_quota>30 --gigabytes <new-share_gigabytes_quota> ...]", "manila quota-delete --project <id>", "manila quota-delete --project <id> --user <user_id>", "manila quota-delete --project <id> --share-type <share_type>", "manila quota-class-show default", "manila help quota-class-update usage: manila quota-class-update [--shares <shares>] [--snapshots <snapshots>] [--gigabytes <gigabytes>] [--snapshot-gigabytes <snapshot_gigabytes>] [--share-networks <share_networks>] [--share-replicas <share_replicas>] [--replica-gigabytes <replica_gigabytes>] <class_name>", "manila quota-class-update default --shares 30 --gigabytes 512 manila quota-class-show default +-----------------------+---------+ | Property | Value | +-----------------------+---------+ | gigabytes | 512 | | id | default | | replica_gigabytes | 1000 | | share_group_snapshots | 50 | | share_groups | 50 | | share_networks | 10 | | share_replicas | 100 | | shares | 30 | | snapshot_gigabytes | 1000 | | snapshots | 50 | +-----------------------+---------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/storage_guide/assembly_manila-configuring-the-shared-file-systems-service_assembly-swift
Networking
Networking OpenShift Dedicated 4 Configuring OpenShift Dedicated networking Red Hat OpenShift Documentation Team
[ "oc get -n openshift-dns-operator deployment/dns-operator", "NAME READY UP-TO-DATE AVAILABLE AGE dns-operator 1/1 1 1 23h", "oc get clusteroperator/dns", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE dns 4.1.15-0.11 True False False 92m", "oc describe dns.operator/default", "Name: default Namespace: Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: DNS Status: Cluster Domain: cluster.local 1 Cluster IP: 172.30.0.10 2", "oc edit dns.operator/default", "apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: cache: negativeTTL: 0s positiveTTL: 0s logLevel: Normal nodePlacement: {} operatorLogLevel: Normal servers: - name: example-server 1 zones: - example.com 2 forwardPlugin: policy: Random 3 upstreams: 4 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 5 policy: Random 6 protocolStrategy: \"\" 7 transportConfig: {} 8 upstreams: - type: SystemResolvConf 9 - type: Network address: 1.2.3.4 10 port: 53 11 status: clusterDomain: cluster.local clusterIP: x.y.z.10 conditions:", "oc describe clusteroperators/dns", "Status: Conditions: Last Transition Time: <date> Message: DNS \"default\" is available. Reason: AsExpected Status: True Type: Available Last Transition Time: <date> Message: Desired and current number of DNSes are equal Reason: AsExpected Status: False Type: Progressing Last Transition Time: <date> Reason: DNSNotDegraded Status: False Type: Degraded Last Transition Time: <date> Message: DNS default is upgradeable: DNS Operator can be upgraded Reason: DNSUpgradeable Status: True Type: Upgradeable", "oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator", "oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"logLevel\":\"Debug\"}}' --type=merge", "oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"logLevel\":\"Trace\"}}' --type=merge", "oc get configmap/dns-default -n openshift-dns -o yaml", "errors log . 
{ class all }", "oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"operatorLogLevel\":\"Debug\"}}' --type=merge", "oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"operatorLogLevel\":\"Trace\"}}' --type=merge", "oc get dnses.operator -A -oyaml", "logLevel: Trace operatorLogLevel: Debug", "oc logs -n openshift-dns ds/dns-default", "oc edit dns.operator.openshift.io/default", "apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: cache: positiveTTL: 1h 1 negativeTTL: 0.5h10m 2", "get configmap/dns-default -n openshift-dns -o yaml", "cache 3600 { denial 9984 2400 }", "patch dns.operator.openshift.io default --type merge --patch '{\"spec\":{\"managementState\":\"Unmanaged\"}}'", "oc get dns.operator.openshift.io default -ojsonpath='{.spec.managementState}'", "\"Unmanaged\"", "oc adm taint nodes <node_name> dns-only=abc:NoExecute 1", "oc edit dns.operator/default", "spec: nodePlacement: tolerations: - effect: NoExecute key: \"dns-only\" 1 operator: Equal value: abc tolerationSeconds: 3600 2", "spec: nodePlacement: nodeSelector: 1 node-role.kubernetes.io/control-plane: \"\"", "oc edit dns.operator/default", "apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: - example.com 2 forwardPlugin: transportConfig: transport: TLS 3 tls: caBundle: name: mycacert serverName: dnstls.example.com 4 policy: Random 5 upstreams: 6 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 7 transportConfig: transport: TLS tls: caBundle: name: mycacert serverName: dnstls.example.com upstreams: - type: Network 8 address: 1.2.3.4 9 port: 53 10", "oc get configmap/dns-default -n openshift-dns -o yaml", "apiVersion: v1 data: Corefile: | example.com:5353 { forward . 1.1.1.1 2.2.2.2:5353 } bar.com:5353 example.com:5353 { forward . 3.3.3.3 4.4.4.4:5454 1 } .:5353 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . 
/etc/resolv.conf 1.2.3.4:53 { policy Random } cache 30 reload } kind: ConfigMap metadata: labels: dns.operator.openshift.io/owning-dns: default name: dns-default namespace: openshift-dns", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com", "nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists", "httpCaptureCookies: - matchType: Exact maxLength: 128 name: MYCOOKIE", "httpCaptureHeaders: request: - maxLength: 256 name: Connection - maxLength: 128 name: User-Agent response: - maxLength: 256 name: Content-Type - maxLength: 256 name: Content-Length", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old", "oc edit IngressController default -n openshift-ingress-operator", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe IngressController default -n openshift-ingress-operator", "Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "Issuer: C=US, O=Example Inc, CN=Example Global G2 TLS RSA SHA256 2020 CA1 Subject: SOME SIGNED CERT X509v3 CRL Distribution Points: Full Name: URI:http://crl.example.com/example.crl", "oc create configmap router-ca-certs-default --from-file=ca-bundle.pem=client-ca.crt \\ 1 -n openshift-config", "oc edit IngressController default -n openshift-ingress-operator", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: clientTLS: clientCertificatePolicy: Required clientCA: name: router-ca-certs-default allowedSubjectPatterns: - \"^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShiftUSD\"", "openssl x509 -in custom-cert.pem -noout -subject subject= /CN=example.com/ST=NC/C=US/O=Security/OU=OpenShift", "oc describe --namespace=openshift-ingress-operator ingresscontroller/default", "oc describe clusteroperators/ingress", "oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name>", "oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: <custom_name> 1 namespace: openshift-ingress-operator spec: defaultCertificate: name: <custom-ingress-custom-certs> 2 replicas: 1 3 domain: <custom_domain> 4", "oc create -f custom-ingress-controller.yaml", "oc --namespace openshift-ingress-operator get ingresscontrollers", "NAME AGE default 10m", "oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key", "oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default --patch '{\"spec\":{\"defaultCertificate\":{\"name\":\"custom-certs-default\"}}}'", "echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate", "subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = 
Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GM", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: defaultCertificate: name: custom-certs-default", "oc patch -n openshift-ingress-operator ingresscontrollers/default --type json -p USD'- op: remove\\n path: /spec/defaultCertificate'", "echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate", "subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT", "oc create -n openshift-ingress-operator serviceaccount thanos && oc describe -n openshift-ingress-operator serviceaccount thanos", "Name: thanos Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-kfvf2 Mountable secrets: thanos-dockercfg-kfvf2 Tokens: <none> Events: <none>", "oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: thanos-token namespace: openshift-ingress-operator annotations: kubernetes.io/service-account.name: thanos type: kubernetes.io/service-account-token EOF", "oc apply -f - <<EOF apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-trigger-auth-prometheus namespace: openshift-ingress-operator spec: secretTargetRef: - parameter: bearerToken name: thanos-token key: token - parameter: ca name: thanos-token key: ca.crt EOF", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader namespace: openshift-ingress-operator rules: - apiGroups: - \"\" resources: - pods - nodes verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch - apiGroups: - \"\" resources: - namespaces verbs: - get", "oc apply -f thanos-metrics-reader.yaml", "oc adm policy -n openshift-ingress-operator add-role-to-user thanos-metrics-reader -z thanos --role-namespace=openshift-ingress-operator", "oc adm policy -n openshift-ingress-operator add-cluster-role-to-user cluster-monitoring-view -z thanos", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: ingress-scaler namespace: openshift-ingress-operator spec: scaleTargetRef: 1 apiVersion: operator.openshift.io/v1 kind: IngressController name: default envSourceContainerName: ingress-operator minReplicaCount: 1 maxReplicaCount: 20 2 cooldownPeriod: 1 pollingInterval: 1 triggers: - type: prometheus metricType: AverageValue metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 3 namespace: openshift-ingress-operator 4 metricName: 'kube-node-role' threshold: '1' query: 'sum(kube_node_role{role=\"worker\",service=\"kube-state-metrics\"})' 5 authModes: \"bearer\" authenticationRef: name: keda-trigger-auth-prometheus", "oc apply -f ingress-autoscaler.yaml", "oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas:", "replicas: 3", "oc get pods -n openshift-ingress", "NAME READY STATUS RESTARTS AGE router-default-7b5df44ff-l9pmm 2/2 Running 0 17h router-default-7b5df44ff-s5sl5 2/2 Running 0 3d22h router-default-7b5df44ff-wwsth 2/2 Running 0 66s", "oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'", "2", "oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"replicas\": 3}}' --type=merge", "ingresscontroller.operator.openshift.io/default patched", "oc get -n openshift-ingress-operator 
ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'", "3", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 3 1", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container", "oc -n openshift-ingress logs deployment.apps/router-default -c logs", "2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 \"GET / HTTP/1.1\"", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 maxLength: 4096 port: 10514", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container container: maxLength: 8192", "oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"threadCount\": 8}}}'", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3", "oc create -f <name>-ingress-controller.yaml 1", "oc --all-namespaces=true get ingresscontrollers", "oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"healthCheckInterval\": \"8s\"}}}'", "oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF", "oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge", "spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed", "oc edit IngressController", "spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: DENY", "apiVersion: 
route.openshift.io/v1 kind: Route spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: SAMEORIGIN", "frontend public http-response set-header X-Frame-Options 'DENY' frontend fe_sni http-response set-header X-Frame-Options 'DENY' frontend fe_no_sni http-response set-header X-Frame-Options 'DENY' backend be_secure:openshift-monitoring:alertmanager-main http-response set-header X-Frame-Options 'SAMEORIGIN'", "oc -n openshift-ingress-operator edit ingresscontroller/default", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: actions: 1 request: 2 - name: X-Forwarded-Client-Cert 3 action: type: Set 4 set: value: \"%{+Q}[ssl_c_der,base64]\" 5 - name: X-SSL-Client-Der action: type: Delete", "oc edit IngressController", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append", "oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true 1", "oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: \"true\"", "oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=false 1", "oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=false", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: \"false\"", "oc -n openshift-ingress-operator edit ingresscontroller/default", "spec: endpointPublishingStrategy: hostNetwork: protocol: PROXY type: HostNetwork", "spec: endpointPublishingStrategy: nodePort: protocol: PROXY type: NodePortService", "spec: endpointPublishingStrategy: private: protocol: PROXY type: Private", "oc edit ingresses.config/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2", "oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed", "oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None", "oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"httpHeaders\":{\"headerNameCaseAdjustments\":[\"Host\"]}}}'", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: <application_name> namespace: <application_name>", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: headerNameCaseAdjustments: - Host", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: my-application namespace: my-application spec: to: kind: Service name: my-application", "oc edit -n openshift-ingress-operator ingresscontrollers/default", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpCompression: mimeTypes: - \"text/html\" - 
\"text/css; charset=utf-8\" - \"application/json\"", "oc get pods -n openshift-ingress", "NAME READY STATUS RESTARTS AGE router-default-76bfffb66c-46qwp 1/1 Running 0 11h", "oc rsh <router_pod_name> cat metrics-auth/statsUsername", "oc rsh <router_pod_name> cat metrics-auth/statsPassword", "oc describe pod <router_pod>", "curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics", "curl -u user:password https://<router_IP>:<stats_port>/metrics -k", "curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics", "HELP haproxy_backend_connections_total Total number of connections. TYPE haproxy_backend_connections_total gauge haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route\"} 0 haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route-alt\"} 0 haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route01\"} 0 HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value. TYPE haproxy_exporter_server_threshold gauge haproxy_exporter_server_threshold{type=\"current\"} 11 haproxy_exporter_server_threshold{type=\"limit\"} 500 HELP haproxy_frontend_bytes_in_total Current total of incoming bytes. TYPE haproxy_frontend_bytes_in_total gauge haproxy_frontend_bytes_in_total{frontend=\"fe_no_sni\"} 0 haproxy_frontend_bytes_in_total{frontend=\"fe_sni\"} 0 haproxy_frontend_bytes_in_total{frontend=\"public\"} 119070 HELP haproxy_server_bytes_in_total Current total of incoming bytes. TYPE haproxy_server_bytes_in_total gauge haproxy_server_bytes_in_total{namespace=\"\",pod=\"\",route=\"\",server=\"fe_no_sni\",service=\"\"} 0 haproxy_server_bytes_in_total{namespace=\"\",pod=\"\",route=\"\",server=\"fe_sni\",service=\"\"} 0 haproxy_server_bytes_in_total{namespace=\"default\",pod=\"docker-registry-5-nk5fz\",route=\"docker-registry\",server=\"10.130.0.89:5000\",service=\"docker-registry\"} 0 haproxy_server_bytes_in_total{namespace=\"default\",pod=\"hello-rc-vkjqx\",route=\"hello-route\",server=\"10.130.0.90:8080\",service=\"hello-svc-1\"} 0", "http://<user>:<password>@<router_IP>:<stats_port>", "http://<user>:<password>@<router_ip>:1936/metrics;csv", "oc -n openshift-config create configmap my-custom-error-code-pages --from-file=error-page-503.http --from-file=error-page-404.http", "oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"httpErrorCodePages\":{\"name\":\"my-custom-error-code-pages\"}}}' --type=merge", "oc get cm default-errorpages -n openshift-ingress", "NAME DATA AGE default-errorpages 2 25s 1", "oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http", "oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http", "oc new-project test-ingress", "oc new-app django-psql-example", "curl -vk <route_hostname>", "curl -vk <route_hostname>", "oc -n openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile", "oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"maxConnections\": 7500}}}'", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: 
- Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" 1 podSelector: {} policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: \"\" podSelector: {} policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchLabels: role: frontend - from: - podSelector: matchLabels: role: backend", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchExpressions: - {key: role, operator: In, values: [frontend, backend]}", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: role: db ingress: - from: - podSelector: matchLabels: role: frontend --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: role: client ingress: - from: - podSelector: matchLabels: role: frontend", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy3 spec: podSelector: matchExpressions: - {key: role, operator: In, values: [db, client]} ingress: - from: - podSelector: matchLabels: role: frontend", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "touch <policy_name>.yaml", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} policyTypes: - Ingress ingress: []", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-traffic-pod spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y", "oc apply -f <policy_name>.yaml -n <namespace>", "networkpolicy.networking.k8s.io/deny-by-default created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default 1 spec: podSelector: {} 2 ingress: [] 3", "oc apply -f deny-by-default.yaml", "networkpolicy.networking.k8s.io/deny-by-default created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external namespace: default spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}", "oc apply -f web-allow-external.yaml", 
"networkpolicy.networking.k8s.io/web-allow-external created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-namespaces namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2", "oc apply -f web-allow-all-namespaces.yaml", "networkpolicy.networking.k8s.io/web-allow-all-namespaces created", "oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80", "oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2", "oc apply -f web-allow-prod.yaml", "networkpolicy.networking.k8s.io/web-allow-prod created", "oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80", "oc create namespace prod", "oc label namespace/prod purpose=production", "oc create namespace dev", "oc label namespace/dev purpose=testing", "oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "wget: download timed out", "oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. 
Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc get networkpolicy", "oc describe networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy allow-same-namespace", "Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress", "oc delete networkpolicy <policy_name> -n <namespace>", "networkpolicy.networking.k8s.io/default-deny deleted", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF", "oc describe networkpolicy", "Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress", "I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1 I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4 F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4", "I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1 F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface", "{ \"type\": 
\"sdnToOvn\" }", "{ \"type\": \"sdnToOvn\", \"sdn_to_ovn\": { \"transit_ipv4\": \"192.168.255.0/24\", \"join_ipv4\": \"192.168.255.0/24\", \"masquerade_ipv4\": \"192.168.255.0/24\" } }", "ocm post /api/clusters_mgmt/v1/clusters/{cluster_id}/migrations 1 --body=myjsonfile.json 2", "{ \"kind\": \"ClusterMigration\", \"href\": \"/api/clusters_mgmt/v1/clusters/2gnts65ra30sclb114p8qdc26g5c8o3e/migrations/2gois8j244rs0qrfu9ti2o790jssgh9i\", \"id\": \"7sois8j244rs0qrhu9ti2o790jssgh9i\", \"cluster_id\": \"2gnts65ra30sclb114p8qdc26g5c8o3e\", \"type\": \"sdnToOvn\", \"state\": { \"value\": \"scheduled\", \"description\": \"\" }, \"sdn_to_ovn\": { \"transit_ipv4\": \"100.65.0.0/16\", \"join_ipv4\": \"100.66.0.0/16\" }, \"creation_timestamp\": \"2025-02-05T14:56:34.878467542Z\", \"updated_timestamp\": \"2025-02-05T14:56:34.878467542Z\" }", "ocm get cluster USDcluster_id/migration 1", "oc annotate namespace <namespace> k8s.ovn.org/multicast-enabled=true", "apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: \"true\"", "oc project <project>", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF", "POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')", "oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname", "CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')", "oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"", "mlistener", "oc new-project hello-openshift", "oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json", "oc expose pod/hello-openshift", "oc expose svc hello-openshift", "oc get routes -o yaml <name of resource> 1", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: www.example.com 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift", "oc get ingresses.config/cluster -o jsonpath={.spec.domain}", "oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1", "oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s", "oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header=max-age=31536000; includeSubDomains;preload\"", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 spec: host: def.abc.com tls: termination: \"reencrypt\" wildcardPolicy: \"Subdomain\"", "oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"", "metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0", "oc annotate route --all -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"", "oc get route --all-namespaces -o 
go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'", "Name: routename HSTS: max-age=0", "oc annotate route <route_name> router.openshift.io/cookie_name=\"<cookie_name>\"", "oc annotate route my_route router.openshift.io/cookie_name=\"my_cookie\"", "ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}')", "curl USDROUTE_NAME -k -c /tmp/cookie_jar", "curl USDROUTE_NAME -k -b /tmp/cookie_jar", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: \"/test\" 1 to: kind: Service name: service-name", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: DENY", "apiVersion: route.openshift.io/v1 kind: Route spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: SAMEORIGIN", "frontend public http-response set-header X-Frame-Options 'DENY' frontend fe_sni http-response set-header X-Frame-Options 'DENY' frontend fe_no_sni http-response set-header X-Frame-Options 'DENY' backend be_secure:openshift-monitoring:alertmanager-main http-response set-header X-Frame-Options 'SAMEORIGIN'", "apiVersion: route.openshift.io/v1 kind: Route spec: host: app.example.com tls: termination: edge to: kind: Service name: app-example httpHeaders: actions: 1 response: 2 - name: Content-Location 3 action: type: Set 4 set: value: /lang/en-us 5", "oc -n app-example create -f app-example-route.yaml", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1", "metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 192.168.1.10", "metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 192.168.1.10 192.168.1.11 192.168.1.12", "metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 192.168.1.0/24", "metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend spec: rules: tls: - {} 1", "oc create -f example-ingress.yaml", "oc get routes -o yaml", "apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 spec: tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3", "oc create secret generic dest-ca-cert --from-file=tls.crt=<file_path>", "oc -n test-ns create secret generic dest-ca-cert --from-file=tls.crt=tls.crt", "secret/dest-ca-cert created", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend annotations: route.openshift.io/termination: reencrypt route.openshift.io/destination-ca-certificate-secret: secret-ca-cert spec: tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] 
-----END CERTIFICATE-----", "openssl rsa -in password_protected_tls.key -out tls.key", "oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----", "openssl rsa -in password_protected_tls.key -out tls.key", "oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----", "oc create route passthrough route-passthrough-secured --service=frontend --port=8080", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend", "oc create role secret-reader --verb=get,list,watch --resource=secrets --resource-name=<secret-name> \\ 1 --namespace=<current-namespace> 2", "oc create rolebinding secret-reader-binding --role=secret-reader --serviceaccount=openshift-ingress:router --namespace=<current-namespace> 1", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: myedge namespace: test spec: host: myedge-test.apps.example.com tls: externalCertificate: name: <secret-name> 1 termination: edge [...] [...]", "oc apply -f <route.yaml> 1" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html-single/networking/index
Configuring network functions virtualization
Configuring network functions virtualization Red Hat OpenStack Platform 17.1 Planning and configuring network functions virtualization (NFV) in Red Hat OpenStack Platform OpenStack Documentation Team [email protected]
[ "openstack port set --disable-port-security <port-id>", "openstack network set --disable-port-security <network-id>", "members - type: ovs_dpdk_port name: dpdk0 driver: mlx5_core members: - type: interface name: enp3s0f0", "yum install -y mstflint mstconfig -d <PF PCI BDF> q ESWITCH_IPV4_TTL_MODIFY_ENABLE", "mstconfig -d <PF PCI BDF> s ESWITCH_IPV4_TTL_MODIFY_ENABLE=0`", "openstack baremetal introspection data save <UUID> | jq .numa_topology", "{ \"cpus\": [ { \"cpu\": 1, \"thread_siblings\": [ 1, 17 ], \"numa_node\": 0 }, { \"cpu\": 2, \"thread_siblings\": [ 10, 26 ], \"numa_node\": 1 }, { \"cpu\": 0, \"thread_siblings\": [ 0, 16 ], \"numa_node\": 0 }, { \"cpu\": 5, \"thread_siblings\": [ 13, 29 ], \"numa_node\": 1 }, { \"cpu\": 7, \"thread_siblings\": [ 15, 31 ], \"numa_node\": 1 }, { \"cpu\": 7, \"thread_siblings\": [ 7, 23 ], \"numa_node\": 0 }, { \"cpu\": 1, \"thread_siblings\": [ 9, 25 ], \"numa_node\": 1 }, { \"cpu\": 6, \"thread_siblings\": [ 6, 22 ], \"numa_node\": 0 }, { \"cpu\": 3, \"thread_siblings\": [ 11, 27 ], \"numa_node\": 1 }, { \"cpu\": 5, \"thread_siblings\": [ 5, 21 ], \"numa_node\": 0 }, { \"cpu\": 4, \"thread_siblings\": [ 12, 28 ], \"numa_node\": 1 }, { \"cpu\": 4, \"thread_siblings\": [ 4, 20 ], \"numa_node\": 0 }, { \"cpu\": 0, \"thread_siblings\": [ 8, 24 ], \"numa_node\": 1 }, { \"cpu\": 6, \"thread_siblings\": [ 14, 30 ], \"numa_node\": 1 }, { \"cpu\": 3, \"thread_siblings\": [ 3, 19 ], \"numa_node\": 0 }, { \"cpu\": 2, \"thread_siblings\": [ 2, 18 ], \"numa_node\": 0 } ], \"ram\": [ { \"size_kb\": 66980172, \"numa_node\": 0 }, { \"size_kb\": 67108864, \"numa_node\": 1 } ], \"nics\": [ { \"name\": \"ens3f1\", \"numa_node\": 1 }, { \"name\": \"ens3f0\", \"numa_node\": 1 }, { \"name\": \"ens2f0\", \"numa_node\": 0 }, { \"name\": \"ens2f1\", \"numa_node\": 0 }, { \"name\": \"ens1f1\", \"numa_node\": 0 }, { \"name\": \"ens1f0\", \"numa_node\": 0 }, { \"name\": \"eno4\", \"numa_node\": 0 }, { \"name\": \"eno1\", \"numa_node\": 0 }, { \"name\": \"eno3\", \"numa_node\": 0 }, { \"name\": \"eno2\", \"numa_node\": 0 } ] }", "cat /sys/devices/system/cpu/cpuidle/current_driver acpi_idle", "[stack@director ~]USD sudo subscription-manager register", "[stack@director ~]USD sudo subscription-manager list --available --all --matches=\"Red Hat OpenStack\" Subscription Name: Name of SKU Provides: Red Hat Single Sign-On Red Hat Enterprise Linux Workstation Red Hat CloudForms Red Hat OpenStack Red Hat Software Collections (for RHEL Workstation) SKU: SKU-Number Contract: Contract-Number Pool ID: {Pool-ID}-123456 Provides Management: Yes Available: 1 Suggested: 1 Service Level: Support-level Service Type: Service-Type Subscription Type: Sub-type Ends: End-date System Type: Physical", "[stack@director ~]USD sudo subscription-manager attach --pool={Pool-ID}-123456", "subscription-manager repos --disable=*", "sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=ansible-2.9-for-rhel-9-x86_64-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=rhel-9-for-x86_64-nfv-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms", "[stack@director ~]USD sudo dnf update -y [stack@director ~]USD sudo reboot", "source ~/stackrc", "openstack overcloud roles generate -o /home/stack/templates/roles_data_compute_sriov.yaml Controller ComputeSriov", "openstack overcloud roles generate -o /home/stack/templates/ roles_data.yaml Controller ComputeOvsDpdk ComputeOvsDpdkSriov 
Compute:ComputeOvsHwOffload", "sudo openstack tripleo container image prepare --roles-file ~/templates/roles_data_compute_sriov.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml -e ~/containers-prepare-parameter.yaml --output-env-file=/home/stack/templates/overcloud_images.yaml", "lspci -nn -s <pci_device_address>", "3b:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [<vendor_id>: <product_id>] (rev 02)", "openstack baremetal introspection data save <baremetal_node_name> | jq '.inventory.interfaces[] | .name, .vendor, .product'", "source ~/stackrc", "parameter_defaults: ComputeSriovParameters: NovaPCIPassthrough: - vendor_id: \"<vendor_id>\" product_id: \"<product_id>\" address: <NIC_address> physical_network: \"<physical_network>\"", "parameter_defaults: ComputeSriovParameters: NovaPCIPassthrough: - vendor_id: \"<vendor_id>\" product_id: \"<product_id>\" address: <NIC_address> physical_network: \"<physical_network>\" NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - AggregateInstanceExtraSpecsFilter", "source ~/stackrc", "ComputeSriovParameters: IsolCpusList: 9-63,73-127 KernelArgs: default_hugepagesz=1GB hugepagesz=1G hugepages=100 amd_iommu=on iommu=pt numa_balancing=disable processor.max_cstate=0 isolcpus=9-63,73-127 NovaReservedHostMemory: 4096 NovaComputeCpuSharedSet: 0-8,64-72 NovaComputeCpuDedicatedSet: 9-63,73-127", "parameter_defaults: NeutronNetworkType: 'vlan' NeutronNetworkVLANRanges: - tenant:22:22 - tenant:25:25 NeutronTunnelTypes: ''", "source ~/stackrc", "- name: ComputeSriov ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml", "- name: ComputeSriov ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml extra_vars: kernel_args: 'default_hugepagesz=1GB hugepagesz=1G hugepages=100 amd_iommu=on iommu=pt isolcpus=9-63,73-127' tuned_isolated_cores: '9-63,73-127' tuned_profile: 'cpu-partitioning' reboot_wait_timeout: 1800", "source ~/stackrc", "- type: sriov_pf name: enp196s0f0np0 mtu: 9000 numvfs: 16 use_dhcp: false defroute: false hotplug: true promisc: false", "- name: ComputeSriov count: 2 hostname_format: compute-%index% defaults: networks: - network: internal_api subnet: internal_api_subnet - network: tenant subnet: tenant_subnet - network: storage subnet: storage_subnet network_config: template: /home/stack/templates/single_nic_vlans.j2", "source ~/stackrc", "- type: sriov_pf name: <interface_name> use_dhcp: false numvfs: <number_of_vfs> promisc: <true/false>", "- type: <bond_type> name: internal_bond bonding_options: mode=<bonding_option> use_dhcp: false members: - type: sriov_vf device: <pf_device_name> vfid: <vf_id> - type: sriov_vf device: <pf_device_name> vfid: <vf_id> - type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: internal_bond addresses: - ip_netmask: get_param: InternalApiIpSubnet routes: list_concat_unique: - get_param: InternalApiInterfaceRoutes", "NovaPCIPassthrough: - address: \"0000:19:0e.3\" trusted: \"true\" physical_network: \"sriov1\" - address: \"0000:19:0e.0\" trusted: \"true\" physical_network: \"sriov2\"", "parameter_defaults: ComputeParameters: KernelArgs: \"intel_iommu=on iommu=pt\"", "- type: linux_bond name: bond_api bonding_options: \"mode=active-backup\" members: - type: sriov_vf 
device: eno2 vfid: 1 vlan_id: get_param: InternalApiNetworkVlanID spoofcheck: false - type: sriov_vf device: eno3 vfid: 1 vlan_id: get_param: InternalApiNetworkVlanID spoofcheck: false addresses: - ip_netmask: get_param: InternalApiIpSubnet routes: list_concat_unique: - get_param: InternalApiInterfaceRoutes", "- type: ovs_bridge name: br-bond use_dhcp: true members: - type: vlan vlan_id: get_param: TenantNetworkVlanID addresses: - ip_netmask: get_param: TenantIpSubnet routes: list_concat_unique: - get_param: ControlPlaneStaticRoutes - type: ovs_bond name: bond_vf ovs_options: \"bond_mode=active-backup\" members: - type: sriov_vf device: p2p1 vfid: 2 - type: sriov_vf device: p2p2 vfid: 2", "- type: ovs_user_bridge name: br-link0 use_dhcp: false mtu: 9000 ovs_extra: - str_replace: template: set port br-link0 tag=_VLAN_TAG_ params: _VLAN_TAG_: get_param: TenantNetworkVlanID addresses: - ip_netmask: list_concat_unique: - get_param: TenantInterfaceRoutes members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 ovs_extra: - set port dpdkbond0 bond_mode=balance-slb members: - type: ovs_dpdk_port name: dpdk0 members: - type: sriov_vf device: eno2 vfid: 3 - type: ovs_dpdk_port name: dpdk1 members: - type: sriov_vf device: eno3 vfid: 3", "source ~/stackrc", "openstack overcloud deploy --log-file overcloud_deployment.log --templates /usr/share/openstack-tripleo-heat-templates/ --stack overcloud -n /home/stack/templates/network_data.yaml -r /home/stack/templates/roles_data_compute_sriov.yaml -e /home/stack/templates/overcloud-networks-deployed.yaml -e /home/stack/templates/overcloud-vip-deployed.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/overcloud-images.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/ neutron-ovn-dvr-ha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/ neutron-ovn-sriov.yaml -e /home/stack/templates/sriov-overrides.yaml", "sudo cat /sys/class/net/p4p1/device/sriov_numvfs 10 sudo cat /sys/class/net/p4p2/device/sriov_numvfs 10", "sudo ovs-vsctl show", "b6567fa8-c9ec-4247-9a08-cbf34f04c85f Manager \"ptcp:6640:127.0.0.1\" is_connected: true Bridge br-sriov2 Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: netdev Port phy-br-sriov2 Interface phy-br-sriov2 type: patch options: {peer=int-br-sriov2} Port br-sriov2 Interface br-sriov2 type: internal Bridge br-sriov1 Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: netdev Port phy-br-sriov1 Interface phy-br-sriov1 type: patch options: {peer=int-br-sriov1} Port br-sriov1 Interface br-sriov1 type: internal Bridge br-ex Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: netdev Port br-ex Interface br-ex type: internal Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Bridge br-tenant Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: netdev Port br-tenant tag: 305 Interface br-tenant type: internal Port phy-br-tenant Interface phy-br-tenant type: patch options: {peer=int-br-tenant} Port dpdkbond0 Interface dpdk0 type: dpdk options: {dpdk-devargs=\"0000:18:0e.0\"} Interface dpdk1 type: dpdk options: {dpdk-devargs=\"0000:18:0a.0\"} Bridge br-tun Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: netdev Port vxlan-98140025 Interface vxlan-98140025 type: vxlan options: {df_default=\"true\", egress_pkt_mark=\"0\", in_key=flow, local_ip=\"152.20.0.229\", 
out_key=flow, remote_ip=\"152.20.0.37\"} Port br-tun Interface br-tun type: internal Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Port vxlan-98140015 Interface vxlan-98140015 type: vxlan options: {df_default=\"true\", egress_pkt_mark=\"0\", in_key=flow, local_ip=\"152.20.0.229\", out_key=flow, remote_ip=\"152.20.0.21\"} Port vxlan-9814009f Interface vxlan-9814009f type: vxlan options: {df_default=\"true\", egress_pkt_mark=\"0\", in_key=flow, local_ip=\"152.20.0.229\", out_key=flow, remote_ip=\"152.20.0.159\"} Port vxlan-981400cc Interface vxlan-981400cc type: vxlan options: {df_default=\"true\", egress_pkt_mark=\"0\", in_key=flow, local_ip=\"152.20.0.229\", out_key=flow, remote_ip=\"152.20.0.204\"} Bridge br-int Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: netdev Port int-br-tenant Interface int-br-tenant type: patch options: {peer=phy-br-tenant} Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex} Port int-br-sriov1 Interface int-br-sriov1 type: patch options: {peer=phy-br-sriov1} Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Port br-int Interface br-int type: internal Port int-br-sriov2 Interface int-br-sriov2 type: patch options: {peer=phy-br-sriov2} Port vhu4142a221-93 tag: 1 Interface vhu4142a221-93 type: dpdkvhostuserclient options: {vhost-server-path=\"/var/lib/vhost_sockets/vhu4142a221-93\"} ovs_version: \"2.13.2\"", "cat /proc/net/bonding/<bond_name>", "Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011) Bonding Mode: fault-tolerance (active-backup) Primary Slave: None Currently Active Slave: eno3v1 MII Status: up MII Polling Interval (ms): 0 Up Delay (ms): 0 Down Delay (ms): 0 Peer Notification Delay (ms): 0 Slave Interface: eno3v1 MII Status: up Speed: 10000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 4e:77:94:bd:38:d2 Slave queue ID: 0 Slave Interface: eno4v1 MII Status: up Speed: 10000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 4a:74:52:a7:aa:7c Slave queue ID: 0", "sudo ovs-appctl bond/show", "---- dpdkbond0 ---- bond_mode: balance-slb bond may use recirculation: no, Recirc-ID : -1 bond-hash-basis: 0 updelay: 0 ms downdelay: 0 ms next rebalance: 9491 ms lacp_status: off lacp_fallback_ab: false active slave mac: ce:ee:c7:58:8e:b2(dpdk1) slave dpdk0: enabled may_enable: true slave dpdk1: enabled active slave may_enable: true", "openstack aggregate create sriov_group openstack aggregate add host sriov_group compute-sriov-0.localdomain openstack aggregate set --property sriov=true sriov_group", "openstack flavor create <flavor> --ram <size_mb> --disk <size_gb> --vcpus <number>", "openstack flavor set --property sriov=true --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB <flavor>", "openstack flavor create <flavor_name> --ram <size_mb> --disk <size_gb> --vcpus <number>", "openstack network create <network_name> --provider-physical-network tenant --provider-network-type vlan --provider-segment <vlan_id> openstack subnet create <name> --network <network_name> --subnet-range <ip_address_cidr> --dhcp", "openstack port create --network <network_name> --vnic-type direct <port_name>", "openstack port create --network <network_name> --vnic-type direct-physical <port_name>", "openstack server create --flavor <flavor> --image <image_name> --nic port-id=<id> <instance_name>", "source ~/stackrc", "openstack overcloud roles generate -o roles_data_compute_ovshwol.yaml Controller Compute:ComputeOvsHwOffload", "openstack 
overcloud roles generate -o /home/stack/templates/ roles_data.yaml Controller ComputeOvsDpdk ComputeOvsDpdkSriov Compute:ComputeOvsHwOffload", "sudo openstack tripleo container image prepare --roles-file ~/templates/roles_data_compute_ovshwol.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml -e ~/containers-prepare-parameter.yaml --output-env-file=/home/stack/templates/overcloud_images.yaml", "lspci -nn -s <pci_device_address>", "3b:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [<vendor_id>: <product_id>] (rev 02)", "openstack baremetal introspection data save <baremetal_node_name> | jq '.inventory.interfaces[] | .name, .vendor, .product'", "source ~/stackrc", "parameter_defaults: NeutronOVSFirewallDriver: iptables_hybrid ComputeOvsHwOffloadParameters: IsolCpusList: 2-9,21-29,11-19,31-39 KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=128 intel_iommu=on iommu=pt\" OvsHwOffload: true TunedProfileName: \"cpu-partitioning\" NeutronBridgeMappings: - tenant:br-tenant NovaPCIPassthrough: - vendor_id: <vendor-id> product_id: <product-id> address: <address> physical_network: \"tenant\" - vendor_id: <vendor-id> product_id: <product-id> address: <address> physical_network: \"null\" NovaReservedHostMemory: 4096 NovaComputeCpuDedicatedSet: 1-9,21-29,11-19,31-39", "parameter_defaults: NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - NUMATopologyFilter - AggregateInstanceExtraSpecsFilter", "source ~/stackrc", "ComputeOvsHwOffloadParameters: IsolCpusList: 9-63,73-127 KernelArgs: default_hugepagesz=1GB hugepagesz=1G hugepages=100 amd_iommu=on iommu=pt numa_balancing=disable processor.max_cstate=0 isolcpus=9-63,73-127 NovaReservedHostMemory: 4096 NovaComputeCpuSharedSet: 0-8,64-72 NovaComputeCpuDedicatedSet: 9-63,73-127 TunedProfileName: \"cpu-partitioning\"", "ComputeOvsHwOffloadParameters: IsolCpusList: 9-63,73-127 KernelArgs: default_hugepagesz=1GB hugepagesz=1G hugepages=100 amd_iommu=on iommu=pt numa_balancing=disable processor.max_cstate=0 isolcpus=9-63,73-127 NovaReservedHostMemory: 4096 NovaComputeCpuSharedSet: 0-8,64-72 NovaComputeCpuDedicatedSet: 9-63,73-127 TunedProfileName: \"cpu-partitioning\" OvsHwOffload: true", "parameter_defaults: NeutronNetworkType: vlan NeutronNetworkVLANRanges: - tenant:22:22 - tenant:25:25 NeutronTunnelTypes: ''", "source ~/stackrc", "- name: ComputeOvsHwOffload ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml", "- name: ComputeOvsHwOffload ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml extra_vars: kernel_args: 'default_hugepagesz=1GB hugepagesz=1G hugepages=100 amd_iommu=on iommu=pt isolcpus=9-63,73-127' tuned_isolated_cores: '9-63,73-127' tuned_profile: 'cpu-partitioning' reboot_wait_timeout: 1800", "source ~/stackrc", "- type: sriov_pf name: enp196s0f0np0 mtu: 9000 numvfs: 16 use_dhcp: false defroute: false nm_controlled: true hotplug: true promisc: false link_mode: switchdev", "- name: ComputeOvsHwOffload count: 2 hostname_format: compute-%index% defaults: networks: - network: internal_api subnet: internal_api_subnet - network: tenant subnet: tenant_subnet - network: storage subnet: storage_subnet network_config: template: /home/stack/templates/single_nic_vlans.j2", "- type: ovs_bridge name: 
br-tenant mtu: 9000 members: - type: sriov_pf name: p7p1 numvfs: 5 mtu: 9000 primary: true promisc: true use_dhcp: false link_mode: switchdev", "source ~/stackrc", "openstack overcloud deploy --log-file overcloud_deployment.log --templates /usr/share/openstack-tripleo-heat-templates/ --stack overcloud [ -n /home/stack/templates/network_data.yaml \\ ] 1 [ -r /home/stack/templates/roles_data_compute_ovshwol.yaml \\ ] 2 -e /home/stack/templates/overcloud-networks-deployed.yaml -e /home/stack/templates/overcloud-vip-deployed.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/overcloud-images.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/ neutron-ovn-dvr-ha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/ neutron-ovn-sriov.yaml -e /home/stack/templates/ovshwol-overrides.yaml", "sudo devlink dev eswitch show pci/0000:03:00.0", "pci/0000:03:00.0: mode switchdev inline-mode none encap enable", "sudo devlink dev eswitch set pci/0000:03:00.0 mode switchdev", "openstack port create --network private --vnic-type=direct --binding-profile '{\"capabilities\": [\"switchdev\"]}' direct_port1 --disable-port-security", "sudo ethtool -K <device-name> hw-tc-offload on", "sudo ethtool -L enp3s0f0 combined 3", "openstack aggregate create sriov_group openstack aggregate add host sriov_group compute-sriov-0.localdomain openstack aggregate set --property sriov=true sriov_group", "openstack flavor create <flavor> --ram <size_mb> --disk <size_gb> --vcpus <number>", "openstack flavor set --property sriov=true --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB <flavor>", "openstack flavor create <flavor_name> --ram <size_mb> --disk <size_gb> --vcpus <number>", "openstack network create <network_name> --provider-physical-network tenant --provider-network-type vlan --provider-segment <vlan_id> openstack subnet create <name> --network <network_name> --subnet-range <ip_address_cidr> --dhcp", "openstack port create --network <network_name> --vnic-type direct --binding-profile '{\"capabilities\": [\"switchdev\"]} <port_name>", "openstack port create --network <network_name> --vnic-type direct-physical <port_name>", "openstack server create --flavor <flavor> --image <image_name> --nic port-id=<id> <instance_name>", "- type: ovs_bridge name: br-offload mtu: 9000 use_dhcp: false members: - type: linux_bond name: bond-pf bonding_options: \"mode=active-backup miimon=100\" members: - type: sriov_pf name: p5p1 numvfs: 3 primary: true promisc: true use_dhcp: false defroute: false link_mode: switchdev - type: sriov_pf name: p5p2 numvfs: 3 promisc: true use_dhcp: false defroute: false link_mode: switchdev - type: vlan vlan_id: get_param: TenantNetworkVlanID device: bond-pf addresses: - ip_netmask: get_param: TenantIpSubnet", "ethtool -k ens1f0 | grep tc-offload hw-tc-offload: on", "devlink dev eswitch show pci/USD(ethtool -i ens1f0 | grep bus-info | cut -d ':' -f 2,3,4 | awk '{USD1=USD1};1')", "ovs-vsctl get Open_vSwitch . 
other_config:hw-offload \"true\"", "cat /etc/udev/rules.d/80-persistent-os-net-config.rules", "This file is autogenerated by os-net-config SUBSYSTEM==\"net\", ACTION==\"add\", ATTR{phys_switch_id}!=\"\", ATTR{phys_port_name}==\"pf*vf*\", ENV{NM_UNMANAGED}=\"1\" SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", KERNELS==\"0000:65:00.0\", NAME=\"ens1f0\" SUBSYSTEM==\"net\", ACTION==\"add\", ATTR{phys_switch_id}==\"98039b7f9e48\", ATTR{phys_port_name}==\"pf0vf*\", IMPORT{program}=\"/etc/udev/rep-link-name.sh USDattr{phys_port_name}\", NAME=\"ens1f0_USDenv{NUMBER}\" SUBSYSTEM==\"net\", ACTION==\"add\", DRIVERS==\"?*\", KERNELS==\"0000:65:00.1\", NAME=\"ens1f1\" SUBSYSTEM==\"net\", ACTION==\"add\", ATTR{phys_switch_id}==\"98039b7f9e49\", ATTR{phys_port_name}==\"pf1vf*\", IMPORT{program}=\"/etc/udev/rep-link-name.sh USDattr{phys_port_name}\", NAME=\"ens1f1_USDenv{NUMBER}\"", "cl-bcmcmd l2 show", "mac=00:02:00:00:00:08 vlan=2000 GPORT=0x2 modid=0 port=2/xe1 mac=00:02:00:00:00:09 vlan=2000 GPORT=0x2 modid=0 port=2/xe1 Hit", "tc -s filter show dev p5p1_1 ingress", "... filter block 94 protocol ip pref 3 flower chain 5 filter block 94 protocol ip pref 3 flower chain 5 handle 0x2 eth_type ipv4 src_ip 172.0.0.1 ip_flags nofrag in_hw in_hw_count 1 action order 1: mirred (Egress Redirect to device eth4) stolen index 3 ref 1 bind 1 installed 364 sec used 0 sec Action statistics: Sent 253991716224 bytes 169534118 pkt (dropped 0, overlimits 0 requeues 0) Sent software 43711874200 bytes 30161170 pkt Sent hardware 210279842024 bytes 139372948 pkt backlog 0b 0p requeues 0 cookie 8beddad9a0430f0457e7e78db6e0af48 no_percpu", "[13232.860484] mlx5_core 0000:3b:00.0: mlx5_cmd_check:756:(pid 131368): SET_FLOW_TABLE_ENTRY(0x936) op_mod(0x0) failed, status bad parameter(0x3), syndrome (0x6b1266)", "0x6B1266 | set_flow_table_entry: pop vlan and forward to uplink is not allowed", "2020-01-31T06:22:11.257Z|00473|dpif_netlink(handler402)|ERR|failed to offload flow: Operation not supported: p6p1_5", "ovs-appctl vlog/set dpif_netlink:file:dbg Module name changed recently (check based on the version used ovs-appctl vlog/set netdev_tc_offloads:file:dbg [OR] ovs-appctl vlog/set netdev_offload_tc:file:dbg ovs-appctl vlog/set tc:file:dbg", "2020-01-31T06:22:11.218Z|00471|dpif_netlink(handler402)|DBG|system@ovs-system: put[create] ufid:61bd016e-eb89-44fc-a17e-958bc8e45fda recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(7),skb_mark(0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=fa:16:3e:d2:f5:f3,dst=fa:16:3e:c4:a3:eb),eth_type(0x0800),ipv4(src=10.1.1.8/0.0.0.0,dst=10.1.1.31/0.0.0.0,proto=1/0,tos=0/0x3,ttl=64/0,frag=no),icmp(type=0/0,code=0/0), actions:set(tunnel(tun_id=0x3d,src=10.10.141.107,dst=10.10.141.124,ttl=64,tp_dst=4789,flags(df|key))),6 2020-01-31T06:22:11.253Z|00472|netdev_tc_offloads(handler402)|DBG|offloading attribute pkt_mark isn't supported 2020-01-31T06:22:11.257Z|00473|dpif_netlink(handler402)|ERR|failed to offload flow: Operation not supported: p6p1_5", "./sysinfo-snapshot.py --asap --asap_tc --ibdiagnet --openstack", "ovs-appctl dpctl/dump-flows -m type=offloaded ovs-appctl dpctl/dump-flows -m tc filter show dev ens1_0 ingress tc -s filter show dev ens1_0 ingress tc monitor", "[tripleo-admin@compute-0 ~]USD ls -lh /sys/class/net/eno2/device/ | grep virtfn lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn0 -> ../0000:18:06.0 lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn1 -> ../0000:18:06.1 lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn2 -> ../0000:18:06.2 lrwxrwxrwx. 
1 root root 0 Apr 16 09:58 virtfn3 -> ../0000:18:06.3 lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn4 -> ../0000:18:06.4 lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn5 -> ../0000:18:06.5 lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn6 -> ../0000:18:06.6 lrwxrwxrwx. 1 root root 0 Apr 16 09:58 virtfn7 -> ../0000:18:06.7", "NovaPCIPassthrough: - physical_network: \"sriovnet2\" address: {\"domain\": \".*\", \"bus\": \"18\", \"slot\": \"06\", \"function\": \"[1-3]\"} - physical_network: \"sriovnet2\" address: {\"domain\": \".*\", \"bus\": \"18\", \"slot\": \"06\", \"function\": \"[5]\"} - physical_network: \"sriovnet2\" address: {\"domain\": \".*\", \"bus\": \"18\", \"slot\": \"06\", \"function\": \"[7]\"}", "The MTU value of 9000 becomes 9216 bytes. The MTU value of 2000 becomes 2048 bytes.", "Memory required for 9000 MTU = (9216 + 800) * (4096*64) = 2625634304 Memory required for 2000 MTU = (2048 + 800) * (4096*64) = 746586112", "2625634304 + 746586112 + 536870912 = 3909091328 bytes.", "3909091328 / (1024*1024) = 3728 MB.", "3724 MB rounds up to 4096 MB.", "OvsDpdkSocketMemory: \"4096,1024\"", "The MTU value of 2000 becomes 2048 bytes.", "Memory required for 2000 MTU = (2048 + 800) * (4096*64) = 746586112", "746586112 + 536870912 = 1283457024 bytes.", "1283457024 / (1024*1024) = 1224 MB.", "1224 MB rounds up to 2048 MB.", "OvsDpdkSocketMemory: \"2048,1024\"", "lshw -class processor | grep pdpe1gb", "parameter_defaults: ComputeOvsDpdkSriovParameters: DdpPackage: \"ddp-comms\"", "parameter_defaults: ComputeOvsDpdkSriovParameters: KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=48 intel_iommu=on iommu=pt isolcpus=1-11,13-23\" IsolCpusList: \"1-11,13-23\" OvsDpdkSocketMemory: \"4096\" OvsDpdkMemoryChannels: \"4\" OvsDpdkExtra: \"-a 0000:00:00.0\" NovaReservedHostMemory: 4096 OvsPmdCoreList: \"1,13,2,14,3,15\" OvsDpdkCoreList: \"0,12\" NovaComputeCpuDedicatedSet: [ 4-11 , 16-23 ] NovaComputeCpuSharedSet: [ 0 , 12 ]", "source ~/stackrc", "cat <<EOF > /home/stack/cli-overcloud-tuned-maxpower-conf.yaml {% raw %} --- #/home/stack/cli-overcloud-tuned-maxpower-conf.yaml - name: Overcloud Node set tuned power state hosts: compute-0 compute-1 any_errors_fatal: true gather_facts: false pre_tasks: - name: Wait for provisioned nodes to boot wait_for_connection: timeout: 600 delay: 10 connection: local tasks: - name: Check the max power state for this system become: true block: - name: Get power states shell: \"for s in /sys/devices/system/cpu/cpu2/cpuidle/*; do grep . 
USDs/{name,latency}; done\" register: _list_of_power_states - name: Print available power states debug: msg: \"{{ _list_of_power_states.stdout.split('\\n') }}\" - name: Check for active tuned power-save profile stat: path: \"/etc/tuned/active_profile\" register: _active_profile - name: Check the profile slurp: path: \"/etc/tuned/active_profile\" when: _active_profile.stat.exists register: _active_profile_name - name: Print states debug: var: (_active_profile_name.content|b64decode|string) - name: Check the max power state for this system block: - name: Check if the cstate config is present in the conf file lineinfile: dest: /etc/tuned/cpu-partitioning-powersave-variables.conf regexp: '^max_power_state' line: 'max_power_state=cstate.name:C6' register: _cstate_entry_check {% endraw %} EOF", "OvsDpdkSocketMemory: \"1024,1024\"", "OvsPmdCoreList: \"2,3,10,11\" NovaComputeCpuDedicatedSet: \"4,5,6,7,12,13,14,15\"", "OvsPmdCoreList: \"2,3,4,5,10,11\" NovaComputeCpuDedicatedSet: \"6,7,12,13,14,15\"", "OvsPmdCoreList: \"2,3,10,11\" NovaComputeCpuDedicatedSet: \"4,5,6,7,12,13,14,15\"", "OvsPmdCoreList: \"2,3,10,11,12,13\" NovaComputeCpuDedicatedSet: \"4,5,6,7,14,15\"", "OvsPmdCoreList: \"2,3,4,5,10,11,12,13\" NovaComputeCpuDedicatedSet: \"6,7,14,15\"", "source ~/stackrc", "openstack overcloud roles generate -o /home/stack/templates/roles_data_compute_ovsdpdk.yaml Controller ComputeOvsDpdk", "openstack overcloud roles generate -o /home/stack/templates/ roles_data.yaml Controller ComputeOvsDpdk ComputeOvsDpdkSriov Compute:ComputeOvsHwOffload", "TunedProfileName: \"cpu-partitioning-powersave\"", "sed -i 's/TunedProfileName:.*USD/TunedProfileName: \"cpu-partitioning-powersave\"/' /home/stack/templates/roles_data_compute_ovsdpdk.yaml", "sudo openstack tripleo container image prepare --roles-file ~/templates/roles_data_compute_ovsdpdk.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dpdk.yaml -e ~/containers-prepare-parameter.yaml --output-env-file=/home/stack/templates/overcloud_images.yaml", "source ~/stackrc", "parameter_defaults: NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - AggregateInstanceExtraSpecsFilter", "parameter_defaults: ComputeOvsDpdkParameters: NeutronBridgeMappings: \"dpdk:br-dpdk\" KernelArgs: \"default_hugepagesz=1GB hugepagesz=1GB hugepages=64 iommu=pt intel_iommu=on isolcpus=2,4,6,8,10,12,14,16,18,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,23,25,27,29,31,33,35,37,39\" TunedProfileName: \"cpu-partitioning\" IsolCpusList: \"2,4,6,8,10,12,14,16,18,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,23,25,27,29,31,33,35,37,39\" NovaReservedHostMemory: 4096 OvsDpdkSocketMemory: \"4096,4096\" OvsDpdkMemoryChannels: \"4\" OvsDpdkCoreList: \"0,20,1,21\" NovaComputeCpuDedicatedSet: \"4,6,8,10,12,14,16,18,24,26,28,30,32,34,36,38,5,7,9,11,13,15,17,19,27,29,31,33,35,37,39\" NovaComputeCpuSharedSet: \"0,20,1,21\" OvsPmdCoreList: \"2,22,3,23\"", "source ~/stackrc", "parameter_defaults: NeutronOVSFirewallDriver: openvswitch", "openstack port set --no-security-group --disable-port-security USD{PORT}", "source ~/stackrc", "- name: ComputeOvsDpdk ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml", "- name: ComputeOvsDpdk ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml extra_vars: kernel_args: 
'default_hugepagesz=1GB hugepagesz=1GB hugepages=64 iommu=pt intel_iommu=on isolcpus=2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39' tuned_isolated_cores: '2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39' tuned_profile: 'cpu-partitioning' reboot_wait_timeout: 1800 - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-openvswitch-dpdk.yaml extra_vars: pmd: '2,22,3,23' memory_channels: '4' socket_mem: '4096,4096' pmd_auto_lb: true pmd_load_threshold: \"70\" pmd_improvement_threshold: \"25\" pmd_rebal_interval: \"2\"", "- name: ComputeOvsDpdk ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml extra_vars: reboot_wait_timeout: 600 kernel_args: default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=1-11,13-23 tuned_profile: cpu-partitioning tuned_isolated_cores: 1-11,13-23 - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-openvswitch-dpdk.yaml extra_vars: memory_channels: 4 lcore: 0,12 pmd: 1,13,2,14,3,15 socket_mem: 4096 dpdk_extra: -a 0000:00:00.0 disable_emc: false enable_tso: false revalidator: ' handler: ' pmd_auto_lb: false pmd_load_threshold: ' pmd_improvement_threshold: ' pmd_rebal_interval: '' nova_postcopy: true", "tuned_profile: \"cpu-partitioning-powersave\" - playbook: /home/stack/ospd-17.1-geneve-ovn-dpdk-sriov-ctlplane-dataplane-bonding-hybrid/playbooks/cli-overcloud-tuned-maxpower-conf.yaml - playbook: /home/stack/ospd-17.1-geneve-ovn-dpdk-sriov-ctlplane-dataplane-bonding-hybrid/playbooks/overcloud-nm-config.yaml extra_vars: reboot_wait_timeout: 900 pmd_sleep_max: \"50\"", "- name: ComputeOvsDpdk ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml extra_vars: kernel_args: default_hugepagesz=1GB hugepagesz=1GB hugepages=64 iommu=pt intel_iommu=on isolcpus=2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39 tuned_isolated_cores: 2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39 tuned_profile: cpu-partitioning reboot_wait_timeout: 1800 - playbook: /home/stack/ospd-17.1-geneve-ovn-dpdk-sriov-ctlplane-dataplane-bonding-hybrid/playbooks/cli-overcloud-tuned-maxpower-conf.yaml - playbook: /home/stack/ospd-17.1-geneve-ovn-dpdk-sriov-ctlplane-dataplane-bonding-hybrid/playbooks/overcloud-nm-config.yaml extra_vars: reboot_wait_timeout: 900 - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-openvswitch-dpdk.yaml extra_vars: pmd: 2,22,3,23 memory_channels: 4 socket_mem: 4096,4096 pmd_auto_lb: true pmd_load_threshold: \"70\" pmd_improvement_threshold: \"25\" pmd_rebal_interval: \"2\" pmd_sleep_max: \"50\"", "source ~/stackrc", "- type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 1 ovs_extra: - set Interface dpdk0 options:n_rxq_desc=4096 - set Interface dpdk0 options:n_txq_desc=4096 - set Interface dpdk1 options:n_rxq_desc=4096 - set Interface dpdk1 options:n_txq_desc=4096 members: - type: ovs_dpdk_port name: dpdk0 driver: vfio-pci members: - type: interface name: nic5 - type: ovs_dpdk_port name: dpdk1 driver: vfio-pci members: - type: interface name: nic6", "- name: ComputeOvsDpdk count: 2 hostname_format: compute-%index% defaults: networks: - network: internal_api subnet: internal_api_subnet - network: tenant subnet: tenant_subnet - network: storage subnet: storage_subnet network_config: template: 
/home/stack/templates/single_nic_vlans.j2", "- type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 1 ovs_extra: - set Interface dpdk0 options:n_rxq_desc=4096 - set Interface dpdk0 options:n_txq_desc=4096 - set Interface dpdk1 options:n_rxq_desc=4096 - set Interface dpdk1 options:n_txq_desc=4096 members: - type: ovs_dpdk_port name: dpdk0 driver: vfio-pci members: - type: interface name: nic5 - type: ovs_dpdk_port name: dpdk1 driver: vfio-pci members: - type: interface name: nic6", "source ~/stackrc", "parameter_defaults: # MTU global configuration NeutronGlobalPhysnetMtu: 9000", "- type: ovs_bridge name: br-link0 use_dhcp: false members: - type: interface name: nic3 mtu: 9000", "- type: ovs_user_bridge name: br-link0 use_dhcp: false members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 2 members: - type: ovs_dpdk_port name: dpdk0 mtu: 9000 members: - type: interface name: nic4 - type: ovs_dpdk_port name: dpdk1 mtu: 9000 members: - type: interface name: nic5", "source ~/stackrc", "- type: ovs_user_bridge name: br-link0 use_dhcp: false members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 2 members: - type: ovs_dpdk_port name: dpdk0 mtu: 9000 members: - type: interface name: nic4 - type: ovs_dpdk_port name: dpdk1 mtu: 9000 members: - type: interface name: nic5", "source ~/stackrc", "ansible_playbooks: ... - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-openvswitch-dpdk.yaml extra_vars: ... pmd_auto_lb: true pmd_load_threshold: \"70\" pmd_improvement_threshold: \"25\" pmd_rebal_interval: \"2\"", "parameter_merge_strategies: ComputeOvsDpdkSriovParameters:merge ... parameter_defaults: ComputeOvsDpdkSriovParameters: ... OvsPmdAutoLb: true OvsPmdLoadThreshold: 70 OvsPmdImprovementThreshold: 25 OvsPmdRebalInterval: 2", "source ~/stackrc", "openstack overcloud deploy --log-file overcloud_deployment.log --templates /usr/share/openstack-tripleo-heat-templates/ --stack overcloud -n /home/stack/templates/network_data.yaml -r /home/stack/templates/roles_data_compute_ovsdpdk.yaml -e /home/stack/templates/overcloud-networks-deployed.yaml -e /home/stack/templates/overcloud-vip-deployed.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/overcloud-images.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/ neutron-ovn-dvr-ha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/ neutron-ovn-dpdk.yaml -e /home/stack/templates/ovs-dpdk-overrides.yaml", "openstack port set --no-security-group --disable-port-security USD{PORT}", "openstack aggregate create dpdk_group # openstack aggregate add host dpdk_group [compute-host] # openstack aggregate set --property dpdk=true dpdk_group", "openstack flavor create <flavor> --ram <MB> --disk <GB> --vcpus <#>", "openstack flavor set <flavor> --property dpdk=true --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB --property hw:emulator_threads_policy=isolate", "openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment <VLAN-ID> openstack subnet create subnet1 --network net1 --subnet-range 192.0.2.0/24 --dhcp", "openstack image set --property hw_vif_multiqueue_enabled=true <image>", "openstack server create --flavor <flavor> --image <glance image> --nic net-id=<network ID> <server_name>", "ovs-vsctl list bridge br0 _uuid : bdce0825-e263-4d15-b256-f01222df96f3 auto_attach : [] controller : [] datapath_id : \"00002608cebd154d\" datapath_type : netdev datapath_version : \"<built-in>\" 
external_ids : {} fail_mode : [] flood_vlans : [] flow_tables : {} ipfix : [] mcast_snooping_enable: false mirrors : [] name : \"br0\" netflow : [] other_config : {} ports : [52725b91-de7f-41e7-bb49-3b7e50354138] protocols : [] rstp_enable : false rstp_status : {} sflow : [] status : {} stp_enable : false", "less /var/log/containers/neutron/openvswitch-agent.log", "cat /sys/devices/system/cpu/cpu4/topology/thread_siblings_list 4,20", "ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x100010", "tuna -t ovs-vswitchd -CP thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 3161 OTHER 0 6 765023 614 ovs-vswitchd 3219 OTHER 0 6 1 0 handler24 3220 OTHER 0 6 1 0 handler21 3221 OTHER 0 6 1 0 handler22 3222 OTHER 0 6 1 0 handler23 3223 OTHER 0 6 1 0 handler25 3224 OTHER 0 6 1 0 handler26 3225 OTHER 0 6 1 0 handler27 3226 OTHER 0 6 1 0 handler28 3227 OTHER 0 6 2 0 handler31 3228 OTHER 0 6 2 4 handler30 3229 OTHER 0 6 2 5 handler32 3230 OTHER 0 6 953538 431 revalidator29 3231 OTHER 0 6 1424258 976 revalidator33 3232 OTHER 0 6 1424693 836 revalidator34 3233 OTHER 0 6 951678 503 revalidator36 3234 OTHER 0 6 1425128 498 revalidator35 *3235 OTHER 0 4 151123 51 pmd37* *3236 OTHER 0 20 298967 48 pmd38* 3164 OTHER 0 6 47575 0 dpdk_watchdog3 3165 OTHER 0 6 237634 0 vhost_thread1 3166 OTHER 0 6 3665 0 urcu2", "parameter_defaults: ComputeOvsDpdkParameters: NovaComputeCpuSharedSet: \"0-1,16-17\" NovaComputeCpuDedicatedSet: \"2-15,18-31\"", "openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <vcpus> <flavor>", "openstack flavor set <flavor> --property hw:emulator_threads_policy=share", "openstack server show <instance_id>", "ssh tripleo-admin@compute-1 [compute-1]USD sudo virsh dumpxml instance-00001 | grep `'emulatorpin cpuset'`", "parameter_defaults: NeutronPhysicalDevMappings: - sriov2:p5p2", "parameter_defaults: NeutronPhysicalDevMappings: - sriov2:p5p2 NovaPCIPassthrough: - vendor_id: \"8086\" product_id: \"1572\" physical_network: \"sriov2\" trusted: \"true\"", "openstack network create trusted_vf_network --provider-network-type vlan --provider-segment 111 --provider-physical-network sriov2 --external --disable-port-security", "openstack subnet create --network trusted_vf_network --ip-version 4 --subnet-range 192.168.111.0/24 --no-dhcp subnet-trusted_vf_network", "openstack port create --network sriov111 --vnic-type direct --binding-profile trusted=true sriov111_port_trusted", "openstack server create --image rhel --flavor dpdk --network internal --port trusted_vf_network_port_trusted --config-drive True --wait rhel-dpdk-sriov_trusted", "ip link 7: p5p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether b4:96:91:1c:40:fa brd ff:ff:ff:ff:ff:ff vf 6 MAC fa:16:3e:b8:91:c2, vlan 111, spoof checking off, link-state auto, trust on, query_rss off vf 7 MAC fa:16:3e:84:cf:c8, vlan 111, spoof checking off, link-state auto, trust off, query_rss off", "source ~/stackrc", "parameter_defaults: NovaLibvirtRxQueueSize: 1024 NovaLibvirtTxQueueSize: 1024", "openstack overcloud deploy --templates -e <other_environment_files> -e /home/stack/my_tx-rx_queue_sizes.yaml", "egrep \"^[rt]x_queue_size\" /var/lib/config-data/puppet-generated/ nova_libvirt/etc/nova/nova.conf", "rx_queue_size=1024 tx_queue_size=1024", "openstack server show testvm-queue-sizes -c OS-EXT-SRV-ATTR: hypervisor_hostname -c OS-EXT-SRV-ATTR:instance_name", "+-------------------------------------+------------------------------------+ | Field | Value | 
+-------------------------------------+------------------------------------+ | OS-EXT-SRV-ATTR:hypervisor_hostname | overcloud-novacompute-1.sales | | OS-EXT-SRV-ATTR:instance_name | instance-00000059 | +-------------------------------------+------------------------------------+", "podman exec nova_libvirt virsh dumpxml instance-00000059", "<interface type='vhostuser'> <mac address='56:48:4f:4d:5e:6f'/> <source type='unix' path='/tmp/vhost-user1' mode='server'/> <model type='virtio'/> <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024' /> <address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/> </interface>", "parameter_defaults: NeutronPhysnetNUMANodesMapping: {<physnet_name>: [<NUMA_NODE>]} NeutronTunnelNUMANodes: <NUMA_NODE>,<NUMA_NODE>", "parameter_defaults: NeutronBridgeMappings: - tenant:br-link0 NeutronPhysnetNUMANodesMapping: {tenant: [1], mgmt: [0,1]} NeutronTunnelNUMANodes: 0", "ethtool -i eno2 bus-info: 0000:18:00.1 cat /sys/devices/pci0000:16/0000:16:02.0/0000:18:00.1/numa_node 0", "NeutronBridgeMappings: 'physnet1:br-physnet1' NeutronPhysnetNUMANodesMapping: {physnet1: [0] } - type: ovs_user_bridge name: br-physnet1 mtu: 9000 members: - type: ovs_dpdk_port name: dpdk2 members: - type: interface name: eno2", "[neutron_physnet_tenant] numa_nodes=1 [neutron_tunnel] numa_nodes=1", "lscpu", "[osd] osd_numa_node = 0 # 1 osd_memory_target_autotune = true # 2 [mgr] mgr/cephadm/autotune_memory_target_ratio = 0.2 # 3", "parameter_defaults: ComputeHCIParameters: KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=240 intel_iommu=on iommu=pt # 1 isolcpus=2,46,3,47,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87\" TunedProfileName: \"cpu-partitioning\" IsolCpusList: # 2 \"2,46,3,47,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,49,51, 53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87\" VhostuserSocketGroup: hugetlbfs OvsDpdkSocketMemory: \"4096,4096\" # 3 OvsDpdkMemoryChannels: \"4\" OvsPmdCoreList: \"2,46,3,47\" # 4", "parameter_defaults: ComputeHCIExtraConfig: nova::cpu_allocation_ratio: 16 # 2 NovaReservedHugePages: # 1 - node:0,size:1GB,count:4 - node:1,size:1GB,count:4 NovaReservedHostMemory: 123904 # 2 # All left over cpus from NUMA-1 NovaComputeCpuDedicatedSet: # 3 ['5','7','9','11','13','15','17','19','21','23','25','27','29','31','33','35','37','39','41','43','49','51','| 53','55','57','59','61','63','65','67','69','71','73','75','77','79','81','83','85','87", "openstack overcloud roles generate -o ~/<templates>/roles_data.yaml Controller ComputeHCIOvsDpdk", "openstack overcloud ceph deploy --config initial-ceph.conf", "openstack overcloud deploy --templates --timeout 360 -r ~/<templates>/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ cephadm/cephadm-rbd-only.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-ovs-dpdk.yaml -e ~/<templates>/<custom environment file>", "ethtool -T p5p1 Time stamping parameters for p5p1: Capabilities: hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE) software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE) hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE) software-receive (SOF_TIMESTAMPING_RX_SOFTWARE) software-system-clock (SOF_TIMESTAMPING_SOFTWARE) hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE) PTP Hardware Clock: 6 Hardware Transmit Timestamp Modes: off (HWTSTAMP_TX_OFF) on 
(HWTSTAMP_TX_ON) Hardware Receive Filter Modes: none (HWTSTAMP_FILTER_NONE) ptpv1-l4-sync (HWTSTAMP_FILTER_PTP_V1_L4_SYNC) ptpv1-l4-delay-req (HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) ptpv2-event (HWTSTAMP_FILTER_PTP_V2_EVENT)", "#- OS::TripleO::Services::Timesync - OS::TripleO::Services::TimeMaster", "#Example ComputeSriovParameters: PTPInterfaces: '0:eno1,1:eno2' PTPMessageTransport: 'UDPv4'", "openstack overcloud deploy --templates ... -e <existing_overcloud_environment_files> -e <new_environment_file1> -e <new_environment_file2> ...", "phc_ctl <clock_name> get phc_ctl <clock_name> cmp", "cat /etc/timemaster.conf Configuration file for timemaster #[ntp_server ntp-server.local] #minpoll 4 #maxpoll 4 [ptp_domain 0] interfaces eno1 #ptp4l_setting network_transport l2 #delay 10e-6 [timemaster] ntp_program chronyd include /etc/chrony.conf server clock.redhat.com iburst minpoll 6 maxpoll 10 [ntp.conf] includefile /etc/ntp.conf includefile /etc/ptp4l.conf network_transport L2 [chronyd] path /usr/sbin/chronyd [ntpd] path /usr/sbin/ntpd options -u ntp:ntp -g [phc2sys] path /usr/sbin/phc2sys #options -w [ptp4l] path /usr/sbin/ptp4l #options -2 -i eno1", "systemctl status timemaster ● timemaster.service - Synchronize system clock to NTP and PTP time sources Loaded: loaded (/usr/lib/systemd/system/timemaster.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2020-08-25 19:10:18 UTC; 2min 6s ago Main PID: 2573 (timemaster) Tasks: 6 (limit: 357097) Memory: 5.1M CGroup: /system.slice/timemaster.service ├─2573 /usr/sbin/timemaster -f /etc/timemaster.conf ├─2577 /usr/sbin/chronyd -n -f /var/run/timemaster/chrony.conf ├─2582 /usr/sbin/ptp4l -l 5 -f /var/run/timemaster/ptp4l.0.conf -H -i eno1 ├─2583 /usr/sbin/phc2sys -l 5 -a -r -R 1.00 -z /var/run/timemaster/ptp4l.0.socket -t [0:eno1] -n 0 -E ntpshm -M 0 ├─2587 /usr/sbin/ptp4l -l 5 -f /var/run/timemaster/ptp4l.1.conf -H -i eno2 └─2588 /usr/sbin/phc2sys -l 5 -a -r -R 1.00 -z /var/run/timemaster/ptp4l.1.socket -t [0:eno2] -n 0 -E ntpshm -M 1 Aug 25 19:11:53 computesriov-0 ptp4l[2587]: [152.562] [0:eno2] selected local clock e4434b.fffe.4a0c24 as best master", "(undercloud) [stack@undercloud-0 ~]USD sudo dnf install libguestfs-tools", "sudo systemctl disable --now iscsid.socket", "(undercloud) [stack@undercloud-0 ~]USD tar -xf /usr/share/rhosp-director-images/overcloud-hardened-uefi-full-17.1.x86_64.tar (undercloud) [stack@undercloud-0 ~]USD tar -xf /usr/share/rhosp-director-images/ironic-python-agent-17.1.x86_64.tar", "(undercloud) [stack@undercloud-0 ~]USD cp overcloud-hardened-uefi-full.qcow2 overcloud-realtime-compute.qcow2", "virt-customize -a overcloud-realtime-compute.qcow2 --run-command 'subscription-manager register --username=[username] --password=[password]' subscription-manager release --set 9.0", "sudo subscription-manager list --all --available | less virt-customize -a overcloud-realtime-compute.qcow2 --run-command 'subscription-manager attach --pool [pool-ID]'", "virt-customize -a overcloud-realtime-compute.qcow2 --run-command 'sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=ansible-2.9-for-rhel-9-x86_64-rpms --enable=rhel-9-for-x86_64-nfv-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms'", "(undercloud) [stack@undercloud-0 ~]USD cat <<'EOF' > rt.sh #!/bin/bash set -eux dnf -v -y --setopt=protected_packages= erase kernel.USD(uname -m) dnf -v -y install kernel-rt kernel-rt-kvm 
tuned-profiles-nfv-host grubby --set-default /boot/vmlinuz*rt* EOF", "(undercloud) [stack@undercloud-0 ~]USD virt-customize -a overcloud-realtime-compute.qcow2 -v --run rt.sh 2>&1 | tee virt-customize.log", "(undercloud) [stack@undercloud-0 ~]USD cat virt-customize.log | grep Verifying Verifying : kernel-3.10.0-957.el7.x86_64 1/1 Verifying : 10:qemu-kvm-tools-rhev-2.12.0-18.el7_6.1.x86_64 1/8 Verifying : tuned-profiles-realtime-2.10.0-6.el7_6.3.noarch 2/8 Verifying : linux-firmware-20180911-69.git85c5d90.el7.noarch 3/8 Verifying : tuned-profiles-nfv-host-2.10.0-6.el7_6.3.noarch 4/8 Verifying : kernel-rt-kvm-3.10.0-957.10.1.rt56.921.el7.x86_64 5/8 Verifying : tuna-0.13-6.el7.noarch 6/8 Verifying : kernel-rt-3.10.0-957.10.1.rt56.921.el7.x86_64 7/8 Verifying : rt-setup-2.0-6.el7.x86_64 8/8", "(undercloud) [stack@undercloud-0 ~]USD virt-customize -a overcloud-realtime-compute.qcow2 --selinux-relabel", "(undercloud) [stack@undercloud-0 ~]USD mkdir image (undercloud) [stack@undercloud-0 ~]USD guestmount -a overcloud-realtime-compute.qcow2 -i --ro image (undercloud) [stack@undercloud-0 ~]USD cp image/boot/vmlinuz-3.10.0-862.rt56.804.el7.x86_64 ./overcloud-realtime-compute.vmlinuz (undercloud) [stack@undercloud-0 ~]USD cp image/boot/initramfs-3.10.0-862.rt56.804.el7.x86_64.img ./overcloud-realtime-compute.initrd (undercloud) [stack@undercloud-0 ~]USD guestunmount image", "(undercloud) [stack@undercloud-0 ~]USD openstack overcloud image upload --update-existing --os-image-name overcloud-realtime-compute.qcow2", "[stack@director ~]USD source ~/stackrc", "(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_rt.yaml ComputeRealTime Compute Controller", "################################################### Role: ComputeRealTime # ################################################### - name: ComputeRealTime description: | Real Time Compute Node role CountDefault: 1 # Create external Neutron bridge tags: - compute - external_bridge networks: InternalApi: subnet: internal_api_subnet Tenant: subnet: tenant_subnet Storage: subnet: storage_subnet HostnameFormatDefault: '%stackname%-computert-%index%' deprecated_nic_config_name: compute-rt.yaml", "(undercloud)USD openstack overcloud node introspect --all-manageable --provide", "(undercloud)USD openstack baremetal node set --resource-class baremetal.RTCOMPUTE <node>", "- name: Controller count: 3 - name: Compute count: 3 - name: ComputeRealTime count: 1 defaults: resource_class: baremetal.RTCOMPUTE network_config: template: /home/stack/templates/nic-config/<role_topology_file>", "RealTime KVM fix until BZ #2122949 is closed- - name: Fix RT Kernel hosts: allovercloud any_errors_fatal: true gather_facts: false vars: reboot_wait_timeout: 900 pre_tasks: - name: Wait for provisioned nodes to boot wait_for_connection: timeout: 600 delay: 10 tasks: - name: Fix bootloader entry become: true shell: |- set -eux new_entry=USD(grep saved_entry= /boot/grub2/grubenv | sed -e s/saved_entry=//) source /etc/default/grub sed -i \"s/options.*/options root=USDGRUB_DEVICE ro USDGRUB_CMDLINE_LINUX USDGRUB_CMDLINE_LINUX_DEFAULT/\" /boot/loader/entries/USD(</etc/machine-id)USDnew_entry.conf cp -f /boot/grub2/grubenv /boot/efi/EFI/redhat/grubenv post_tasks: - name: Configure reboot after new kernel become: true reboot: reboot_timeout: \"{{ reboot_wait_timeout }}\" when: reboot_wait_timeout is defined", "- name: ComputeOvsDpdkSriovRT ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml extra_vars: 
kernel_args: \"default_hugepagesz=1GB hugepagesz=1G hugepages=64 iommu=pt intel_iommu=on tsx=off isolcpus=2-19,22-39\" reboot_wait_timeout: 900 tuned_profile: \"cpu-partitioning\" tuned_isolated_cores: \"2-19,22-39\" defer_reboot: true - playbook: /home/stack/templates/fix_rt_kernel.yaml extra_vars: reboot_wait_timeout: 1800", "(undercloud)USD openstack overcloud node provision [--stack <stack> \\ ] [--network-config \\] --output <deployment_file> /home/stack/templates/overcloud-baremetal-deploy.yaml", "(undercloud)USD watch openstack baremetal node list", "parameter_defaults: ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2 ComputeAMDSEVNetworkConfigTemplate: /home/stack/templates/nic-configs/<rt_compute>.j2 ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2", "(undercloud)USD openstack overcloud deploy --templates -r /home/stack/templates/roles_data_rt.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/node-info.yaml -e [your environment files] -e /home/stack/templates/compute-real-time.yaml", "NeutronTunnelTypes: 'vxlan' NeutronNetworkType: 'vxlan,vlan'", "The OVS logical->physical bridge mappings to use. NeutronBridgeMappings: - dpdk-mgmt:br-link0", "########################## # OVS DPDK configuration # ########################## ComputeOvsDpdkSriovParameters: KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=2-19,22-39\" TunedProfileName: \"cpu-partitioning\" IsolCpusList: \"2-19,22-39\" NovaComputeCpuDedicatedSet: ['4-19,24-39'] NovaReservedHostMemory: 4096 OvsDpdkSocketMemory: \"3072,1024\" OvsDpdkMemoryChannels: \"4\" OvsPmdCoreList: \"2,22,3,23\" NovaComputeCpuSharedSet: [0,20,1,21] NovaLibvirtRxQueueSize: 1024 NovaLibvirtTxQueueSize: 1024", "NovaPCIPassthrough: - vendor_id: \"8086\" product_id: \"1528\" address: \"0000:06:00.0\" trusted: \"true\" physical_network: \"sriov-1\" - vendor_id: \"8086\" product_id: \"1528\" address: \"0000:06:00.1\" trusted: \"true\" physical_network: \"sriov-2\"", "openstack flavor create r1.small --id 99 --ram 4096 --disk 20 --vcpus 4 openstack flavor set --property hw:cpu_policy=dedicated 99 openstack flavor set --property hw:cpu_realtime=yes 99 openstack flavor set --property hw:mem_page_size=1GB 99 openstack flavor set --property hw:cpu_realtime_mask=\"^0-1\" 99 openstack flavor set --property hw:cpu_emulator_threads=isolate 99", "openstack server create --image <rhel> --flavor r1.small --nic net-id=<dpdk-net> test-rt", "virsh dumpxml <instance-id> | grep vcpu -A1 <vcpu placement='static'>4</vcpu> <cputune> <vcpupin vcpu='0' cpuset='1'/> <vcpupin vcpu='1' cpuset='3'/> <vcpupin vcpu='2' cpuset='5'/> <vcpupin vcpu='3' cpuset='7'/> <emulatorpin cpuset='0-1'/> <vcpusched vcpus='2-3' scheduler='fifo' priority='1'/> </cputune>", "ServicesDefault: - OS::TripleO::Services::Tuned", "NeutronTunnelTypes: 'vxlan' NeutronNetworkType: 'vxlan,vlan'", "The OVS logical->physical bridge mappings to use. 
NeutronBridgeMappings: - dpdk-mgmt:br-link0", "########################## # OVS DPDK configuration # ########################## ComputeOvsDpdkSriovParameters: KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=2-19,22-39\" TunedProfileName: \"cpu-partitioning\" IsolCpusList: \"2-19,22-39\" NovaComputeCpuDedicatedSet: ['4-19,24-39'] NovaReservedHostMemory: 4096 OvsDpdkSocketMemory: \"3072,1024\" OvsDpdkMemoryChannels: \"4\" OvsPmdCoreList: \"2,22,3,23\" NovaComputeCpuSharedSet: [0,20,1,21] NovaLibvirtRxQueueSize: 1024 NovaLibvirtTxQueueSize: 1024", "NovaPCIPassthrough: - vendor_id: \"8086\" product_id: \"1528\" address: \"0000:06:00.0\" trusted: \"true\" physical_network: \"sriov-1\" - vendor_id: \"8086\" product_id: \"1528\" address: \"0000:06:00.1\" trusted: \"true\" physical_network: \"sriov-2\"", "- type: linux_bond name: bond_api bonding_options: \"mode=active-backup\" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: nic2 primary: true", "- type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: InternalApiIpSubnet - type: vlan vlan_id: get_param: StorageNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: StorageIpSubnet - type: vlan vlan_id: get_param: StorageMgmtNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: StorageMgmtIpSubnet - type: vlan vlan_id: get_param: ExternalNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: ExternalIpSubnet routes: - default: true next_hop: get_param: ExternalInterfaceDefaultRoute", "- type: ovs_bridge name: br-link0 use_dhcp: false mtu: 9000 members: - type: interface name: nic3 mtu: 9000 - type: vlan vlan_id: get_param: TenantNetworkVlanID mtu: 9000 addresses: - ip_netmask: get_param: TenantIpSubnet", "- type: linux_bond name: bond_api bonding_options: \"mode=active-backup\" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: nic3 primary: true - type: interface name: nic4", "- type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: InternalApiIpSubnet - type: vlan vlan_id: get_param: StorageNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: StorageIpSubnet", "- type: ovs_user_bridge name: br-link0 use_dhcp: false ovs_extra: - str_replace: template: set port br-link0 tag=_VLAN_TAG_ params: _VLAN_TAG_: get_param: TenantNetworkVlanID addresses: - ip_netmask: get_param: TenantIpSubnet members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 2 members: - type: ovs_dpdk_port name: dpdk0 members: - type: interface name: nic7 - type: ovs_dpdk_port name: dpdk1 members: - type: interface name: nic8", "openstack overcloud roles generate -o roles_data.yaml Controller ComputeHCIOvsDpdkSriov", "############################################################################### File generated by TripleO ############################################################################### ############################################################################### Role: Controller # ############################################################################### - name: Controller description: | Controller role that has all the controller services loaded and handles Database, Messaging and Network functions. 
CountDefault: 1 tags: - primary - controller networks: External: subnet: external_subnet InternalApi: subnet: internal_api_subnet Storage: subnet: storage_subnet StorageMgmt: subnet: storage_mgmt_subnet Tenant: subnet: tenant_subnet # For systems with both IPv4 and IPv6, you may specify a gateway network for # each, such as ['ControlPlane', 'External'] default_route_networks: ['External'] HostnameFormatDefault: '%stackname%-controller-%index%' # Deprecated & backward-compatible values (FIXME: Make parameters consistent) # Set uses_deprecated_params to True if any deprecated params are used. uses_deprecated_params: True deprecated_param_extraconfig: 'controllerExtraConfig' deprecated_param_flavor: 'OvercloudControlFlavor' deprecated_param_image: 'controllerImage' deprecated_nic_config_name: 'controller.yaml' update_serial: 1 ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AodhApi - OS::TripleO::Services::AodhEvaluator - OS::TripleO::Services::AodhListener - OS::TripleO::Services::AodhNotifier - OS::TripleO::Services::AuditD - OS::TripleO::Services::BarbicanApi - OS::TripleO::Services::BarbicanBackendSimpleCrypto - OS::TripleO::Services::BarbicanBackendDogtag - OS::TripleO::Services::BarbicanBackendKmip - OS::TripleO::Services::BarbicanBackendPkcs11Crypto - OS::TripleO::Services::BootParams - OS::TripleO::Services::CACerts - OS::TripleO::Services::CeilometerAgentCentral - OS::TripleO::Services::CeilometerAgentNotification - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CephGrafana - OS::TripleO::Services::CephMds - OS::TripleO::Services::CephMgr - OS::TripleO::Services::CephMon - OS::TripleO::Services::CephRbdMirror - OS::TripleO::Services::CephRgw - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::CinderApi - OS::TripleO::Services::CinderBackendDellPs - OS::TripleO::Services::CinderBackendDellSc - OS::TripleO::Services::CinderBackendDellEMCPowermax - OS::TripleO::Services::CinderBackendDellEMCPowerStore - OS::TripleO::Services::CinderBackendDellEMCSc - OS::TripleO::Services::CinderBackendDellEMCUnity - OS::TripleO::Services::CinderBackendDellEMCVMAXISCSI - OS::TripleO::Services::CinderBackendDellEMCVNX - OS::TripleO::Services::CinderBackendDellEMCVxFlexOS - OS::TripleO::Services::CinderBackendDellEMCXtremio - OS::TripleO::Services::CinderBackendDellEMCXTREMIOISCSI - OS::TripleO::Services::CinderBackendNetApp - OS::TripleO::Services::CinderBackendPure - OS::TripleO::Services::CinderBackendScaleIO - OS::TripleO::Services::CinderBackendVRTSHyperScale - OS::TripleO::Services::CinderBackendNVMeOF - OS::TripleO::Services::CinderBackup - OS::TripleO::Services::CinderHPELeftHandISCSI - OS::TripleO::Services::CinderScheduler - OS::TripleO::Services::CinderVolume - OS::TripleO::Services::Clustercheck - OS::TripleO::Services::Collectd - OS::TripleO::Services::ContainerImagePrepare - OS::TripleO::Services::DesignateApi - OS::TripleO::Services::DesignateCentral - OS::TripleO::Services::DesignateProducer - OS::TripleO::Services::DesignateWorker - OS::TripleO::Services::DesignateMDNS - OS::TripleO::Services::DesignateSink - OS::TripleO::Services::Docker - OS::TripleO::Services::Ec2Api - OS::TripleO::Services::Etcd - OS::TripleO::Services::ExternalSwiftProxy - OS::TripleO::Services::GlanceApi - OS::TripleO::Services::GnocchiApi - OS::TripleO::Services::GnocchiMetricd - OS::TripleO::Services::GnocchiStatsd - OS::TripleO::Services::HAproxy - OS::TripleO::Services::HeatApi - OS::TripleO::Services::HeatApiCloudwatch - OS::TripleO::Services::HeatApiCfn - 
OS::TripleO::Services::HeatEngine - OS::TripleO::Services::Horizon - OS::TripleO::Services::IpaClient - OS::TripleO::Services::Ipsec - OS::TripleO::Services::IronicApi - OS::TripleO::Services::IronicConductor - OS::TripleO::Services::IronicInspector - OS::TripleO::Services::IronicPxe - OS::TripleO::Services::IronicNeutronAgent - OS::TripleO::Services::Iscsid - OS::TripleO::Services::Keepalived - OS::TripleO::Services::Kernel - OS::TripleO::Services::Keystone - OS::TripleO::Services::LoginDefs - OS::TripleO::Services::ManilaApi - OS::TripleO::Services::ManilaBackendCephFs - OS::TripleO::Services::ManilaBackendIsilon - OS::TripleO::Services::ManilaBackendNetapp - OS::TripleO::Services::ManilaBackendUnity - OS::TripleO::Services::ManilaBackendVNX - OS::TripleO::Services::ManilaBackendVMAX - OS::TripleO::Services::ManilaScheduler - OS::TripleO::Services::ManilaShare - OS::TripleO::Services::Memcached - OS::TripleO::Services::MetricsQdr - OS::TripleO::Services::MistralApi - OS::TripleO::Services::MistralEngine - OS::TripleO::Services::MistralExecutor - OS::TripleO::Services::MistralEventEngine - OS::TripleO::Services::Multipathd - OS::TripleO::Services::MySQL - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::NeutronApi - OS::TripleO::Services::NeutronBgpVpnApi - OS::TripleO::Services::NeutronSfcApi - OS::TripleO::Services::NeutronCorePlugin - OS::TripleO::Services::NeutronDhcpAgent - OS::TripleO::Services::NeutronL2gwAgent - OS::TripleO::Services::NeutronL2gwApi - OS::TripleO::Services::NeutronL3Agent - OS::TripleO::Services::NeutronLinuxbridgeAgent - OS::TripleO::Services::NeutronMetadataAgent - OS::TripleO::Services::NeutronML2FujitsuCfab - OS::TripleO::Services::NeutronML2FujitsuFossw - OS::TripleO::Services::NeutronOvsAgent - OS::TripleO::Services::NeutronVppAgent - OS::TripleO::Services::NeutronAgentsIBConfig - OS::TripleO::Services::NovaApi - OS::TripleO::Services::NovaConductor - OS::TripleO::Services::NovaIronic - OS::TripleO::Services::NovaMetadata - OS::TripleO::Services::NovaScheduler - OS::TripleO::Services::NovaVncProxy - OS::TripleO::Services::ContainersLogrotateCrond - OS::TripleO::Services::OctaviaApi - OS::TripleO::Services::OctaviaDeploymentConfig - OS::TripleO::Services::OctaviaHealthManager - OS::TripleO::Services::OctaviaHousekeeping - OS::TripleO::Services::OctaviaWorker - OS::TripleO::Services::OpenStackClients - OS::TripleO::Services::OVNDBs - OS::TripleO::Services::OVNController - OS::TripleO::Services::Pacemaker - OS::TripleO::Services::PankoApi - OS::TripleO::Services::PlacementApi - OS::TripleO::Services::OsloMessagingRpc - OS::TripleO::Services::OsloMessagingNotify - OS::TripleO::Services::Podman - OS::TripleO::Services::Rear - OS::TripleO::Services::Redis - OS::TripleO::Services::Rhsm - OS::TripleO::Services::Rsyslog - OS::TripleO::Services::RsyslogSidecar - OS::TripleO::Services::SaharaApi - OS::TripleO::Services::SaharaEngine - OS::TripleO::Services::Securetty - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - OS::TripleO::Services::SwiftProxy - OS::TripleO::Services::SwiftDispersion - OS::TripleO::Services::SwiftRingBuilder - OS::TripleO::Services::SwiftStorage - OS::TripleO::Services::Timesync - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::Tuned - OS::TripleO::Services::Vpp - OS::TripleO::Services::Zaqar ############################################################################### Role: ComputeHCIOvsDpdkSriov # 
############################################################################### - name: ComputeHCIOvsDpdkSriov description: | ComputeOvsDpdkSriov Node role hosting Ceph OSD too networks: InternalApi: subnet: internal_api_subnet Tenant: subnet: tenant_subnet Storage: subnet: storage_subnet StorageMgmt: subnet: storage_mgmt_subnet # CephOSD present so serial has to be 1 update_serial: 1 RoleParametersDefault: TunedProfileName: \"cpu-partitioning\" VhostuserSocketGroup: \"hugetlbfs\" NovaLibvirtRxQueueSize: 1024 NovaLibvirtTxQueueSize: 1024 ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AuditD - OS::TripleO::Services::BootParams - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CephOSD - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::Collectd - OS::TripleO::Services::ComputeCeilometerAgent - OS::TripleO::Services::ComputeNeutronCorePlugin - OS::TripleO::Services::ComputeNeutronL3Agent - OS::TripleO::Services::ComputeNeutronMetadataAgent - OS::TripleO::Services::ComputeNeutronOvsDpdk - OS::TripleO::Services::Docker - OS::TripleO::Services::IpaClient - OS::TripleO::Services::Ipsec - OS::TripleO::Services::Iscsid - OS::TripleO::Services::Kernel - OS::TripleO::Services::LoginDefs - OS::TripleO::Services::MetricsQdr - OS::TripleO::Services::Multipathd - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::NeutronBgpVpnBagpipe - OS::TripleO::Services::NeutronSriovAgent - OS::TripleO::Services::NeutronSriovHostConfig - OS::TripleO::Services::NovaAZConfig - OS::TripleO::Services::NovaCompute - OS::TripleO::Services::NovaLibvirt - OS::TripleO::Services::NovaLibvirtGuests - OS::TripleO::Services::NovaMigrationTarget - OS::TripleO::Services::OvsDpdkNetcontrold - OS::TripleO::Services::ContainersLogrotateCrond - OS::TripleO::Services::Podman - OS::TripleO::Services::Rear - OS::TripleO::Services::Rhsm - OS::TripleO::Services::Rsyslog - OS::TripleO::Services::RsyslogSidecar - OS::TripleO::Services::Securetty - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - OS::TripleO::Services::Timesync - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::OVNController - OS::TripleO::Services::OVNMetadataAgent - OS::TripleO::Services::Ptp", "--- parameter_defaults: # The tunnel type for the tenant network (geneve or vlan). Set to '' to disable tunneling. NeutronTunnelTypes: \"geneve\" # The tenant network type for Neutron (vlan or geneve). NeutronNetworkType: [\"geneve\", \"vlan\"] NeutronExternalNetworkBridge: \"'br-access'\" # NTP server configuration. # NtpServer: [\"clock.redhat.com\"] # MTU global configuration NeutronGlobalPhysnetMtu: 9000 # Configure the classname of the firewall driver to use for implementing security groups. 
NeutronOVSFirewallDriver: openvswitch SshServerOptionsOverrides: UseDns: \"no\" # Enable log level DEBUG for supported components Debug: true # From Rocky live migration with NumaTopologyFilter disabled by default # https://bugs.launchpad.net/nova/+bug/1289064 NovaEnableNUMALiveMigration: true NeutronPluginExtensions: \"port_security,qos,segments,trunk,placement\" # RFE https://bugzilla.redhat.com/show_bug.cgi?id=1669584 NeutronServicePlugins: \"ovn-router,trunk,qos,placement\" NeutronSriovAgentExtensions: \"qos\" ############################ # Scheduler configuration # ############################ NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - NUMATopologyFilter - AggregateInstanceExtraSpecsFilter ComputeOvsDpdkSriovNetworkConfigTemplate: \"/home/stack/ospd-17.0-geneve-ovn-dpdk-sriov-ctlplane-dataplane-bonding-hybrid/nic-configs/computeovsdpdksriov.yaml\" ControllerSriovNetworkConfigTemplate: \"/home/stack/ospd-17.0-geneve-ovn-dpdk-sriov-ctlplane-dataplane-bonding-hybrid/nic-configs/controller.yaml\"", "--- {% set mtu_list = [ctlplane_mtu] %} {% for network in role_networks if network not in 'Tenant,External' %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: interface name: nic1 use_dhcp: false addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }} routes: - ip_netmask: 169.254.169.254/32 next_hop: {{ ctlplane_ip }} - type: linux_bond name: bond_api mtu: {{ min_viable_mtu }} bonding_options: mode=active-backup use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} members: - type: interface name: nic2 primary: true - type: interface name: nic3 {% for network in role_networks if network not in 'Tenant,External' %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} device: bond_api vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} {% endfor %} - type: ovs_bridge name: br-tenant use_dhcp: false mtu: 9000 members: - type: interface name: nic4 mtu: 9000 - type: vlan vlan_id: {{ lookup('vars', networks_lower['Tenant'] ~ '_vlan_id') }} mtu: 9000 addresses: - ip_netmask: {{ lookup('vars', networks_lower['Tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['Tenant'] ~ '_cidr') }} - type: ovs_bridge name: br-ex use_dhcp: false mtu: 9000 members: - type: interface name: nic5 mtu: 9000 - type: vlan vlan_id: {{ lookup('vars', networks_lower['External'] ~ '_vlan_id') }} mtu: 9000 addresses: - ip_netmask: {{ lookup('vars', networks_lower['External'] ~ '_ip') }}/{{ lookup('vars', networks_lower['External'] ~ '_cidr') }} routes: - default: true next_hop: {{ lookup('vars', networks_lower['External'] ~ '_gateway_ip') }}", "--- {% set mtu_list = [ctlplane_mtu] %} {% for network in role_networks if network not in 'Tenant,External' %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: interface name: nic1 use_dhcp: false addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }} routes: - ip_netmask: 169.254.169.254/32 next_hop: {{ ctlplane_ip }} - default: true next_hop: {{ ctlplane_gateway_ip }} - type: linux_bond name: bond_api mtu: {{ 
min_viable_mtu }} bonding_options: mode=active-backup use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} members: - type: interface name: nic2 primary: true - type: interface name: nic3 {% for network in role_networks if network not in 'Tenant,External' %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} device: bond_api vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} {% endfor %} - type: ovs_user_bridge name: br-link0 use_dhcp: false ovs_extra: \"set port br-link0 tag={{ lookup('vars', networks_lower['Tenant'] ~ '_vlan_id') }}\" addresses: - ip_netmask: {{ lookup('vars', networks_lower['Tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['Tenant'] ~ '_cidr')}} members: - type: ovs_dpdk_bond name: dpdkbond0 rx_queue: 1 ovs_extra: \"set port dpdkbond0 bond_mode=balance-slb\" members: - type: ovs_dpdk_port name: dpdk0 members: - type: interface name: nic4 - type: ovs_dpdk_port name: dpdk1 members: - type: interface name: nic5", "#!/bin/bash tht_path='/home/stack/ospd-17.0-geneve-ovn-dpdk-sriov-ctlplane-dataplane-bonding-hybrid' [[ ! -d \"USDtht_path/roles\" ]] && mkdir USDtht_path/roles openstack overcloud roles generate -o USDtht_path/roles/roles_data.yaml ControllerSriov ComputeOvsDpdkSriov openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates --ntp-server clock.redhat.com,time1.google.com,time2.google.com,time3.google.com,time4.google.com --stack overcloud --roles-file USDtht_path/roles/roles_data.yaml -n USDtht_path/network/network_data_v2.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/overcloud-networks-deployed.yaml -e /home/stack/templates/overcloud-vip-deployed.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dpdk.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-sriov.yaml -e /home/stack/containers-prepare-parameter.yaml -e USDtht_path/network-environment-overrides.yaml -e USDtht_path/api-policies.yaml -e USDtht_path/bridge-mappings.yaml -e USDtht_path/neutron-vlan-ranges.yaml -e USDtht_path/dpdk-config.yaml -e USDtht_path/sriov-config.yaml --log-file overcloud_deployment.log" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/configuring_network_functions_virtualization/index
Chapter 9. Important links
Chapter 9. Important links Red Hat AMQ Broker 7.10 Release Notes Red Hat AMQ Broker 7.9 Release Notes Red Hat AMQ Broker 7.8 Release Notes Red Hat AMQ Broker 7.7 Release Notes Red Hat AMQ Broker 7.6 Release Notes Red Hat AMQ Broker 7.1 to 7.5 Release Notes (aggregated) Red Hat AMQ 7 Supported Configurations Red Hat AMQ 7 Component Details Revised on 2024-10-17 16:50:12 UTC
null
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/release_notes_for_red_hat_amq_broker_7.11/links
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_inside/1.3/html/red_hat_ansible_inside_installation_guide/providing-feedback
Chapter 5. Monitoring the cluster
Chapter 5. Monitoring the cluster The monitoring functions of the dashboard provide different web pages which update regularly to indicate various aspects of the storage cluster. You can monitor the overall state of the cluster using the landing page, or you can monitor specific functions of the cluster, like the state of block device images. Additional Resources For more information, see Accessing the landing page in the Dashboard guide . For more information, see Understanding the landing page in the Dashboard guide . For more information, see Monitoring specific functions in the Dashboard guide . 5.1. Accessing the landing page After you log in to the dashboard, the landing page loads. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Procedure Log in to the Dashboard. After you log in to the dashboard, the landing page loads. To return to the landing page after viewing other dashboard pages, click Dashboard towards the top left corner of the page. Additional Resources For more information, see Understanding the landing page in the Dashboard guide . For more information, see Monitoring specific functions in the Dashboard guide . 5.2. Understanding the landing page The landing page displays an overview of the entire Ceph cluster using individual panels. Each panel displays specific information about the state of the cluster. Categories The landing page organizes panels into the following three categories: Status Capacity Performance Status panels The status panels display the health of the cluster and host and daemon states. Cluster Status : Displays the current health status of the Ceph cluster. Hosts : Displays the total number of hosts in the Ceph storage cluster. Monitors : Displays the number of Ceph Monitors and the quorum status. OSDs : Displays the total number of OSDs in the Ceph Storage cluster and the number that are up and in . Managers : Displays the number and status of the Manager Daemons. Object Gateways : Displays the number of Object Gateways in the Ceph storage cluster. Metadata Servers : Displays the number and status of metadata servers for Ceph Filesystems. iSCSI Gateways : Displays the number of iSCSI Gateways in the Ceph storage cluster. Capacity panels The capacity panels display storage usage metrics. Raw Capacity : Displays the utilization and availability of the raw storage capacity of the cluster. Objects : Displays the total number of Objects in the pools and a graph dividing objects into states of Healthy , Misplaced , Degraded , or Unfound . PG Status : Displays the total number of Placement Groups and a graph dividing PGs into states of Clean , Working , Warning , or Unknown . To simplify the display, the Working and Warning states each encompass multiple PG states. The Working state includes PGs with any of these states: activating backfill_wait backfilling creating deep degraded forced_backfill forced_recovery peering peered recovering recovery_wait repair scrubbing snaptrim snaptrim_wait The Warning state includes PGs with any of these states: backfill_toofull backfill_unfound down incomplete inconsistent recovery_toofull recovery_unfound remapped snaptrim_error stale undersized Pools : Displays the number of storage pools in the Ceph cluster. PGs per OSD : Displays the number of Placement Groups per OSD. Performance panels The performance panels display information related to data transfer speeds. Client Read/Write : Displays total input/output operations per second, reads per second, and writes per second.
Client Throughput : Displays total client throughput, read throughput, and write throughput. Recovery Throughput : Displays the Client recovery rate. Scrubbing : Displays whether Ceph is scrubbing data to verify its integrity. Additional Resources For more information, see Accessing the landing page in the Dashboard guide . For more information, see Monitoring specific functions in the Dashboard guide .
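The panels described above summarize data that you can also retrieve from the command line, which is useful for scripted checks or when the dashboard is unreachable. The following is a minimal sketch, assuming you run it on a node that has the Ceph CLI and a client.admin keyring available; it is an illustration, not part of the dashboard procedure itself.
# Overall cluster health, as shown in the Cluster Status panel
ceph status
ceph health detail
# Raw capacity and per-pool usage, as shown in the Capacity panels
ceph df
# OSD and placement group summaries, as shown in the OSDs and PG Status panels
ceph osd stat
ceph pg stat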
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/dashboard_guide/monitoring-the-cluster_dash
Chapter 1. About the Assisted Installer
Chapter 1. About the Assisted Installer The Assisted Installer for Red Hat OpenShift Container Platform is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console . The Assisted Installer supports various deployment platforms with a focus on bare metal, Nutanix, vSphere, and Oracle Cloud Infrastructure. The Assisted Installer also supports various CPU architectures, including x86_64, s390x (IBM Z(R)), arm64, and ppc64le (IBM Power(R)). You can install OpenShift Container Platform on premises in a connected environment, with an optional HTTP/S proxy, for the following platforms: Highly available OpenShift Container Platform or single-node OpenShift cluster OpenShift Container Platform on bare metal or vSphere with full platform integration, or other virtualization platforms without integration Optionally, OpenShift Virtualization and Red Hat OpenShift Data Foundation 1.1. Features The Assisted Installer provides installation functionality as a service. This software-as-a-service (SaaS) approach has the following features: Web interface You can install your cluster by using the Hybrid Cloud Console instead of creating installation configuration files manually. No bootstrap node You do not need a bootstrap node because the bootstrapping process runs on a node within the cluster. Streamlined installation workflow You do not need in-depth knowledge of OpenShift Container Platform to deploy a cluster. The Assisted Installer provides reasonable default configurations. You do not need to run the OpenShift Container Platform installer locally. You have access to the latest Assisted Installer for the latest tested z-stream releases. Advanced networking options The Assisted Installer supports IPv4 and dual stack networking with OVN only, NMState-based static IP addressing, and an HTTP/S proxy. OVN is the default Container Network Interface (CNI) for OpenShift Container Platform 4.12 and later. SDN is supported up to OpenShift Container Platform 4.14. SDN supports IPv4 only. Preinstallation validation Before installing, the Assisted Installer checks the following configurations: Network connectivity Network bandwidth Connectivity to the registry Upstream DNS resolution of the domain name Time synchronization between cluster nodes Cluster node hardware Installation configuration parameters REST API You can automate the installation process by using the Assisted Installer REST API. 1.2. Customizing your installation by using Operators You can customize your deployment by selecting one or more Operators, either during the installation or afterward. Operators are used to package, deploy, and manage services and applications. This section presents the supported Assisted Installer Operators, together with their prerequisites and limitations. Important The additional requirements specified below apply to each Operator individually. If you select more than one Operator, or if the Assisted Installer automatically selects an Operator due to dependencies, the total required resources is the sum of the resource requirements for each Operator. For instructions on installing and modifying the Assisted Installer Operators, see the following sections: Installing Operators by using the web console . Installing Operators by using the API . Modifying Operators by using the API . 1.2.1. OpenShift Virtualization You can deploy OpenShift Virtualization to perform the following tasks: Create and manage Linux and Windows virtual machines (VMs). 
Run pod and VM workloads alongside each other in a cluster. Connect to VMs through a variety of consoles and CLI tools. Import and clone existing VMs. Manage network interface controllers and storage drives attached to VMs. Live migrate VMs between nodes. Prerequisites Requires enabled CPU virtualization support in the firmware on all nodes. Each worker node requires an additional 360 MiB of memory and 2 CPU cores. Each control plane node requires an additional 150 MiB of memory and 4 CPU cores. Requires Red Hat OpenShift Data Foundation (recommended for creating additional on-premise clusters), Logical Volume Manager Storage, or another persistent storage service. Important Deploying OpenShift Virtualization without Red Hat OpenShift Data Foundation results in the following scenarios: Multi-node cluster: No storage is configured. You must configure storage after the OpenShift Data Foundation configuration. Single-node OpenShift: Logical Volume Manager Storage (LVM Storage) is installed. You must review the prerequisites to ensure that your environment has sufficient additional resources for OpenShift Virtualization. Additional resources OpenShift Virtualization product overview . Getting started with OpenShift Virtualization . 1.2.2. Migration Toolkit for Virtualization When creating a new OpenShift cluster in the Assisted Installer, you can enable the Migration Toolkit for Virtualization (MTV) Operator. The Migration Toolkit for Virtualization Operator allows you to migrate virtual machines at scale to Red Hat OpenShift Virtualization from the following source providers: VMware vSphere Red Hat Virtualization (RHV) Red Hat OpenShift Virtualization OpenStack You can migrate to a local or a remote OpenShift Virtualization cluster. When you select the Migration Toolkit for Virtualization Operator, the Assisted Installer automatically activates the OpenShift Virtualization Operator. For a Single-node OpenShift installation, the Assisted Installer also activates the LVM Storage Operator. Prerequisites Requires OpenShift Container Platform version 4.14 or later. Requires an x86_64 CPU architecture. Requires an additional 1024 MiB of memory and 1 CPU core for each control plane node and worker node. Requires the additional resources specified for the OpenShift Virtualization Operator, installed together with OpenShift Virtualization. For details, see the prerequisites in the 'OpenShift Virtualization Operator' section. Post-installation steps After completing the installation, the Migration menu appears in the navigation pane of the Red Hat OpenShift web console. The Migration menu provides access to the Migration Toolkit for Virtualization. Use the toolkit to create and execute a migration plan with the relevant source and destination providers. For details, see either of the following chapters in the Migration Toolkit for Virtualization Guide: Migrating virtual machines by using the OpenShift Container Platform web console . Migrating virtual machines from the command line . 1.2.3. Multicluster engine for Kubernetes You can deploy the multicluster engine for Kubernetes to perform the following tasks in a large, multi-cluster environment: Provision and manage additional Kubernetes clusters from your initial cluster. Use hosted control planes to reduce management costs and optimize cluster deployment by decoupling the control and data planes. Use GitOps Zero Touch Provisioning to manage remote edge sites at scale. 
You can deploy the multicluster engine with OpenShift Data Foundation on all OpenShift Container Platform clusters. Prerequisites Each worker node requires an additional 16384 MiB of memory and 4 CPU cores. Each control plane node requires an additional 16384 MiB of memory and 4 CPU cores. Requires OpenShift Data Foundation (recommended for creating additional on-premise clusters), LVM Storage, or another persistent storage service. Important Deploying multicluster engine without OpenShift Data Foundation results in the following scenarios: Multi-node cluster: No storage is configured. You must configure storage after the installation process. Single-node OpenShift: LVM Storage is installed. You must review the prerequisites to ensure that your environment has sufficient additional resources for the multicluster engine. Prerequisites About the multicluster engine Operator . Red Hat OpenShift Cluster Manager documentation 1.2.4. Logical Volume Manager Storage You can use LVM Storage to dynamically provision block storage on a limited resources cluster. Prerequisites Requires at least 1 non-boot drive per host. Requires 100 MiB of additional RAM. Requires 1 additional CPU core for each non-boot drive. Additional resources Persistent storage using Logical Volume Manager Storage . Logical Volume Manager Storage documentation 1.2.5. Red Hat OpenShift Data Foundation You can use OpenShift Data Foundation for file, block, and object storage. This storage option is recommended for all OpenShift Container Platform clusters. OpenShift Data Foundation requires a separate subscription. Prerequisites There are at least 3 compute (workers) nodes, each with 19 additional GiB of memory and 8 additional CPU cores. There are at least 2 drives per compute node. For each drive, there is an additional 5 GB of RAM. You comply to the additional requirements specified here: Planning your deployment . Additional resources OpenShift Data Foundation datasheet . OpenShift Data Foundation documentation . 1.2.6. OpenShift Artificial Intelligence (AI) Red Hat(R) OpenShift(R) Artificial Intelligence (AI) is a flexible, scalable artificial intelligence (AI) and machine learning (ML) platform that enables enterprises to create and deliver AI-enabled applications at scale across hybrid cloud environments. Red Hat(R) OpenShift(R) AI enables the following functionality: Data acquisition and preparation. Model training and fine-tuning. Model serving and model monitoring. Hardware acceleration. The OpenShift AI Operator enables you to install Red Hat(R) OpenShift(R) AI on your OpenShift Container Platform cluster. From OpenShift Container Platform version 4.17 and later, you can use the Assisted Installer to deploy the OpenShift AI Operator to your cluster during the installation. For the developer preview, installing the OpenShift AI Operator automatically installs the following Operators: Red Hat OpenShift Data Foundation (in this section) Node Feature Discovery Operator Nvidia GPU Operator OpenShift Container Platform Pipelines Operator OpenShift Container Platform Service Mesh Operator OpenShift Container Platform Serverless Operator Authorino (Kubernetes) Important The integration of the OpenShift AI Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. 
Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. Prerequisites You are installing OpenShift Container Platform version 4.17 or later. For the OpenShift AI Operator, you meet the following minimum requirements: There are at least 2 compute (worker) nodes, each with 32 additional GiB of memory and 8 additional CPU cores. There is at least 1 supported GPU. Currently only NVIDIA GPUs are supported. Nodes that have NVIDIA GPUs installed have Secure Boot disabled. For the dependent OpenShift Data Foundation Operator, you meet the minimum additional requirements specified for that Operator in this section. You meet the additional requirements specified here: Requirements for OpenShift AI . Additional resources Red Hat(R) OpenShift(R) AI 1.2.7. Additional resources Working with Operators in OpenShift Container Platform . Introduction to hosted control planes . Configure and deploy OpenShift Container Platform clusters at the network edge . 1.3. OpenShift Container Platform host architecture: control plane and compute nodes The OpenShift Container Platform architecture allows you to select a standard Kubernetes role for each of the discovered hosts. These roles define the function of the host within the cluster. The roles can be one of the standard Kubernetes types: control plane (master) or compute (worker) . 1.3.1. About assigning roles to hosts During the installation process, you can select a role for a host or configure the Assisted Installer to assign it for you. The options are as follows: Control plane (master) node - The control plane nodes run the services that are required to control the cluster, including the API server. The control plane schedules workloads, maintains cluster state, and ensures stability. Control plane nodes are also known as master nodes. Compute (worker) node - The compute nodes are responsible for executing workloads for cluster users. Compute nodes advertise their capacity, so that the control plane scheduler can identify suitable compute nodes for running pods and containers. Compute nodes are also known as worker nodes. Auto-assign - This option allows the Assisted Installer to automatically select a role for each of the hosts, based on detected hardware and network latency. You can change the role at any time before installation starts. To assign a role to a host, see either of the following sections: Configuring hosts (Web console), step 4 Assigning roles to hosts (Web console and API) 1.3.2. About specifying the number of control plane nodes for your cluster Using a higher number of control plane (master) nodes boosts fault tolerance and availability, minimizing downtime during failures. All versions of OpenShift Container Platform support one or three control plane nodes, where one control plane node is a Single-node OpenShift cluster. From OpenShift Container Platform version 4.18 and higher, the Assisted Installer also supports four or five control plane nodes on a bare metal or user-managed networking platform with an x86_64 architecture. An implementation can support any number of compute nodes.
To define the required number of control plane nodes, see either of the following sections: Setting the cluster details (web console), step 12 Registering a new cluster (API), step 2 1.3.3. About scheduling workloads on control plane nodes Scheduling workloads to run on control plane nodes improves efficiency and maximizes resource utilization. You can enable this option during installation setup or as a postinstallation step. Use the following guidelines to determine when to use this feature: Single-node OpenShift or small clusters (up to four nodes): The system schedules workloads on control plane nodes by default. This setting cannot be changed. Medium clusters (five to ten nodes): Scheduling workloads to run on control plane nodes in addition to worker nodes is the recommended configuration. Large clusters (more than ten nodes): Configuring control plane nodes as schedulable is not recommended. For instructions on configuring control plane nodes as schedulable during the installation setup, see the following sections: Adding hosts to the cluster (web console), step 2 . Scheduling workloads to run on control plane nodes (API) . For instructions on configuring schedulable control plane nodes following an installation, see Configuring control plane nodes as schedulable in the OpenShift Container Platform documentation. Important When you configure control plane nodes to be schedulable for workloads, an additional subscription is required for each control plane node that functions as a compute (worker) node. 1.3.4. Role-related configuration validations The Assisted Installer monitors the number of hosts as one of the conditions for proceeding through the cluster installation stages. The logic for determining when a cluster has a sufficient number of installed hosts to proceed is as follows: The number of control plane (master) nodes to install must match the number of control plane nodes that the user requests. For compute (worker) nodes, the requirement depends on the number of compute nodes that the user requests: If the user requests fewer than two compute nodes, the Assisted Installer accepts any number of installed compute nodes, because the control plane nodes remain schedulable for workloads. If the user requests two or more compute nodes, the Assisted Installer installs at least two compute nodes, ensuring that the control plane nodes are not schedulable for workloads. For details, see "About scheduling workloads on control plane nodes" in this section. This logic ensures that the cluster reaches a stable and expected state before continuing with the installation process. 1.3.5. Additional resources For detailed information on control plane and compute nodes, see OpenShift Container Platform architecture . 1.4. API support policy Assisted Installer APIs are supported for a minimum of three months from the announcement of deprecation.
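The REST API mentioned in the Features section can be exercised directly from a shell. The following is a hedged sketch only: it assumes the hosted Assisted Installer service at api.openshift.com, an offline token exported in the OFFLINE_TOKEN environment variable (obtained from the Hybrid Cloud Console), and the jq utility; confirm the endpoint paths and field names against the current API reference before relying on them.
# Exchange the offline token for a short-lived access token (assumed Red Hat SSO flow)
ACCESS_TOKEN=$(curl -s https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token \
  -d grant_type=refresh_token \
  -d client_id=cloud-services \
  -d refresh_token="${OFFLINE_TOKEN}" | jq -r .access_token)
# List the clusters registered with the Assisted Installer service
curl -s -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  https://api.openshift.com/api/assisted-install/v2/clusters | jq -r '.[].name'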
null
https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2025/html/installing_openshift_container_platform_with_the_assisted_installer/about-ai
6.3.3. Use a Boot Option to Specify a Driver Update Disk
6.3.3. Use a Boot Option to Specify a Driver Update Disk Important This method only works to introduce completely new drivers, not to update existing drivers. Type linux dd at the boot prompt at the start of the installation process and press Enter . The installer prompts you to confirm that you have a driver disk: Figure 6.6. The driver disk prompt Insert the driver update disk that you created on CD, DVD, or USB flash drive and select Yes . The installer examines the storage devices that it can detect. If there is only one possible location that could hold a driver disk (for example, the installer detects the presence of a DVD drive, but no other storage devices) it will automatically load any driver updates that it finds at this location. If the installer finds more than one location that could hold a driver update, it prompts you to specify the location of the update. See Section 6.4, "Specifying the Location of a Driver Update Image File or a Driver Update Disk" .
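Before you can select a driver disk at the prompt described above, the driver update image must already be written to removable media. The following is a minimal sketch for preparing a USB flash drive; the image name dd.img and the device /dev/sdb are placeholders only, and writing to the wrong device destroys its data.
# Confirm which device node corresponds to the USB flash drive before writing
lsblk
# Write the driver update image to the flash drive (overwrites the entire device)
dd if=dd.img of=/dev/sdb bs=4M
sync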
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-Driver_updates-Use_a_boot_option_to_specify_a_driver_update_disk-x86
Chapter 6. Configuring kernel parameters permanently by using the kernel_settings RHEL System Role
Chapter 6. Configuring kernel parameters permanently by using the kernel_settings RHEL System Role As an experienced user with good knowledge of Red Hat Ansible, you can use the kernel_settings role to configure kernel parameters on multiple clients at once. This solution: Provides a friendly interface with efficient input setting. Keeps all intended kernel parameters in one place. After you run the kernel_settings role from the control machine, the kernel parameters are applied to the managed systems immediately and persist across reboots. Important Note that RHEL System Roles delivered over RHEL channels are available to RHEL customers as an RPM package in the default AppStream repository. RHEL System Roles are also available as a collection to customers with Ansible subscriptions over Ansible Automation Hub. 6.1. Introduction to the kernel_settings role RHEL System Roles is a set of roles that provide a consistent configuration interface to remotely manage multiple systems. RHEL System Roles were introduced for automated configurations of the kernel using the kernel_settings System Role. The rhel-system-roles package contains this system role, and also the reference documentation. To apply the kernel parameters on one or more systems in an automated fashion, use the kernel_settings role with one or more of its role variables of your choice in a playbook. A playbook is a human-readable list of one or more plays, written in the YAML format. You can use an inventory file to define a set of systems that you want Ansible to configure according to the playbook. With the kernel_settings role you can configure: The kernel parameters using the kernel_settings_sysctl role variable Various kernel subsystems, hardware devices, and device drivers using the kernel_settings_sysfs role variable The CPU affinity for the systemd service manager and processes it forks using the kernel_settings_systemd_cpu_affinity role variable The kernel memory subsystem transparent hugepages using the kernel_settings_transparent_hugepages and kernel_settings_transparent_hugepages_defrag role variables Additional resources README.md and README.html files in the /usr/share/doc/rhel-system-roles/kernel_settings/ directory Working with playbooks How to build your inventory 6.2. Applying selected kernel parameters using the kernel_settings role Follow these steps to prepare and apply an Ansible playbook to remotely configure kernel parameters with persistent effect on multiple managed operating systems. Prerequisites You have root permissions. Entitled by your RHEL subscription, you installed the ansible-core and rhel-system-roles packages on the control machine. An inventory of managed hosts is present on the control machine and Ansible is able to connect to them. Procedure Optionally, review the inventory file for illustration purposes: The file defines the [testingservers] group and other groups. It allows you to run Ansible more effectively against a specific set of systems. Create a configuration file to set defaults and privilege escalation for Ansible operations. Create a new configuration file and open it in a text editor, for example: Insert the following content into the file: The [defaults] section specifies a path to the inventory file of managed hosts. The [privilege_escalation] section defines that user privileges are shifted to root on the specified managed hosts. This is necessary for successful configuration of kernel parameters. When the Ansible playbook is run, you are prompted for the user password.
The user automatically switches to root by means of sudo after connecting to a managed host. Create an Ansible playbook that uses the kernel_settings role. Create a new YAML file and open it in a text editor, for example: This file represents a playbook and usually contains an ordered list of plays that are run against specific managed hosts selected from your inventory file. Insert the following content into the file: The name key is optional. It associates an arbitrary string with the play as a label and identifies what the play is for. The hosts key in the play specifies the hosts against which the play is run. The value or values for this key can be provided as individual names of managed hosts or as groups of hosts as defined in the inventory file. The vars section lists the selected kernel parameters and the values to set them to. The roles key specifies what system role is going to configure the parameters and values mentioned in the vars section. Note You can modify the kernel parameters and their values in the playbook to fit your needs. Optionally, verify that the syntax in your play is correct. This example shows the successful verification of a playbook. Execute your playbook. # ansible-playbook kernel-roles.yml ... BECOME password: PLAY [Configure kernel settings] ********************************************************************************** PLAY RECAP ******************************************************************************************************** [email protected] : ok=10 changed=4 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0 [email protected] : ok=10 changed=4 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0 Before Ansible runs your playbook, you are prompted for your password so that the user on the managed hosts can be switched to root , which is necessary for configuring kernel parameters. The recap section shows that the play finished successfully ( failed=0 ) for all managed hosts, and that 4 kernel parameters have been applied ( changed=4 ). Restart your managed hosts and check the affected kernel parameters to verify that the changes have been applied and persist across reboots; a scripted verification sketch follows the resource list below. Additional resources Preparing a control node and managed nodes to use RHEL System Roles README.html and README.md files in the /usr/share/doc/rhel-system-roles/kernel_settings/ directory Build Your Inventory Configuring Ansible Working With Playbooks Using Variables Roles
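The final verification step can be scripted with Ansible ad hoc commands instead of logging in to each host. The following sketch relies on the assumptions of the example above (the testingservers group and the fs.file-max, kernel.threads-max, and /sys/class/net/lo/mtu settings); the -i, -b, and -K options are redundant if you run the commands from the project directory that contains the ansible.cfg file created earlier.
# Reboot the managed hosts in the [testingservers] group
ansible testingservers -i inventory -b -K -m reboot
# Confirm that the sysctl values persisted across the reboot
ansible testingservers -i inventory -b -K -m command -a 'sysctl fs.file-max kernel.threads-max'
# Confirm the sysfs setting applied by the role
ansible testingservers -i inventory -b -K -m command -a 'cat /sys/class/net/lo/mtu'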
[ "cat /home/jdoe/< ansible_project_name >/inventory [testingservers] [email protected] [email protected] [db-servers] db1.example.com db2.example.com [webservers] web1.example.com web2.example.com 192.0.2.42", "vi /home/jdoe/< ansible_project_name >/ansible.cfg", "[defaults] inventory = ./inventory [privilege_escalation] become = true become_method = sudo become_user = root become_ask_pass = true", "vi /home/jdoe/< ansible_project_name >/kernel-roles.yml", "--- - hosts: testingservers name: \"Configure kernel settings\" roles: - rhel-system-roles.kernel_settings vars: kernel_settings_sysctl: - name: fs.file-max value: 400000 - name: kernel.threads-max value: 65536 kernel_settings_sysfs: - name: /sys/class/net/lo/mtu value: 65000 kernel_settings_transparent_hugepages: madvise", "ansible-playbook --syntax-check kernel-roles.yml playbook: kernel-roles.yml", "ansible-playbook kernel-roles.yml BECOME password: PLAY [Configure kernel settings] ********************************************************************************** PLAY RECAP ******************************************************************************************************** [email protected] : ok=10 changed=4 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0 [email protected] : ok=10 changed=4 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/configuring-kernel-parameters-permanently-by-using-the-kernel-settings-rhel-system-role_automating-system-administration-by-using-rhel-system-roles
Chapter 8. Postinstallation network configuration
Chapter 8. Postinstallation network configuration After installing OpenShift Container Platform, you can further expand and customize your network to your requirements. 8.1. Using the Cluster Network Operator You can use the Cluster Network Operator (CNO) to deploy and manage cluster network components on an OpenShift Container Platform cluster, including the Container Network Interface (CNI) network plugin selected for the cluster during installation. For more information, see Cluster Network Operator in OpenShift Container Platform . 8.2. Network configuration tasks Configuring the cluster-wide proxy Configuring ingress cluster traffic overview Configuring the node port service range Configuring IPsec encryption Create a network policy or configure multitenant isolation with network policies Optimizing routing Configuration for an additional network attachment 8.2.1. Creating default network policies for a new project As a cluster administrator, you can modify the new project template to automatically include NetworkPolicy objects when you create a new project. 8.2.1.1. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestTemplate: name: <template_name> # ... After you save your changes, create a new project to verify that your changes were successfully applied. 8.2.1.2. Adding network policies to the new project template As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project. Prerequisites Your cluster uses a default CNI network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You must log in to the cluster with a user with cluster-admin privileges. You must have created a custom default project template for new projects. 
Procedure Edit the default template for a new project by running the following command: USD oc edit template <project_template> -n openshift-config Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request . In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects. In the following example, the objects parameter collection includes several NetworkPolicy objects. objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress ... Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands: Create a new project: USD oc new-project <project> 1 1 Replace <project> with the name for the project you are creating. Confirm that the network policy objects in the new project template exist in the new project: USD oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s
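A policy that explicitly denies all ingress traffic not matched by an allow rule is a common companion to the allow policies shown above. The following is a hedged sketch that applies such a policy directly to an existing project for testing rather than through the template; it is not part of the default template content, and <project> is a placeholder for your project name.
# Apply a default-deny ingress policy to an existing project (illustrative only)
cat <<'EOF' | oc apply -n <project> -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-by-default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF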
[ "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc edit template <project_template> -n openshift-config", "objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress", "oc new-project <project> 1", "oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/postinstallation_configuration/post-install-network-configuration