Chapter 7. Managing attribute encryption

Directory Server offers a number of mechanisms to secure access to sensitive data in the directory. However, by default, the server stores data unencrypted in the database. For highly sensitive information, the possibility that an attacker could gain access to the database files is a significant risk. The attribute encryption feature enables administrators to store specific attributes that contain sensitive data, such as government identification numbers, encrypted in the database. When enabled for a suffix, every value of such an attribute, including its index data, is encrypted for every entry stored in the database.

Note that attribute encryption is enabled per suffix. To enable this feature for the whole server, you must enable attribute encryption for each suffix on the server. Attribute encryption is fully compatible with eq and pres indexing.

Important: Any attribute you use within the entry distinguished name (DN) cannot be efficiently encrypted. For example, if you have configured encryption for the uid attribute, the value is encrypted in the entry, but not in the DN:

dn: uid=demo_user,ou=People,dc=example,dc=com
...
uid::Sf04P9nJWGU1qiW9JJCGRg==

7.1. Keys Directory Server uses for attribute encryption

To use attribute encryption, you must configure encrypted connections using TLS. Directory Server uses the server's TLS encryption key and the same PIN input methods for attribute encryption. The server uses randomly generated symmetric cipher keys to encrypt and decrypt attribute data, and wraps these keys using the public key from the server's TLS certificate. As a consequence, the effective strength of the attribute encryption cannot be higher than the strength of the server's TLS key.

Warning: Without access to the server's private key, it is not possible to recover the symmetric keys from the wrapped copies. Therefore, back up the server's certificate database regularly. If you lose the key, you will no longer be able to decrypt and encrypt data stored in the database.

7.2. Enabling attribute encryption using the command line

This procedure demonstrates how to enable attribute encryption for the telephoneNumber attribute in the userRoot database using the command line. After you perform the procedure, the server stores existing and new values of this attribute AES-encrypted.

Prerequisites
You have enabled TLS encryption in Directory Server.

Procedure
1. Export the userRoot database:
# dsconf -D "cn=Directory Manager" ldap://server.example.com backend export -E userRoot
The server stores the export in an LDIF file in the /var/lib/dirsrv/slapd-instance_name/ldif/ directory. The -E option decrypts attributes that are already encrypted during the export.
2. Enable AES encryption for the telephoneNumber attribute:
# dsconf -D "cn=Directory Manager" ldap://server.example.com backend attr-encrypt --add-attr telephoneNumber dc=example,dc=com
3. Stop the instance:
# dsctl instance_name stop
4. Import the LDIF file:
# dsctl instance_name ldif2db --encrypted userRoot /var/lib/dirsrv/slapd-instance_name/ldif/None-userroot-2022_01_24_10_28_27.ldif
The --encrypted parameter enables the script to encrypt attributes configured for encryption during the import.
5. Start the instance:
# dsctl instance_name start

Additional resources
Enabling TLS-encrypted connections to Directory Server
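Because encryption happens in the database layer, an authorized LDAP client still receives the decrypted value; only the stored entry and its index data are encrypted. As an optional check, assuming the example host and entry used in this chapter, a search such as the following returns telephoneNumber in clear text while the value remains encrypted on disk:

# ldapsearch -H ldap://server.example.com -D "cn=Directory Manager" -W -b "dc=example,dc=com" "(uid=demo_user)" telephoneNumber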
7.3. Enabling attribute encryption using the web console

This procedure demonstrates how to enable attribute encryption for the telephoneNumber attribute in the userRoot database using the web console. After you perform the procedure, the server stores existing and new values of this attribute AES-encrypted.

Note that the export and import features in the web console do not support encrypted attributes. Therefore, you must perform these steps on the command line.

Prerequisites
You have enabled TLS encryption in Directory Server.
You are logged in to the instance in the web console.

Procedure
1. Export the userRoot database:
# dsconf -D "cn=Directory Manager" ldap://server.example.com backend export -E userRoot
The server stores the export in an LDIF file in the /var/lib/dirsrv/slapd-instance_name/ldif/ directory. The -E option decrypts attributes that are already encrypted during the export.
2. In the web console, navigate to Database → Suffixes → suffix_entry → Encrypted Attributes.
3. Enter the attribute to encrypt, and click Add Attribute.
4. In the Actions menu, select Stop Instance.
5. On the command line, import the LDIF file:
# dsctl instance_name ldif2db --encrypted userRoot /var/lib/dirsrv/slapd-instance_name/ldif/None-userroot-2022_01_24_10_28_27.ldif
The --encrypted parameter enables the script to encrypt attributes configured for encryption during the import.
6. In the web console, open the Actions menu, and select Start Instance.

Additional resources
Enabling TLS-encrypted connections to Directory Server

7.4. General considerations after enabling attribute encryption

Consider the following points after you have enabled encryption for data that is already in the database:

Unencrypted data can persist in the server's database page pool backing file. To remove this data:
1. Stop the instance:
# dsctl instance_name stop
2. Remove the /var/lib/dirsrv/slapd-instance_name/db/guardian file:
# rm /var/lib/dirsrv/slapd-instance_name/db/guardian
3. Start the instance:
# dsctl instance_name start

After you have enabled encryption and successfully imported the data, delete the LDIF file with the unencrypted data.
Directory Server does not encrypt the replication log file. To protect this data, store the replication log on an encrypted disk.
Data in the server's memory (RAM) is unencrypted and can be temporarily stored in swap partitions. To protect this data, configure encrypted swap space. A sketch of these clean-up steps follows this section.

Important: Even if you delete files that contain unencrypted data, this data can be restored under certain circumstances.
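The following is a minimal sketch of the clean-up points above. The shred command, the logical volume path, and the cipher settings are illustrative assumptions rather than part of the documented procedure, and shred is only reliable on file systems that overwrite data in place.

Securely delete the unencrypted export after the encrypted import has succeeded:
# shred -u /var/lib/dirsrv/slapd-instance_name/ldif/None-userroot-2022_01_24_10_28_27.ldif

Configure randomly keyed encrypted swap, for example with an /etc/crypttab entry such as:
swap  /dev/vg_system/lv_swap  /dev/urandom  swap,cipher=aes-xts-plain64,size=256
and a matching /etc/fstab line that mounts /dev/mapper/swap as swap.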
7.5. Updating the TLS certificates used for attribute encryption

Attribute encryption is based on the TLS certificate of the server. Follow this procedure to prevent attribute encryption from failing after you renew or replace the TLS certificate.

Prerequisites
You configured attribute encryption.
The TLS certificate will expire in the near future.

Procedure
1. Export the userRoot database:
# dsconf -D "cn=Directory Manager" ldap://server.example.com backend export -E userRoot
The server stores the export in an LDIF file in the /var/lib/dirsrv/slapd-instance_name/ldif/ directory. The -E option decrypts attributes that are already encrypted during the export.
2. Create a private key and a certificate signing request (CSR). Skip this step if you want to create them using an external utility.
If your host is reachable only by one name, enter:
# dsctl instance_name tls generate-server-cert-csr -s "CN=server.example.com,O=example_organization"
If your host is reachable by multiple names:
# dsctl instance_name tls generate-server-cert-csr -s "CN=server.example.com,O=example_organization" server.example.com server.example.net
If you specify the host names as the last parameter, the command adds the Subject Alternative Name (SAN) extension with the DNS:server.example.com, DNS:server.example.net entries to the CSR.
The string specified in the -s subject parameter must be a valid subject name according to RFC 1485. The CN field in the subject is required, and you must set it to one of the fully-qualified domain names (FQDN) of the server.
The command stores the CSR in the /etc/dirsrv/slapd-instance_name/Server-Cert.csr file.
3. Submit the CSR to the certificate authority (CA) to get a certificate issued. For further details, see your CA's documentation.
4. Import the server certificate issued by the CA to the NSS database:
If you created the private key using the dsctl tls generate-server-cert-csr command, enter:
# dsconf -D "cn=Directory Manager" ldap://server.example.com security certificate add --file /root/instance_name.crt --name "server-cert" --primary-cert
Remember the name of the certificate that you set in the --name certificate_nickname parameter. You require it in a later step.
If you created the private key using an external utility, import the server certificate and the private key:
# dsctl instance_name tls import-server-key-cert /root/server.crt /root/server.key
Note that the command requires you to specify the path to the server certificate first and then the path to the private key. This method always sets the nickname of the certificate to Server-Cert.
5. Import the CA certificate to the NSS database:
# dsconf -D "cn=Directory Manager" ldap://server.example.com security ca-certificate add --file /root/ca.crt --name "Example CA"
6. Set the trust flags of the CA certificate:
# dsconf -D "cn=Directory Manager" ldap://server.example.com security ca-certificate set-trust-flags "Example CA" --flags "CT,,"
This configures Directory Server to trust the CA for TLS encryption and certificate-based authentication.
7. Stop the instance:
# dsctl instance_name stop
8. Edit the /etc/dirsrv/slapd-instance_name/dse.ldif file, and remove the following entries including their attributes:
cn=AES,cn=encrypted attribute keys,cn=database_name,cn=ldbm database,cn=plugins,cn=config
cn=3DES,cn=encrypted attribute keys,cn=database_name,cn=ldbm database,cn=plugins,cn=config
Important: Remove the entries for all databases. If any entry that contains the nsSymmetricKey attribute is left in the /etc/dirsrv/slapd-instance_name/dse.ldif file, Directory Server will fail to start.
9. Import the LDIF file:
# dsctl instance_name ldif2db --encrypted userRoot /var/lib/dirsrv/slapd-instance_name/ldif/None-userroot-2022_01_24_10_28_27.ldif
The --encrypted parameter enables the script to encrypt attributes configured for encryption during the import.
10. Start the instance:
# dsctl instance_name start
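As an optional verification after the restart, you can list the contents of the instance's NSS database to confirm that the renewed server certificate and the CA certificate are present with the expected nicknames and trust flags; the path below assumes the /etc/dirsrv/slapd-instance_name/ layout used in this procedure:

# certutil -L -d /etc/dirsrv/slapd-instance_name/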
"dn: uid=demo_user,ou=People,dc=example,dc=com uid::Sf04P9nJWGU1qiW9JJCGRg==",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend export -E userRoot",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend attr-encrypt --add-attr telephoneNumber dc=example,dc=com",
"dsctl instance_name stop",
"dsctl instance_name ldif2db --encrypted userRoot /var/lib/dirsrv/ slapd-instance_name /ldif/ None-userroot-2022_01_24_10_28_27.ldif",
"dsctl instance_name start",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend export -E userRoot",
"dsctl instance_name ldif2db --encrypted userRoot /var/lib/dirsrv/ slapd-instance_name /ldif/ None-userroot-2022_01_24_10_28_27.ldif",
"dsctl instance_name stop",
"**rm /var/lib/dirsrv/slapd- instance_name /db/guardian``",
"dsctl instance_name start",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend export -E userRoot",
"dsctl instance_name tls generate-server-cert-csr -s \" CN=server.example.com,O=example_organization \"",
"dsctl instance_name tls generate-server-cert-csr -s \" CN=server.example.com,O=example_organization \" server.example.com server.example.net",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com security certificate add --file /root/instance_name.crt --name \" server-cert \" --primary-cert",
"dsctl instance_name tls import-server-key-cert /root/server.crt /root/server.key",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com security ca-certificate add --file /root/ca.crt --name \" Example CA \"",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com security ca-certificate set-trust-flags \" Example CA \" --flags \" CT,, \"",
"dsctl instance_name stop",
"dsctl instance_name ldif2db --encrypted userRoot /var/lib/dirsrv/ slapd-instance_name /ldif/ None-userroot-2022_01_24_10_28_27.ldif",
"dsctl instance_name start"
]
Source: https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_directory_databases/assembly_managing-attribute-encryption_configuring-directory-databases
Chapter 3. glance
The following chapter contains information about the configuration options in the glance service.
3.1. glance-api.conf
This section contains options for the /etc/glance/glance-api.conf file.
3.1.1. DEFAULT
The following table outlines the options available under the [DEFAULT] group in the /etc/glance/glance-api.conf file. Configuration option = Default value Type Description
allow_additional_image_properties = True boolean value Allow users to add additional/custom properties to images. Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties. In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties. By default, this configuration option is set to True and users are allowed to add additional properties. The number of additional properties that can be added to an image can be controlled via the image_property_quota configuration option. Possible values: True False Related options: image_property_quota Deprecated since: Ussuri Reason: This option is redundant. Control custom image property usage via the image_property_quota configuration option. This option is scheduled to be removed during the Victoria development cycle.
allow_anonymous_access = False boolean value Allow limited access to unauthenticated users. Assign a boolean to determine API access for unauthenticated users. When set to False, the API cannot be accessed by unauthenticated users. When set to True, unauthenticated users can access the API with read-only privileges. This however only applies when using ContextMiddleware. Possible values: True False Related options: None
api_limit_max = 1000 integer value Maximum number of results that could be returned by a request. As described in the help text of limit_param_default, some requests may return multiple results. The number of results to be returned is governed either by the limit parameter in the request or the limit_param_default configuration option. The value in either case cannot be greater than the absolute maximum defined by this configuration option. Anything greater than this value is trimmed down to the maximum value defined here. Note Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: limit_param_default
backlog = 4096 integer value Set the number of incoming connection requests. Provide a positive integer value to limit the number of requests in the backlog queue. The default queue size is 4096. An incoming connection to a TCP listener socket is queued before a connection can be established with the server. Setting the backlog for a TCP socket ensures a limited queue size for incoming traffic. Possible values: Positive integer Related options: None
bind_host = 0.0.0.0 host address value IP address to bind the glance servers to. Provide an IP address to bind the glance server to. The default value is 0.0.0.0. Edit this option to enable the server to listen on one particular IP address on the network card. This facilitates selection of a particular network interface for the server. Possible values: A valid IPv4 address A valid IPv6 address Related options: None
bind_port = None port value Port number on which the server will listen. Provide a valid port number to bind the server's socket to.
This port is then set to identify processes and forward network messages that arrive at the server. The default bind_port value for the API server is 9292 and for the registry server is 9191. Possible values: A valid port number (0 to 65535) Related options: None client_socket_timeout = 900 integer value Timeout for client connections' socket operations. Provide a valid integer value representing time in seconds to set the period of wait before an incoming connection can be closed. The default value is 900 seconds. The value zero implies wait forever. Possible values: Zero Positive integer Related options: None conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = openstack string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. default_publisher_id = image.localhost string value Default publisher_id for outgoing Glance notifications. This is the value that the notification driver will use to identify messages for events originating from the Glance service. Typically, this is the hostname of the instance that generated the message. Possible values: Any reasonable instance identifier, for example: image.host1 Related options: None delayed_delete = False boolean value Turn on/off delayed delete. Typically when an image is deleted, the glance-api service puts the image into deleted state and deletes its data at the same time. Delayed delete is a feature in Glance that delays the actual deletion of image data until a later point in time (as determined by the configuration option scrub_time ). When delayed delete is turned on, the glance-api service puts the image into pending_delete state upon deletion and leaves the image data in the storage backend for the image scrubber to delete at a later time. The image scrubber will move the image into deleted state upon successful deletion of image data. Note When delayed delete is turned on, image scrubber MUST be running as a periodic task to prevent the backend storage from filling up with undesired usage. Possible values: True False Related options: scrub_time wakeup_time scrub_pool_size digest_algorithm = sha256 string value Digest algorithm to use for digital signature. Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used. To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms . Examples are sha1 , sha256 , and sha512 . Note digest_algorithm is not related to Glance's image signing and verification. 
It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation. Possible values: An OpenSSL message digest algorithm identifier Relation options: None disabled_notifications = [] list value List of notifications to be disabled. Specify a list of notifications that should not be emitted. A notification can be given either as a notification type to disable a single event notification, or as a notification group prefix to disable all event notifications within a group. Possible values: A comma-separated list of individual notification types or notification groups to be disabled. Currently supported groups: image image.member task metadef_namespace metadef_object metadef_property metadef_resource_type metadef_tag Related options: None enabled_backends = None dict value Key:Value pair of store identifier and store type. In case of multiple backends should be separated using comma. enabled_import_methods = ['glance-direct', 'web-download', 'copy-image'] list value List of enabled Image Import Methods enforce_secure_rbac = False boolean value Enforce API access based on common persona definitions used across OpenStack. Enabling this option formalizes project-specific read/write operations, like creating private images or updating the status of shared image, behind the member role. It also formalizes a read-only variant useful for project-specific API operations, like listing private images in a project, behind the reader role. Operators should take an opportunity to understand glance's new image policies, audit assignments in their deployment, and update permissions using the default roles in keystone (e.g., admin , member , and reader ). Related options: [oslo_policy]/enforce_new_defaults Deprecated since: Wallaby Reason: This option has been introduced to require operators to opt into enforcing authorization based on common RBAC personas, which is EXPERIMENTAL as of the Wallaby release. This behavior will be the default and STABLE in a future release, allowing this option to be removed. executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. hashing_algorithm = sha512 string value Secure hashing algorithm used for computing the os_hash_value property. This option configures the Glance "multihash", which consists of two image properties: the os_hash_algo and the os_hash_value . The os_hash_algo will be populated by the value of this configuration option, and the os_hash_value will be populated by the hexdigest computed when the algorithm is applied to the uploaded or imported image data. The value must be a valid secure hash algorithm name recognized by the python hashlib library. You can determine what these are by examining the hashlib.algorithms_available data member of the version of the library being used in your Glance installation. For interoperability purposes, however, we recommend that you use the set of secure hash names supplied by the hashlib.algorithms_guaranteed data member because those algorithms are guaranteed to be supported by the hashlib library on all platforms. Thus, any image consumer using hashlib locally should be able to verify the os_hash_value of the image. The default value of sha512 is a performant secure hash algorithm. If this option is misconfigured, any attempts to store image data will fail. For that reason, we recommend using the default value. 
Possible values: Any secure hash algorithm name recognized by the Python hashlib library Related options: None
http_keepalive = True boolean value Set keep alive option for HTTP over TCP. Provide a boolean value to determine sending of keep alive packets. If set to False, the server returns the header "Connection: close". If set to True, the server returns a "Connection: Keep-Alive" in its responses. This enables retention of the same TCP connection for HTTP conversations instead of opening a new one with each new request. This option must be set to False if the client socket connection needs to be closed explicitly after the response is received and read successfully by the client. Possible values: True False Related options: None
image_cache_dir = None string value Base directory for image cache. This is the location where image data is cached and served out of. All cached images are stored directly under this directory. This directory also contains three subdirectories, namely, incomplete, invalid and queue. The incomplete subdirectory is the staging area for downloading images. An image is first downloaded to this directory. When the image download is successful it is moved to the base directory. However, if the download fails, the partially downloaded image file is moved to the invalid subdirectory. The queue subdirectory is used for queuing images for download. This is used primarily by the cache-prefetcher, which can be scheduled as a periodic task like cache-pruner and cache-cleaner, to cache images ahead of their usage. Upon receiving the request to cache an image, Glance touches a file in the queue directory with the image id as the file name. The cache-prefetcher, when running, polls for the files in the queue directory and starts downloading them in the order they were created. When the download is successful, the zero-sized file is deleted from the queue directory. If the download fails, the zero-sized file remains and it'll be retried the next time cache-prefetcher runs. Possible values: A valid path Related options: image_cache_sqlite_db
image_cache_driver = sqlite string value The driver to use for image cache management. This configuration option provides the flexibility to choose between the different image-cache drivers available. An image-cache driver is responsible for providing the essential functions of image-cache like write images to/read images from cache, track age and usage of cached images, provide a list of cached images, fetch size of the cache, queue images for caching and clean up the cache, etc. The essential functions of a driver are defined in the base class glance.image_cache.drivers.base.Driver. All image-cache drivers (existing and prospective) must implement this interface. Currently available drivers are sqlite and xattr. These drivers primarily differ in the way they store the information about cached images: The sqlite driver uses a sqlite database (which sits on every glance node locally) to track the usage of cached images. The xattr driver uses the extended attributes of files to store this information. It also requires a filesystem that sets atime on the files when accessed. Possible values: sqlite xattr Related options: None
image_cache_max_size = 10737418240 integer value The upper limit on cache size, in bytes, after which the cache-pruner cleans up the image cache. Note This is just a threshold for cache-pruner to act upon. It is NOT a hard limit beyond which the image cache would never grow.
In fact, depending on how often the cache-pruner runs and how quickly the cache fills, the image cache can far exceed the size specified here very easily. Hence, care must be taken to appropriately schedule the cache-pruner and in setting this limit. Glance caches an image when it is downloaded. Consequently, the size of the image cache grows over time as the number of downloads increases. To keep the cache size from becoming unmanageable, it is recommended to run the cache-pruner as a periodic task. When the cache pruner is kicked off, it compares the current size of image cache and triggers a cleanup if the image cache grew beyond the size specified here. After the cleanup, the size of cache is less than or equal to size specified here. Possible values: Any non-negative integer Related options: None image_cache_sqlite_db = cache.db string value The relative path to sqlite file database that will be used for image cache management. This is a relative path to the sqlite file database that tracks the age and usage statistics of image cache. The path is relative to image cache base directory, specified by the configuration option image_cache_dir . This is a lightweight database with just one table. Possible values: A valid relative path to sqlite file database Related options: image_cache_dir image_cache_stall_time = 86400 integer value The amount of time, in seconds, an incomplete image remains in the cache. Incomplete images are images for which download is in progress. Please see the description of configuration option image_cache_dir for more detail. Sometimes, due to various reasons, it is possible the download may hang and the incompletely downloaded image remains in the incomplete directory. This configuration option sets a time limit on how long the incomplete images should remain in the incomplete directory before they are cleaned up. Once an incomplete image spends more time than is specified here, it'll be removed by cache-cleaner on its run. It is recommended to run cache-cleaner as a periodic task on the Glance API nodes to keep the incomplete images from occupying disk space. Possible values: Any non-negative integer Related options: None image_location_quota = 10 integer value Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: None image_member_quota = 128 integer value Maximum number of image members per image. This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited. Related options: None image_property_quota = 128 integer value Maximum number of properties allowed on an image. This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. Note This won't have any impact if additional properties are disabled. Please refer to allow_additional_image_properties . Related options: allow_additional_image_properties image_size_cap = 1099511627776 integer value Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures. 
And, setting this to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and storage capacity available. Possible values: Any positive number less than or equal to 9223372036854775808 image_tag_quota = 128 integer value Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: None `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. limit_param_default = 25 integer value The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. NOTES: The value of this configuration option may not be greater than the value specified by api_limit_max . Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: api_limit_max location_strategy = location_order string value Strategy to determine the preference order of image locations. This configuration option indicates the strategy to determine the order in which an image's locations must be accessed to serve the image's data. Glance then retrieves the image data from the first responsive active location it finds in this list. This option takes one of two possible values location_order and store_type . The default value is location_order , which suggests that image data be served by using locations in the order they are stored in Glance. The store_type value sets the image location preference based on the order in which the storage backends are listed as a comma separated list for the configuration option store_type_preference . Possible values: location_order store_type Related options: store_type_preference log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. 
The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_header_line = 16384 integer value Maximum line size of message headers. Provide an integer value representing a length to limit the size of message headers. The default value is 16384. Note max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). However, it is to be kept in mind that larger values for max_header_line would flood the logs. Setting max_header_line to 0 sets no limit for the line size of message headers. Possible values: 0 Positive integer Related options: None max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_request_id_length = 64 integer value Limit the request ID length. Provide an integer value to limit the length of the request ID to the specified length. The default value is 64. Users can change this to any ineteger value between 0 and 16384 however keeping in mind that a larger value may flood the logs. Possible values: Integer value between 0 and 16384 Related options: None metadata_encryption_key = None string value AES key for encrypting store location metadata. Provide a string value representing the AES cipher to use for encrypting Glance store metadata. Note The AES key to use must be set to a random string of length 16, 24 or 32 bytes. Possible values: String value representing a valid AES key Related options: None node_staging_uri = file:///tmp/staging/ string value The URL provides location where the temporary data will be stored This option is for Glance internal use only. Glance will save the image data uploaded by the user to staging endpoint during the image import process. This option does not change the staging API endpoint by any means. Note It is discouraged to use same path as [task]/work_dir Note file://<absolute-directory-path> is the only option api_image_import flow will support for now. Note The staging path must be on shared filesystem available to all Glance API nodes. 
Possible values: String starting with file:// followed by absolute FS path Related options: [task]/work_dir property_protection_file = None string value The location of the property protection file. Provide a valid path to the property protection file which contains the rules for property protections and the roles/policies associated with them. A property protection file, when set, restricts the Glance image properties to be created, read, updated and/or deleted by a specific set of users that are identified by either roles or policies. If this configuration option is not set, by default, property protections won't be enforced. If a value is specified and the file is not found, the glance-api service will fail to start. More information on property protections can be found at: https://docs.openstack.org/glance/latest/admin/property-protections.html Possible values: Empty string Valid path to the property protection configuration file Related options: property_protection_rule_format property_protection_rule_format = roles string value Rule format for property protection. Provide the desired way to set property protection on Glance image properties. The two permissible values are roles and policies . The default value is roles . If the value is roles , the property protection file must contain a comma separated list of user roles indicating permissions for each of the CRUD operations on each property being protected. If set to policies , a policy defined in policy.yaml is used to express property protections for each of the CRUD operations. Examples of how property protections are enforced based on roles or policies can be found at: https://docs.openstack.org/glance/latest/admin/property-protections.html#examples Possible values: roles policies Related options: property_protection_file public_endpoint = None string value Public url endpoint to use for Glance versions response. This is the public url endpoint that will appear in the Glance "versions" response. If no value is specified, the endpoint that is displayed in the version's response is that of the host running the API service. Change the endpoint to represent the proxy URL if the API service is running behind a proxy. If the service is running behind a load balancer, add the load balancer's URL for this value. Possible values: None Proxy URL Load balancer URL Related options: None publish_errors = False boolean value Enables or disables publication of error events. pydev_worker_debug_host = None host address value Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: Valid hostname Valid IP address Related options: None pydev_worker_debug_port = 5678 port value Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: A valid port number Related options: None rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. 
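To tie several of the [DEFAULT] options described above together, a minimal /etc/glance/glance-api.conf excerpt might look like the following sketch; the specific values, including the bind address and cache path, are illustrative assumptions rather than recommendations:

[DEFAULT]
# Bind the API to one interface and the standard API port (bind_host, bind_port)
bind_host = 192.0.2.10
bind_port = 9292
# Paging: per-request default and absolute ceiling (limit_param_default, api_limit_max)
limit_param_default = 25
api_limit_max = 1000
# Local image cache settings (image_cache_dir, image_cache_driver, image_cache_max_size)
image_cache_dir = /var/lib/glance/image-cache
image_cache_driver = sqlite
image_cache_max_size = 10737418240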
rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_ping_enabled = False boolean value Add an endpoint to answer to ping calls. Endpoint is named oslo_rpc_server_ping rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. scrub_pool_size = 1 integer value The size of thread pool to be used for scrubbing images. When there are a large number of images to scrub, it is beneficial to scrub images in parallel so that the scrub queue stays in control and the backend storage is reclaimed in a timely fashion. This configuration option denotes the maximum number of images to be scrubbed in parallel. The default value is one, which signifies serial scrubbing. Any value above one indicates parallel scrubbing. Possible values: Any non-zero positive integer Related options: delayed_delete scrub_time = 0 integer value The amount of time, in seconds, to delay image scrubbing. When delayed delete is turned on, an image is put into pending_delete state upon deletion until the scrubber deletes its image data. Typically, soon after the image is put into pending_delete state, it is available for scrubbing. However, scrubbing can be delayed until a later point using this configuration option. This option denotes the time period an image spends in pending_delete state before it is available for scrubbing. It is important to realize that this has storage implications. The larger the scrub_time , the longer the time to reclaim backend storage from deleted images. Possible values: Any non-negative integer Related options: delayed_delete secure_proxy_ssl_header = None string value The HTTP header used to determine the scheme for the original request, even if it was removed by an SSL terminating proxy. Typical value is "HTTP_X_FORWARDED_PROTO". show_image_direct_url = False boolean value Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. This image location is shown under the image property direct_url . When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False . Possible values: True False Related options: show_multiple_locations location_strategy show_multiple_locations = False boolean value Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration opt location_strategy . The image locations are shown under the image property locations . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. 
Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! See https://wiki.openstack.org/wiki/OSSN/OSSN-0065 for more information. If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False . Possible values: True False Related options: show_image_direct_url location_strategy Deprecated since: Newton *Reason:*Use of this option, deprecated since Newton, is a security risk and will be removed once we figure out a way to satisfy those use cases that currently require it. An earlier announcement that the same functionality can be achieved with greater granularity by using policies is incorrect. You cannot work around this option via policy configuration at the present time, though that is the direction we believe the fix will take. Please keep an eye on the Glance release notes to stay up to date on progress in addressing this issue. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. tcp_keepidle = 600 integer value Set the wait time before a connection recheck. Provide a positive integer value representing time in seconds which is set as the idle wait time before a TCP keep alive packet can be sent to the host. The default value is 600 seconds. Setting tcp_keepidle helps verify at regular intervals that a connection is intact and prevents frequent TCP connection reestablishment. Possible values: Positive integer value representing time in seconds Related options: None transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. user_storage_quota = 0 string value Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B , KB , MB , GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. Negative values are invalid and result in errors. 
Possible values: A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: None watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. worker_self_reference_url = None string value The URL to this worker. If this is set, other glance workers will know how to contact this one directly if needed. For image import, a single worker stages the image and other workers need to be able to proxy the import request to the right one. If unset, this will be considered to be public_endpoint , which normally would be set to the same value on all workers, effectively disabling the proxying behavior. Possible values: A URL by which this worker is reachable from other workers Related options: public_endpoint workers = None integer value Number of Glance worker processes to start. Provide a non-negative integer value to set the number of child process workers to service requests. By default, the number of CPUs available is set as the value for workers limited to 8. For example if the processor count is 6, 6 workers will be used, if the processor count is 24 only 8 workers will be used. The limit will only apply to the default value, if 24 workers is configured, 24 is used. Each worker process is made to listen on the port set in the configuration file and contains a greenthread pool of size 1000. Note Setting the number of workers to zero, triggers the creation of a single API process with a greenthread pool of size 1000. Possible values: 0 Positive integer value (typically equal to the number of CPUs) Related options: None 3.1.2. cinder The following table outlines the options available under the [cinder] group in the /etc/glance/glance-api.conf file. Table 3.1. cinder Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev3::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. 
Possible values: A string of of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_catalog_info cinder_enforce_multipath = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. Possible values: True or False Related options: cinder_use_multipath cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. Possible values: A positive integer Related options: None cinder_mount_point_base = /var/lib/glance/mnt string value Directory where the NFS volume is mounted on the glance node. Possible values: A string representing absolute path of mount point. cinder_os_region_name = None string value Region name to lookup cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. 
If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified, the project of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_use_multipath = False boolean value Flag to identify mutipath is supported or not in the deployment. Set it to False if multipath is not supported. Possible values: True or False Related options: cinder_enforce_multipath cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None Note You cannot use an encrypted volume_type associated with an NFS backend. An encrypted volume stored on an NFS backend will raise an exception whenever glance_store tries to write or access image data stored in that volume. Consult your Cinder administrator to determine an appropriate volume_type. rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: Path to the rootwrap config file Related options: None 3.1.3. cors The following table outlines the options available under the [cors] group in the /etc/glance/glance-api.conf file. Table 3.2. 
cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['Content-MD5', 'X-Image-Meta-Checksum', 'X-Storage-Token', 'Accept-Encoding', 'X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Image-Meta-Checksum', 'X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 3.1.4. database The following table outlines the options available under the [database] group in the /etc/glance/glance-api.conf file. Table 3.3. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection.
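A minimal [cors] sketch that permits a single dashboard origin; the origin reuses the example URL above and the remaining values simply restate the defaults.

[cors]
# Allow the dashboard at this origin to call the Image API.
allowed_origin = https://horizon.example.com
allow_credentials = True
max_age = 3600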
slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. use_tpool = False boolean value Enable the experimental use of thread pooling for all DB API calls 3.1.5. file The following table outlines the options available under the [file] group in the /etc/glance/glance-api.conf file. Table 3.4. file Configuration option = Default value Type Description filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. 
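For example, a [database] section pointing glance-api at a MySQL server could look like the sketch below; the connection URL is a placeholder, and the remaining values restate defaults from the table above.

[database]
# Placeholder connection string: substitute your database host, credentials, and schema.
connection = mysql+pymysql://glance:<password>@controller.example.com/glance
max_pool_size = 5
max_retries = 10
retry_interval = 10
connection_recycle_time = 3600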
The users running the services that are intended to be given access can be made members of the group that owns the created files. Assigning a value less than or equal to zero for this configuration option signifies that no changes are made to the default permissions. This value will be decoded as an octal digit. For more information, please refer to the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. Once this option is set, it is used for new images created afterward only - previously existing images are not affected. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None filesystem_thin_provisioning = False boolean value Enable or disable thin provisioning in this backend. When this option is enabled, sequences of null bytes are not actually written to the filesystem; the resulting holes are automatically interpreted by the filesystem as null bytes and do not consume storage. Enabling this feature also speeds up image upload and saves network traffic, in addition to saving space in the backend, because null byte sequences are not sent over the network. Possible Values: True False Related options: None 3.1.6. glance.store.http.store The following table outlines the options available under the [glance.store.http.store] group in the /etc/glance/glance-api.conf file. Table 3.5. glance.store.http.store Configuration option = Default value Type Description http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option.
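To illustrate the multi-directory form described above, the sketch below spreads the [file] backend across two example directories using the <directory>:<priority> format; the paths, priorities, and permission value are assumptions.

[file]
# The directory with the higher priority (200) is preferred while it has free space.
filesystem_store_datadirs = /var/lib/glance/images/fast:200
filesystem_store_datadirs = /var/lib/glance/images/slow:100
# Example permissions (decoded as octal) so service accounts in the owning group can read images.
filesystem_store_file_perm = 640

Because filesystem_store_datadir and filesystem_store_datadirs must not be set together, only the multi-valued option appears in this sketch.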
Possible values: True False Related options: https_ca_certificates_file 3.1.7. glance.store.rbd.store The following table outlines the options available under the [glance.store.rbd.store] group in the /etc/glance/glance-api.conf file. Table 3.6. glance.store.rbd.store Configuration option = Default value Type Description rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. The default pool that is used is images . More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rbd_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. 
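As an illustration of the [glance.store.http.store] options above, this sketch routes outbound requests through a proxy and pins a CA bundle; the proxy addresses and the CA path are placeholders.

[glance.store.http.store]
# Example proxies for the http and https schemes.
http_proxy_information = http:10.0.0.1:3128,https:10.0.0.1:1080
# When a CA bundle is set, https_insecure is ignored.
https_ca_certificates_file = /etc/pki/tls/certs/ca-bundle.crt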
This configuration option enable the feature of not really write null byte sequences on the RBD backend, the holes who can appear will automatically be interpreted by Ceph as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network trafic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None 3.1.8. glance.store.s3.store The following table outlines the options available under the [glance.store.s3.store] group in the /etc/glance/glance-api.conf file. Table 3.7. glance.store.s3.store Configuration option = Default value Type Description s3_store_access_key = None string value The S3 query token access key. This configuration option takes the access key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is the access key for a user with appropriate privileges Related Options: s3_store_host s3_store_secret_key s3_store_bucket = None string value The S3 bucket to be used to store the Glance data. This configuration option specifies where the glance images will be stored in the S3. If s3_store_create_bucket_on_put is set to true, it will be created automatically even if the bucket does not exist. Possible values: Any string value Related Options: s3_store_create_bucket_on_put s3_store_bucket_url_format s3_store_bucket_url_format = auto string value The S3 calling format used to determine the object. This configuration option takes access model that is used to specify the address of an object in an S3 bucket. NOTE: In path -style, the endpoint for the object looks like https://s3.amazonaws.com/bucket/example.img . And in virtual -style, the endpoint for the object looks like https://bucket.s3.amazonaws.com/example.img . If you do not follow the DNS naming convention in the bucket name, you can get objects in the path style, but not in the virtual style. Possible values: Any string value of auto , virtual , or path Related Options: s3_store_bucket s3_store_create_bucket_on_put = False boolean value Determine whether S3 should create a new bucket. This configuration option takes boolean value to indicate whether Glance should create a new bucket to S3 if it does not exist. Possible values: Any Boolean value Related Options: None s3_store_host = None string value The host where the S3 server is listening. This configuration option sets the host of the S3 or S3 compatible storage Server. This option is required when using the S3 storage backend. The host can contain a DNS name (e.g. s3.amazonaws.com, my-object-storage.com) or an IP address (127.0.0.1). Possible values: A valid DNS name A valid IPv4 address Related Options: s3_store_access_key s3_store_secret_key s3_store_large_object_chunk_size = 10 integer value What multipart upload part size, in MB, should S3 use when uploading parts. This configuration option takes the image split size in MB for Multipart Upload. Note: You can only split up to 10,000 images. Possible values: Any positive integer value (must be greater than or equal to 5M) Related Options: s3_store_large_object_size s3_store_thread_pools s3_store_large_object_size = 100 integer value What size, in MB, should S3 start chunking image files and do a multipart upload in S3. This configuration option takes a threshold in MB to determine whether to upload the image to S3 as is or to split it (Multipart Upload). 
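A minimal [glance.store.rbd.store] sketch, assuming a Cephx client named glance and the default images pool; the Ceph configuration path and user name are assumptions about your cluster.

[glance.store.rbd.store]
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
# 8 MB chunks (the default); keep this a power of two for best performance.
rbd_store_chunk_size = 8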
Note: You can only split up to 10,000 images. Possible values: Any positive integer value Related Options: s3_store_large_object_chunk_size s3_store_thread_pools s3_store_secret_key = None string value The S3 query token secret key. This configuration option takes the secret key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is a secret key corresponding to the access key specified using the s3_store_host option Related Options: s3_store_host s3_store_access_key s3_store_thread_pools = 10 integer value The number of thread pools to perform a multipart upload in S3. This configuration option takes the number of thread pools when performing a Multipart Upload. Possible values: Any positive integer value Related Options: s3_store_large_object_size s3_store_large_object_chunk_size 3.1.9. glance.store.swift.store The following table outlines the options available under the [glance.store.swift.store] group in the /etc/glance/glance-api.conf file. Table 3.8. glance.store.swift.store Configuration option = Default value Type Description default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. 
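Putting the S3 options above together, a sketch for an S3-compatible endpoint might look as follows; the host, bucket, and credentials are placeholders.

[glance.store.s3.store]
s3_store_host = s3.amazonaws.com
s3_store_access_key = <access-key>
s3_store_secret_key = <secret-key>
s3_store_bucket = glance-images
s3_store_create_bucket_on_put = True
# Images over 100 MB are uploaded in 10 MB parts (the defaults).
s3_store_large_object_size = 100
s3_store_large_object_chunk_size = 10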
swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. 
The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. 
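To make the segmentation arithmetic above concrete, the sketch below simply restates the default thresholds; with these values, a 6144 MB image exceeds the 5120 MB limit and is uploaded as 30 segments of 200 MB plus one segment of 144 MB, tied together by a manifest.

[glance.store.swift.store]
# Defaults restated: segment images larger than 5 GB into chunks of at most 200 MB.
swift_store_large_object_size = 5120
swift_store_large_object_chunk_size = 200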
This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. 
Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. Note This option is considered only when swift_store_multi_tenant is set to True Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: This is required only when the configuration option swift_buffer_on_upload is set to True. This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size 3.1.10. glance.store.vmware_datastore.store The following table outlines the options available under the [glance.store.vmware_datastore.store] group in the /etc/glance/glance-api.conf file. Table 3.9. glance.store.vmware_datastore.store Configuration option = Default value Type Description vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Cerificate Authority File to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. 
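Tying the Swift options above together, the following single-tenant sketch keeps credentials out of the database by referencing an external account file; the file path is an example, and ref1 is the documented default reference name.

[glance.store.swift.store]
swift_store_config_file = /etc/glance/glance-swift.conf
default_swift_reference = ref1
swift_store_container = glance
swift_store_create_container_on_put = True
# Must stay False when swift_store_config_file is set.
swift_store_multi_tenant = False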
This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: * None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option . Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.1.11. 
glance_store The following table outlines the options available under the [glance_store] group in the /etc/glance/glance-api.conf file. Table 3.10. glance_store Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev3::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: A string of of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_catalog_info cinder_enforce_multipath = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. Possible values: True or False Related options: cinder_use_multipath cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. Possible values: A positive integer Related options: None cinder_mount_point_base = /var/lib/glance/mnt string value Directory where the NFS volume is mounted on the glance node. Possible values: A string representing absolute path of mount point. cinder_os_region_name = None string value Region name to lookup cinder service from the service catalog. 
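Referring back to the [glance.store.vmware_datastore.store] options in Table 3.9, a sketch for a vCenter-backed store might look as follows; the server address, credentials, CA path, and datastore names are all placeholders.

[glance.store.vmware_datastore.store]
vmware_server_host = vcenter.example.com
vmware_server_username = administrator@vsphere.local
vmware_server_password = <password>
vmware_ca_file = /etc/ssl/certs/vcenter-ca.pem
# Two example datastores; the entry with weight 200 is preferred while it has free space.
vmware_datastores = dc1:datastore1:200
vmware_datastores = dc1:datastore2:100
vmware_store_image_dir = /openstack_glance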
This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified, the project of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_use_multipath = False boolean value Flag to identify mutipath is supported or not in the deployment. Set it to False if multipath is not supported. Possible values: True or False Related options: cinder_enforce_multipath cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. 
Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None Note You cannot use an encrypted volume_type associated with an NFS backend. An encrypted volume stored on an NFS backend will raise an exception whenever glance_store tries to write or access image data stored in that volume. Consult your Cinder administrator to determine an appropriate volume_type. default_backend = None string value The store identifier for the default backend in which data will be stored. The value must be defined as one of the keys in the dict defined by the enabled_backends configuration option in the DEFAULT configuration group. If a value is not defined for this option: the consuming service may refuse to start store_add calls that do not specify a specific backend will raise a glance_store.exceptions.UnknownScheme exception Related Options: enabled_backends default_store = file string value The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses file as the default scheme to store images with the file store. Note The value given for this configuration option must be a valid scheme for a store registered with the stores configuration option. Possible values: file filesystem http https swift swift+http swift+https swift+config rbd cinder vsphere s3 Related Options: stores Deprecated since: Rocky Reason: This option is deprecated against new config option ``default_backend`` which acts similar to ``default_store`` config option. This option is scheduled for removal in the U development cycle. default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. 
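As a sketch of the cinder-related options in this [glance_store] group, the fragment below enables multipath transfers and selects a dedicated volume type; the region name and the volume type glance-images are examples, and the volume type must already exist in cinder.

[glance_store]
cinder_catalog_info = volumev3::publicURL
cinder_os_region_name = RegionOne
# Hypothetical volume type created by the cinder administrator for image volumes.
cinder_volume_type = glance-images
cinder_use_multipath = True
cinder_enforce_multipath = True
cinder_state_transition_timeout = 300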
Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access to could be made a member of the group that owns the files created. Assigning a value less then or equal to zero for this configuration option signifies that no changes be made to the default permissions. This value will be decoded as an octal digit. For more information, please refer the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. Once this option is set, it is used for new images created afterward only - previously existing images are not affected. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None filesystem_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. This configuration option enable the feature of not really write null byte sequences on the filesystem, the holes who can appear will automatically be interpreted by the filesystem as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network trafic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. 
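For comparison, a legacy single-store [glance_store] layout using the filesystem backend looks roughly like the sketch below; note that stores and default_store are deprecated in favor of enabled_backends and default_backend, as described above, and the values shown restate the documented defaults.

[glance_store]
# Deprecated single-store style, still accepted: file backend plus read-only http.
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images
filesystem_store_chunk_size = 65536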
This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. 
Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. The default pool that is used is images . More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rbd_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. This configuration option enable the feature of not really write null byte sequences on the RBD backend, the holes who can appear will automatically be interpreted by Ceph as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network trafic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: Path to the rootwrap config file Related options: None s3_store_access_key = None string value The S3 query token access key. This configuration option takes the access key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is the access key for a user with appropriate privileges Related Options: s3_store_host s3_store_secret_key s3_store_bucket = None string value The S3 bucket to be used to store the Glance data. This configuration option specifies where the glance images will be stored in the S3. If s3_store_create_bucket_on_put is set to true, it will be created automatically even if the bucket does not exist. Possible values: Any string value Related Options: s3_store_create_bucket_on_put s3_store_bucket_url_format s3_store_bucket_url_format = auto string value The S3 calling format used to determine the object. This configuration option takes access model that is used to specify the address of an object in an S3 bucket. NOTE: In path -style, the endpoint for the object looks like https://s3.amazonaws.com/bucket/example.img . And in virtual -style, the endpoint for the object looks like https://bucket.s3.amazonaws.com/example.img . If you do not follow the DNS naming convention in the bucket name, you can get objects in the path style, but not in the virtual style. 
Possible values: Any string value of auto , virtual , or path Related Options: s3_store_bucket s3_store_create_bucket_on_put = False boolean value Determine whether S3 should create a new bucket. This configuration option takes boolean value to indicate whether Glance should create a new bucket to S3 if it does not exist. Possible values: Any Boolean value Related Options: None s3_store_host = None string value The host where the S3 server is listening. This configuration option sets the host of the S3 or S3 compatible storage Server. This option is required when using the S3 storage backend. The host can contain a DNS name (e.g. s3.amazonaws.com, my-object-storage.com) or an IP address (127.0.0.1). Possible values: A valid DNS name A valid IPv4 address Related Options: s3_store_access_key s3_store_secret_key s3_store_large_object_chunk_size = 10 integer value What multipart upload part size, in MB, should S3 use when uploading parts. This configuration option takes the image split size in MB for Multipart Upload. Note: You can only split up to 10,000 images. Possible values: Any positive integer value (must be greater than or equal to 5M) Related Options: s3_store_large_object_size s3_store_thread_pools s3_store_large_object_size = 100 integer value What size, in MB, should S3 start chunking image files and do a multipart upload in S3. This configuration option takes a threshold in MB to determine whether to upload the image to S3 as is or to split it (Multipart Upload). Note: You can only split up to 10,000 images. Possible values: Any positive integer value Related Options: s3_store_large_object_chunk_size s3_store_thread_pools s3_store_secret_key = None string value The S3 query token secret key. This configuration option takes the secret key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is a secret key corresponding to the access key specified using the s3_store_host option Related Options: s3_store_host s3_store_access_key s3_store_thread_pools = 10 integer value The number of thread pools to perform a multipart upload in S3. This configuration option takes the number of thread pools when performing a Multipart Upload. Possible values: Any positive integer value Related Options: s3_store_large_object_size s3_store_large_object_chunk_size stores = ['file', 'http'] list value List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are file and http . Possible values: A comma separated list that could include: file http swift rbd cinder vmware s3 Related Options: default_store Deprecated since: Rocky Reason: This option is deprecated against new config option ``enabled_backends`` which helps to configure multiple backend stores of different schemes. This option is scheduled for removal in the U development cycle. swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. 
Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. 
Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. 
For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. 
When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. Note This option is considered only when swift_store_multi_tenant is set to True Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: This is required only when the configuration option swift_buffer_on_upload is set to True. 
This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority File to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with the highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option . Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server.
This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.1.12. image_format The following table outlines the options available under the [image_format] group in the /etc/glance/glance-api.conf file. Table 3.11. image_format Configuration option = Default value Type Description container_formats = ['ami', 'ari', 'aki', 'bare', 'ovf', 'ova', 'docker', 'compressed'] list value Supported values for the container_format image attribute disk_formats = ['ami', 'ari', 'aki', 'vhd', 'vhdx', 'vmdk', 'raw', 'qcow2', 'vdi', 'iso', 'ploop'] list value Supported values for the disk_format image attribute vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] list value A list of strings describing the VMDK create-type subformats that will be allowed. This is recommended to only include single-file-with-sparse-header variants to avoid potential host file exposure due to processing named extents. If this list is empty, then no VMDK image types are allowed. Note that this is currently only checked during image conversion (if enabled), and limits the types of VMDK images we will convert from. 3.1.13. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/glance/glance-api.conf file. Table 3.12. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release.
Deprecated since: Queens Reason: The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = internal string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" (default) or "admin". keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x.
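Several of the memcache_* options above only take effect together. As a hedged illustration (the endpoint, server, and secret values are placeholders, not recommendations), a [keystone_authtoken] fragment that enables memcached-backed token caching using the options described in this table could look like:

```ini
# Illustrative values only -- replace the endpoint, servers, and secret for your cloud.
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_type = password
memcached_servers = controller:11211
memcache_security_strategy = ENCRYPT   # authenticate and encrypt cached token data
memcache_secret_key = CHANGE_ME        # required when a security strategy is defined
memcache_pool_maxsize = 10
token_cache_time = 300                 # seconds; -1 disables caching
```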
memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 3.1.14. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/glance/glance-api.conf file. Table 3.13. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 3.1.15. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/glance/glance-api.conf file. Table 3.14. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. 
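The reconnect options in this table interact: the pause starts at connection_retry_interval, grows by connection_retry_backoff after each unsuccessful attempt, and is capped by connection_retry_interval_max. A hedged sketch using the documented defaults (not a tuning recommendation):

```ini
# Placeholder values -- tune for your message bus.
[oslo_messaging_amqp]
connection_retry_interval = 1       # initial pause before reconnecting (seconds)
connection_retry_backoff = 2        # seconds added after each unsuccessful attempt
connection_retry_interval_max = 30  # upper bound on the growing pause
```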
connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. 
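pre_settled is a multi-valued option. As a hedged example (assuming the usual oslo.config convention of repeating the key once per value in the configuration file), sending casts and replies pre-settled while keeping RPC calls acknowledged would look like:

```ini
# Illustrative only -- pre-settled messages may be silently discarded on delivery failure.
[oslo_messaging_amqp]
pre_settled = rpc-cast
pre_settled = rpc-reply
```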
rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 3.1.16. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/glance/glance-api.conf file. Table 3.15. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. 
Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate `ssl_client_cert_file = ` string value Client certificate PEM file used for authentication. `ssl_client_key_file = ` string value Client key PEM file used for authentication. `ssl_client_key_password = ` string value Client key password file used for authentication. 3.1.17. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/glance/glance-api.conf file. Table 3.16. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 3.1.18. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/glance/glance-api.conf file. Table 3.17. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception will be used to loop for a timeout to give the sender a chance to recover. This flag is deprecated and it will not be possible to deactivate this functionality anymore enable_cancel_on_failover = False boolean value Enable x-cancel-on-ha-failover flag so that rabbitmq server will cancel and notify consumers when the queue is down heartbeat_in_pthread = True boolean value Run the health check heartbeat thread through a native python thread by default. If this option is equal to False then the health check heartbeat will inherit the execution model from the parent process. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold we check the heartbeat.
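The two heartbeat options are read together: checks run roughly every heartbeat_timeout_threshold / heartbeat_rate seconds. A hedged sketch using the documented defaults (shown for illustration, not as a tuning recommendation):

```ini
[oslo_messaging_rabbit]
heartbeat_timeout_threshold = 60  # broker considered down after 60 s without keep-alive
heartbeat_rate = 2                # check twice within the threshold, i.e. roughly every 30 s
```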
heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait a missing client before abandoning to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to backoff for between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 3.1.19. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/glance/glance-api.conf file. Table 3.18. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. 3.1.20. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/glance/glance-api.conf file. Table 3.19. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = False boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. 
It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path to client key file for REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 3.1.21. paste_deploy The following table outlines the options available under the [paste_deploy] group in the /etc/glance/glance-api.conf file. Table 3.20. paste_deploy Configuration option = Default value Type Description config_file = None string value Name of the paste configuration file. Provide a string value representing the name of the paste configuration file to use for configuring pipelines for server application deployments. NOTES: Provide the name or the path relative to the glance directory for the paste configuration file and not the absolute path. The sample paste configuration file shipped with Glance need not be edited in most cases as it comes with ready-made pipelines for all common deployment flavors. If no value is specified for this option, the paste.ini file with the prefix of the corresponding Glance service's configuration file name will be searched for in the known configuration directories. (For example, if this option is missing from or has no value set in glance-api.conf , the service will look for a file named glance-api-paste.ini .) If the paste configuration file is not found, the service will not start. Possible values: A string value representing the name of the paste configuration file. Related Options: flavor flavor = None string value Deployment flavor to use in the server application pipeline. Provide a string value representing the appropriate deployment flavor used in the server application pipeline. This is typically the partial name of a pipeline in the paste configuration file with the service name removed. For example, if your paste section name in the paste configuration file is [pipeline:glance-api-keystone], set flavor to keystone . Possible values: String value representing a partial pipeline name.
Related Options: config_file 3.1.22. profiler The following table outlines the options available under the [profiler] group in the /etc/glance/glance-api.conf file. Table 3.21. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project's operations. If the profiling is triggered by another project, this project part will be empty. es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines the maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filtering traces that contain an error/exception to a separate place. Default value is set to False. Possible values: True: Enable filtering traces that contain an error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redis sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster ). socket_timeout = 0.1 floating point value Redis sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent for that. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 3.1.23.
store_type_location_strategy The following table outlines the options available under the [store_type_location_strategy] group in the /etc/glance/glance-api.conf file. Table 3.22. store_type_location_strategy Configuration option = Default value Type Description store_type_preference = [] list value Preference order of storage backends. Provide a comma separated list of store names in the order in which images should be retrieved from storage backends. These store names must be registered with the stores configuration option. Note The store_type_preference configuration option is applied only if store_type is chosen as a value for the location_strategy configuration option. An empty list will not change the location order. Possible values: Empty list Comma separated list of registered store names. Legal values are: file http rbd swift cinder vmware Related options: location_strategy stores 3.1.24. task The following table outlines the options available under the [task] group in the /etc/glance/glance-api.conf file. Table 3.23. task Configuration option = Default value Type Description task_executor = taskflow string value Task executor to be used to run task scripts. Provide a string value representing the executor to use for task executions. By default, TaskFlow executor is used. TaskFlow helps make task executions easy, consistent, scalable and reliable. It also enables creation of lightweight task objects and/or functions that are combined together into flows in a declarative manner. Possible values: taskflow Related Options: None task_time_to_live = 48 integer value Time in hours for which a task lives after, either succeeding or failing work_dir = None string value Absolute path to the work directory to use for asynchronous task operations. The directory set here will be used to operate over images - normally before they are imported in the destination store. Note When providing a value for work_dir , please make sure that enough space is provided for concurrent tasks to run efficiently without running out of space. A rough estimation can be done by multiplying the number of max_workers with an average image size (e.g 500MB). The image size estimation should be done based on the average size in your deployment. Note that depending on the tasks running you may need to multiply this number by some factor depending on what the task does. For example, you may want to double the available size if image conversion is enabled. All this being said, remember these are just estimations and you should do them based on the worst case scenario and be prepared to act in case they were wrong. Possible values: String value representing the absolute path to the working directory Related Options: None 3.1.25. taskflow_executor The following table outlines the options available under the [taskflow_executor] group in the /etc/glance/glance-api.conf file. Table 3.24. taskflow_executor Configuration option = Default value Type Description conversion_format = None string value Set the desired image conversion format. Provide a valid image format to which you want images to be converted before they are stored for consumption by Glance. Appropriate image format conversions are desirable for specific storage backends in order to facilitate efficient handling of bandwidth and usage of the storage infrastructure. By default, conversion_format is not set and must be set explicitly in the configuration file. The allowed values for this option are raw , qcow2 and vmdk . 
The raw format is the unstructured disk format and should be chosen when RBD or Ceph storage backends are used for image storage. qcow2 is supported by the QEMU emulator that expands dynamically and supports Copy on Write. The vmdk is another common disk format supported by many common virtual machine monitors like VMWare Workstation. Possible values: qcow2 raw vmdk Related options: disk_formats engine_mode = parallel string value Set the taskflow engine mode. Provide a string type value to set the mode in which the taskflow engine would schedule tasks to the workers on the hosts. Based on this mode, the engine executes tasks either in single or multiple threads. The possible values for this configuration option are: serial and parallel . When set to serial , the engine runs all the tasks in a single thread which results in serial execution of tasks. Setting this to parallel makes the engine run tasks in multiple threads. This results in parallel execution of tasks. Possible values: serial parallel Related options: max_workers max_workers = 10 integer value Set the number of engine executable tasks. Provide an integer value to limit the number of workers that can be instantiated on the hosts. In other words, this number defines the number of parallel tasks that can be executed at the same time by the taskflow engine. This value can be greater than one when the engine mode is set to parallel. Possible values: Integer value greater than or equal to 1 Related options: engine_mode 3.1.26. wsgi The following table outlines the options available under the [wsgi] group in the /etc/glance/glance-api.conf file. Table 3.25. wsgi Configuration option = Default value Type Description python_interpreter = /usr/bin/python3 string value Path to the python interpreter to use when spawning external processes. By default this is sys.executable, which should be the same interpreter running Glance itself. However, in some situations (i.e. uwsgi) this may not actually point to a python interpreter itself. task_pool_threads = 16 integer value The number of threads (per worker process) in the pool for processing asynchronous tasks. This controls how many asynchronous tasks (i.e. for image interoperable import) each worker can run at a time. If this is too large, you may have increased memory footprint per worker and/or you may overwhelm other system resources such as disk or outbound network bandwidth. If this is too small, image import requests will have to wait until a thread becomes available to begin processing. 3.2. glance-scrubber.conf This section contains options for the /etc/glance/glance-scrubber.conf file. 3.2.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/glance/glance-scrubber.conf file. . Configuration option = Default value Type Description allow_additional_image_properties = True boolean value Allow users to add additional/custom properties to images. Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties . In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties . By default, this configuration option is set to True and users are allowed to add additional properties. The number of additional properties that can be added to an image can be controlled via image_property_quota configuration option. 
Possible values: True False Related options: image_property_quota Deprecated since: Ussuri Reason: This option is redundant. Control custom image property usage via the image_property_quota configuration option. This option is scheduled to be removed during the Victoria development cycle. api_limit_max = 1000 integer value Maximum number of results that could be returned by a request. As described in the help text of limit_param_default , some requests may return multiple results. The number of results to be returned are governed either by the limit parameter in the request or the limit_param_default configuration option. The value in either case, can't be greater than the absolute maximum defined by this configuration option. Anything greater than this value is trimmed down to the maximum value defined here. Note Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: limit_param_default daemon = False boolean value Run scrubber as a daemon. This boolean configuration option indicates whether scrubber should run as a long-running process that wakes up at regular intervals to scrub images. The wake up interval can be specified using the configuration option wakeup_time . If this configuration option is set to False , which is the default value, scrubber runs once to scrub images and exits. In this case, if the operator wishes to implement continuous scrubbing of images, scrubber needs to be scheduled as a cron job. Possible values: True False Related options: wakeup_time debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. delayed_delete = False boolean value Turn on/off delayed delete. Typically when an image is deleted, the glance-api service puts the image into deleted state and deletes its data at the same time. Delayed delete is a feature in Glance that delays the actual deletion of image data until a later point in time (as determined by the configuration option scrub_time ). When delayed delete is turned on, the glance-api service puts the image into pending_delete state upon deletion and leaves the image data in the storage backend for the image scrubber to delete at a later time. The image scrubber will move the image into deleted state upon successful deletion of image data. Note When delayed delete is turned on, image scrubber MUST be running as a periodic task to prevent the backend storage from filling up with undesired usage. Possible values: True False Related options: scrub_time wakeup_time scrub_pool_size digest_algorithm = sha256 string value Digest algorithm to use for digital signature. Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used. 
To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms . Examples are sha1 , sha256 , and sha512 . Note digest_algorithm is not related to Glance's image signing and verification. It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation. Possible values: An OpenSSL message digest algorithm identifier Relation options: None enabled_import_methods = ['glance-direct', 'web-download', 'copy-image'] list value List of enabled Image Import Methods enforce_secure_rbac = False boolean value Enforce API access based on common persona definitions used across OpenStack. Enabling this option formalizes project-specific read/write operations, like creating private images or updating the status of shared image, behind the member role. It also formalizes a read-only variant useful for project-specific API operations, like listing private images in a project, behind the reader role. Operators should take an opportunity to understand glance's new image policies, audit assignments in their deployment, and update permissions using the default roles in keystone (e.g., admin , member , and reader ). Related options: [oslo_policy]/enforce_new_defaults Deprecated since: Wallaby Reason: This option has been introduced to require operators to opt into enforcing authorization based on common RBAC personas, which is EXPERIMENTAL as of the Wallaby release. This behavior will be the default and STABLE in a future release, allowing this option to be removed. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. hashing_algorithm = sha512 string value Secure hashing algorithm used for computing the os_hash_value property. This option configures the Glance "multihash", which consists of two image properties: the os_hash_algo and the os_hash_value . The os_hash_algo will be populated by the value of this configuration option, and the os_hash_value will be populated by the hexdigest computed when the algorithm is applied to the uploaded or imported image data. The value must be a valid secure hash algorithm name recognized by the python hashlib library. You can determine what these are by examining the hashlib.algorithms_available data member of the version of the library being used in your Glance installation. For interoperability purposes, however, we recommend that you use the set of secure hash names supplied by the hashlib.algorithms_guaranteed data member because those algorithms are guaranteed to be supported by the hashlib library on all platforms. Thus, any image consumer using hashlib locally should be able to verify the os_hash_value of the image. The default value of sha512 is a performant secure hash algorithm. If this option is misconfigured, any attempts to store image data will fail. For that reason, we recommend using the default value. Possible values: Any secure hash algorithm name recognized by the Python hashlib library Related options: None image_location_quota = 10 integer value Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: None image_member_quota = 128 integer value Maximum number of image members per image. This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited. Related options: None image_property_quota = 128 integer value Maximum number of properties allowed on an image. 
This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. Note This won't have any impact if additional properties are disabled. Please refer to allow_additional_image_properties . Related options: allow_additional_image_properties image_size_cap = 1099511627776 integer value Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures. And, setting this to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and storage capacity available. Possible values: Any positive number less than or equal to 9223372036854775808 image_tag_quota = 128 integer value Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: None `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. limit_param_default = 25 integer value The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. NOTES: The value of this configuration option may not be greater than the value specified by api_limit_max . Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: api_limit_max log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. 
The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". metadata_encryption_key = None string value AES key for encrypting store location metadata. Provide a string value representing the AES cipher to use for encrypting Glance store metadata. Note The AES key to use must be set to a random string of length 16, 24 or 32 bytes. Possible values: String value representing a valid AES key Related options: None node_staging_uri = file:///tmp/staging/ string value The URL provides location where the temporary data will be stored This option is for Glance internal use only. Glance will save the image data uploaded by the user to staging endpoint during the image import process. This option does not change the staging API endpoint by any means. Note It is discouraged to use same path as [task]/work_dir Note file://<absolute-directory-path> is the only option api_image_import flow will support for now. Note The staging path must be on shared filesystem available to all Glance API nodes. Possible values: String starting with file:// followed by absolute FS path Related options: [task]/work_dir publish_errors = False boolean value Enables or disables publication of error events. pydev_worker_debug_host = None host address value Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: Valid hostname Valid IP address Related options: None pydev_worker_debug_port = 5678 port value Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: A valid port number Related options: None rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. 
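The rotation options above only take effect in the documented combinations: max_logfile_size_mb applies only when log_rotation_type is set to size, while log_rotate_interval and log_rotate_interval_type apply only when it is set to interval. As an illustrative sketch only (the log file path and retention values are placeholders, not recommendations from this guide), a size-based rotation setup in the [DEFAULT] section of /etc/glance/glance-scrubber.conf could look like this:
[DEFAULT]
log_file = /var/log/glance/scrubber.log
log_rotation_type = size
max_logfile_size_mb = 200
max_logfile_count = 30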
rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. restore = None string value Restore the image status from pending_delete to active . This option is used by administrator to reset the image's status from pending_delete to active when the image is deleted by mistake and pending delete feature is enabled in Glance. Please make sure the glance-scrubber daemon is stopped before restoring the image to avoid image data inconsistency. Possible values: image's uuid scrub_pool_size = 1 integer value The size of thread pool to be used for scrubbing images. When there are a large number of images to scrub, it is beneficial to scrub images in parallel so that the scrub queue stays in control and the backend storage is reclaimed in a timely fashion. This configuration option denotes the maximum number of images to be scrubbed in parallel. The default value is one, which signifies serial scrubbing. Any value above one indicates parallel scrubbing. Possible values: Any non-zero positive integer Related options: delayed_delete scrub_time = 0 integer value The amount of time, in seconds, to delay image scrubbing. When delayed delete is turned on, an image is put into pending_delete state upon deletion until the scrubber deletes its image data. Typically, soon after the image is put into pending_delete state, it is available for scrubbing. However, scrubbing can be delayed until a later point using this configuration option. This option denotes the time period an image spends in pending_delete state before it is available for scrubbing. It is important to realize that this has storage implications. The larger the scrub_time , the longer the time to reclaim backend storage from deleted images. Possible values: Any non-negative integer Related options: delayed_delete show_image_direct_url = False boolean value Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. This image location is shown under the image property direct_url . When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False . Possible values: True False Related options: show_multiple_locations location_strategy show_multiple_locations = False boolean value Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration opt location_strategy . The image locations are shown under the image property locations . 
NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! See https://wiki.openstack.org/wiki/OSSN/OSSN-0065 for more information. If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False . Possible values: True False Related options: show_image_direct_url location_strategy Deprecated since: Newton Reason: Use of this option, deprecated since Newton, is a security risk and will be removed once we figure out a way to satisfy those use cases that currently require it. An earlier announcement that the same functionality can be achieved with greater granularity by using policies is incorrect. You cannot work around this option via policy configuration at the present time, though that is the direction we believe the fix will take. Please keep an eye on the Glance release notes to stay up to date on progress in addressing this issue. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages. This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. user_storage_quota = 0 string value Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B , KB , MB , GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. Negative values are invalid and result in errors. Possible values: A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: None wakeup_time = 300 integer value Time interval, in seconds, between scrubber runs in daemon mode. Scrubber can be run either as a cron job or daemon. When run as a daemon, this configuration time specifies the time period between two runs. When the scrubber wakes up, it fetches and scrubs all pending_delete images that are available for scrubbing after taking scrub_time into consideration. If the wakeup time is set to a large number, there may be a large number of images to be scrubbed for each run. Also, this impacts how quickly the backend storage is reclaimed.
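As a minimal sketch of how daemon , wakeup_time , scrub_time and delayed_delete fit together (the numbers are placeholders chosen for illustration, not tuning advice), the [DEFAULT] section of /etc/glance/glance-scrubber.conf for a continuously running scrubber could look like this:
[DEFAULT]
delayed_delete = True
scrub_time = 43200
daemon = True
wakeup_time = 300
scrub_pool_size = 4
With these values the scrubber wakes up every 5 minutes and, using 4 parallel threads, removes the data of images that have spent at least 12 hours in the pending_delete state.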
Possible values: Any non-negative integer Related options: daemon delayed_delete watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. worker_self_reference_url = None string value The URL to this worker. If this is set, other glance workers will know how to contact this one directly if needed. For image import, a single worker stages the image and other workers need to be able to proxy the import request to the right one. If unset, this will be considered to be public_endpoint , which normally would be set to the same value on all workers, effectively disabling the proxying behavior. Possible values: A URL by which this worker is reachable from other workers Related options: public_endpoint 3.2.2. database The following table outlines the options available under the [database] group in the /etc/glance/glance-scrubber.conf file. Table 3.26. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database.
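A minimal [database] section for the scrubber, assuming a MySQL/MariaDB backend reachable under a hypothetical host name and credentials (replace them with your own), might look like this:
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@db.example.com/glance
max_pool_size = 5
max_retries = 10
connection_recycle_time = 3600
The connection string follows the usual SQLAlchemy URL format; the remaining values simply restate the defaults listed above for visibility.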
sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. use_tpool = False boolean value Enable the experimental use of thread pooling for all DB API calls 3.2.3. glance_store The following table outlines the options available under the [glance_store] group in the /etc/glance/glance-scrubber.conf file. Table 3.27. glance_store Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev3::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: A string of of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_catalog_info cinder_enforce_multipath = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. Possible values: True or False Related options: cinder_use_multipath cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. 
Possible values: A positive integer Related options: None cinder_mount_point_base = /var/lib/glance/mnt string value Directory where the NFS volume is mounted on the glance node. Possible values: A string representing absolute path of mount point. cinder_os_region_name = None string value Region name to lookup cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified, the project of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. 
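As an illustrative sketch of the dedicated-credentials mode described above (the Identity service URL, user, password and project names are placeholders that must match your own deployment), a cinder-backed [glance_store] section could look like this:
[glance_store]
stores = cinder,file,http
default_store = cinder
cinder_store_auth_address = http://keystone.example.org/identity/v3
cinder_store_user_name = glance
cinder_store_password = GLANCE_SERVICE_PASSWORD
cinder_store_project_name = service
When all four cinder_store_* options are set, the store authenticates with its own credentials instead of the caller's context, keeping the image volumes in the dedicated project.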
Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_use_multipath = False boolean value Flag to identify mutipath is supported or not in the deployment. Set it to False if multipath is not supported. Possible values: True or False Related options: cinder_enforce_multipath cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None Note You cannot use an encrypted volume_type associated with an NFS backend. An encrypted volume stored on an NFS backend will raise an exception whenever glance_store tries to write or access image data stored in that volume. Consult your Cinder administrator to determine an appropriate volume_type. default_store = file string value The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses file as the default scheme to store images with the file store. Note The value given for this configuration option must be a valid scheme for a store registered with the stores configuration option. Possible values: file filesystem http https swift swift+http swift+https swift+config rbd cinder vsphere s3 Related Options: stores Deprecated since: Rocky Reason: This option is deprecated against new config option ``default_backend`` which acts similar to ``default_store`` config option. This option is scheduled for removal in the U development cycle. default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. 
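Because filesystem_store_datadir and filesystem_store_datadirs are mutually exclusive, a configuration sketch sets exactly one of them; the paths and priorities below are placeholders for illustration only:
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images
# Alternatively, comment out the line above and spread images over
# several directories with optional priorities (higher value wins):
# filesystem_store_datadirs = /srv/glance/fast:200
# filesystem_store_datadirs = /srv/glance/bulk:100
Setting both options at once raises BadStoreConfiguration, as noted above.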
Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access to could be made a member of the group that owns the files created. Assigning a value less then or equal to zero for this configuration option signifies that no changes be made to the default permissions. This value will be decoded as an octal digit. For more information, please refer the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. Once this option is set, it is used for new images created afterward only - previously existing images are not affected. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None filesystem_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. This configuration option enable the feature of not really write null byte sequences on the filesystem, the holes who can appear will automatically be interpreted by the filesystem as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network trafic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. 
This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080. Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. 
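A minimal RBD-backed [glance_store] sketch, assuming a Ceph cluster whose configuration file, user and pool names are placeholders here, could look like this:
[glance_store]
stores = rbd,http
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8
The chunk size shown is the default of 8 MB; if you change it, keep it a power of two as recommended above.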
Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. The default pool that is used is images . More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rbd_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. This configuration option enable the feature of not really write null byte sequences on the RBD backend, the holes who can appear will automatically be interpreted by Ceph as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network trafic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: Path to the rootwrap config file Related options: None s3_store_access_key = None string value The S3 query token access key. This configuration option takes the access key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is the access key for a user with appropriate privileges Related Options: s3_store_host s3_store_secret_key s3_store_bucket = None string value The S3 bucket to be used to store the Glance data. This configuration option specifies where the glance images will be stored in the S3. If s3_store_create_bucket_on_put is set to true, it will be created automatically even if the bucket does not exist. Possible values: Any string value Related Options: s3_store_create_bucket_on_put s3_store_bucket_url_format s3_store_bucket_url_format = auto string value The S3 calling format used to determine the object. This configuration option takes access model that is used to specify the address of an object in an S3 bucket. NOTE: In path -style, the endpoint for the object looks like https://s3.amazonaws.com/bucket/example.img . And in virtual -style, the endpoint for the object looks like https://bucket.s3.amazonaws.com/example.img . If you do not follow the DNS naming convention in the bucket name, you can get objects in the path style, but not in the virtual style. 
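An S3-backed [glance_store] sketch follows; the endpoint, bucket name and credentials are placeholders, and the bucket URL format is left on auto as described above:
[glance_store]
stores = s3,http
default_store = s3
s3_store_host = s3.example.org
s3_store_access_key = S3_ACCESS_KEY
s3_store_secret_key = S3_SECRET_KEY
s3_store_bucket = glance
s3_store_create_bucket_on_put = True
s3_store_bucket_url_format = auto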
Possible values: Any string value of auto , virtual , or path Related Options: s3_store_bucket s3_store_create_bucket_on_put = False boolean value Determine whether S3 should create a new bucket. This configuration option takes boolean value to indicate whether Glance should create a new bucket to S3 if it does not exist. Possible values: Any Boolean value Related Options: None s3_store_host = None string value The host where the S3 server is listening. This configuration option sets the host of the S3 or S3 compatible storage Server. This option is required when using the S3 storage backend. The host can contain a DNS name (e.g. s3.amazonaws.com, my-object-storage.com) or an IP address (127.0.0.1). Possible values: A valid DNS name A valid IPv4 address Related Options: s3_store_access_key s3_store_secret_key s3_store_large_object_chunk_size = 10 integer value What multipart upload part size, in MB, should S3 use when uploading parts. This configuration option takes the image split size in MB for Multipart Upload. Note: You can only split up to 10,000 images. Possible values: Any positive integer value (must be greater than or equal to 5M) Related Options: s3_store_large_object_size s3_store_thread_pools s3_store_large_object_size = 100 integer value What size, in MB, should S3 start chunking image files and do a multipart upload in S3. This configuration option takes a threshold in MB to determine whether to upload the image to S3 as is or to split it (Multipart Upload). Note: You can only split up to 10,000 images. Possible values: Any positive integer value Related Options: s3_store_large_object_chunk_size s3_store_thread_pools s3_store_secret_key = None string value The S3 query token secret key. This configuration option takes the secret key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is a secret key corresponding to the access key specified using the s3_store_host option Related Options: s3_store_host s3_store_access_key s3_store_thread_pools = 10 integer value The number of thread pools to perform a multipart upload in S3. This configuration option takes the number of thread pools when performing a Multipart Upload. Possible values: Any positive integer value Related Options: s3_store_large_object_size s3_store_large_object_chunk_size stores = ['file', 'http'] list value List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are file and http . Possible values: A comma separated list that could include: file http swift rbd cinder vmware s3 Related Options: default_store Deprecated since: Rocky Reason: This option is deprecated against new config option ``enabled_backends`` which helps to configure multiple backend stores of different schemes. This option is scheduled for removal in the U development cycle. swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. 
Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. 
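Continuing the container-naming example above, a single-tenant Swift sketch that keeps credentials out of the database by using a reference file, and that spreads images over multiple containers, could look like this (the reference file path and names are placeholders):
[glance_store]
stores = swift,http
default_store = swift
swift_store_config_file = /etc/glance/glance-swift.conf
default_swift_reference = ref1
swift_store_container = glance
swift_store_multiple_containers_seed = 3
swift_store_create_container_on_put = True
With a seed of 3, up to 16^3 = 4096 containers named after the first three characters of each image UUID are used, as in the glance_fda example above.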
Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. 
For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. 
When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. Note This option is considered only when swift_store_multi_tenant is set to True Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: This is required only when the configuration option swift_buffer_on_upload is set to True. 
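As a minimal sketch of enabling upload buffering together with this option (the directory path below is an assumed example, not a default):

[glance_store]
swift_buffer_on_upload = True
swift_upload_buffer_dir = /var/lib/glance/swift-buffer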
This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retrying forever. Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with the highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the "vmware_ca_file" option. Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server.
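A minimal sketch of the VMware-related settings described here, as they might appear in the [glance_store] section; the host name, user name, password placeholder, and datacenter/datastore path are assumed example values only:

[glance_store]
vmware_server_host = vcenter.example.com
vmware_server_username = glance-svc
vmware_server_password = <password>
vmware_datastores = dc1:datastore1:100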
This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore. This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.2.4. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/glance/glance-scrubber.conf file. Table 3.28. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 3.2.5. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/glance/glance-scrubber.conf file. Table 3.29. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = False boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. 
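A hedged example of how these [oslo_concurrency] and [oslo_policy] settings might be combined in /etc/glance/glance-scrubber.conf; the lock path is an assumed example, and enabling the two enforce flags together follows the recommendation above rather than the defaults:

[oslo_concurrency]
lock_path = /var/lib/glance/tmp

[oslo_policy]
enforce_new_defaults = True
enforce_scope = True
policy_dirs = policy.d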
policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 3.3. glance-cache.conf This section contains options for the /etc/glance/glance-cache.conf file. 3.3.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/glance/glance-cache.conf file. . Configuration option = Default value Type Description allow_additional_image_properties = True boolean value Allow users to add additional/custom properties to images. Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties . In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties . By default, this configuration option is set to True and users are allowed to add additional properties. The number of additional properties that can be added to an image can be controlled via image_property_quota configuration option. Possible values: True False Related options: image_property_quota Deprecated since: Ussuri Reason: This option is redundant. Control custom image property usage via the image_property_quota configuration option. This option is scheduled to be removed during the Victoria development cycle. api_limit_max = 1000 integer value Maximum number of results that could be returned by a request. As described in the help text of limit_param_default , some requests may return multiple results. The number of results to be returned are governed either by the limit parameter in the request or the limit_param_default configuration option. The value in either case, can't be greater than the absolute maximum defined by this configuration option. Anything greater than this value is trimmed down to the maximum value defined here. Note Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: limit_param_default debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. 
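For instance, to raise the logging verbosity of glance-cache.conf runs while keeping the noisier libraries quiet, the [DEFAULT] section could be adjusted as sketched below; the values shown are illustrative choices, not defaults:

[DEFAULT]
debug = True
default_log_levels = amqp=WARN,sqlalchemy=WARN,urllib3.connectionpool=WARN,oslo_messaging=INFO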
digest_algorithm = sha256 string value Digest algorithm to use for digital signature. Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used. To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms . Examples are sha1 , sha256 , and sha512 . Note digest_algorithm is not related to Glance's image signing and verification. It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation. Possible values: An OpenSSL message digest algorithm identifier Relation options: None enabled_import_methods = ['glance-direct', 'web-download', 'copy-image'] list value List of enabled Image Import Methods enforce_secure_rbac = False boolean value Enforce API access based on common persona definitions used across OpenStack. Enabling this option formalizes project-specific read/write operations, like creating private images or updating the status of shared image, behind the member role. It also formalizes a read-only variant useful for project-specific API operations, like listing private images in a project, behind the reader role. Operators should take an opportunity to understand glance's new image policies, audit assignments in their deployment, and update permissions using the default roles in keystone (e.g., admin , member , and reader ). Related options: [oslo_policy]/enforce_new_defaults Deprecated since: Wallaby Reason: This option has been introduced to require operators to opt into enforcing authorization based on common RBAC personas, which is EXPERIMENTAL as of the Wallaby release. This behavior will be the default and STABLE in a future release, allowing this option to be removed. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. hashing_algorithm = sha512 string value Secure hashing algorithm used for computing the os_hash_value property. This option configures the Glance "multihash", which consists of two image properties: the os_hash_algo and the os_hash_value . The os_hash_algo will be populated by the value of this configuration option, and the os_hash_value will be populated by the hexdigest computed when the algorithm is applied to the uploaded or imported image data. The value must be a valid secure hash algorithm name recognized by the python hashlib library. You can determine what these are by examining the hashlib.algorithms_available data member of the version of the library being used in your Glance installation. For interoperability purposes, however, we recommend that you use the set of secure hash names supplied by the hashlib.algorithms_guaranteed data member because those algorithms are guaranteed to be supported by the hashlib library on all platforms. Thus, any image consumer using hashlib locally should be able to verify the os_hash_value of the image. The default value of sha512 is a performant secure hash algorithm. If this option is misconfigured, any attempts to store image data will fail. For that reason, we recommend using the default value. Possible values: Any secure hash algorithm name recognized by the Python hashlib library Related options: None image_cache_dir = None string value Base directory for image cache. This is the location where image data is cached and served out of. All cached images are stored directly under this directory. 
This directory also contains three subdirectories, namely, incomplete , invalid and queue . The incomplete subdirectory is the staging area for downloading images. An image is first downloaded to this directory. When the image download is successful it is moved to the base directory. However, if the download fails, the partially downloaded image file is moved to the invalid subdirectory. The queue subdirectory is used for queuing images for download. This is used primarily by the cache-prefetcher, which can be scheduled as a periodic task like cache-pruner and cache-cleaner, to cache images ahead of their usage. Upon receiving the request to cache an image, Glance touches a file in the queue directory with the image id as the file name. The cache-prefetcher, when running, polls for the files in the queue directory and starts downloading them in the order they were created. When the download is successful, the zero-sized file is deleted from the queue directory. If the download fails, the zero-sized file remains and it'll be retried the next time cache-prefetcher runs. Possible values: A valid path Related options: image_cache_sqlite_db image_cache_driver = sqlite string value The driver to use for image cache management. This configuration option provides the flexibility to choose between the different image-cache drivers available. An image-cache driver is responsible for providing the essential functions of image-cache like write images to/read images from cache, track age and usage of cached images, provide a list of cached images, fetch size of the cache, queue images for caching and clean up the cache, etc. The essential functions of a driver are defined in the base class glance.image_cache.drivers.base.Driver . All image-cache drivers (existing and prospective) must implement this interface. Currently available drivers are sqlite and xattr . These drivers primarily differ in the way they store the information about cached images: The sqlite driver uses a sqlite database (which sits on every glance node locally) to track the usage of cached images. The xattr driver uses the extended attributes of files to store this information. It also requires a filesystem that sets atime on the files when accessed. Possible values: sqlite xattr Related options: None image_cache_max_size = 10737418240 integer value The upper limit on cache size, in bytes, after which the cache-pruner cleans up the image cache. Note This is just a threshold for cache-pruner to act upon. It is NOT a hard limit beyond which the image cache would never grow. In fact, depending on how often the cache-pruner runs and how quickly the cache fills, the image cache can far exceed the size specified here very easily. Hence, care must be taken to appropriately schedule the cache-pruner and in setting this limit. Glance caches an image when it is downloaded. Consequently, the size of the image cache grows over time as the number of downloads increases. To keep the cache size from becoming unmanageable, it is recommended to run the cache-pruner as a periodic task. When the cache pruner is kicked off, it compares the current size of image cache and triggers a cleanup if the image cache grew beyond the size specified here. After the cleanup, the size of cache is less than or equal to the size specified here. Possible values: Any non-negative integer Related options: None image_cache_sqlite_db = cache.db string value The relative path to sqlite file database that will be used for image cache management.
This is a relative path to the sqlite file database that tracks the age and usage statistics of image cache. The path is relative to image cache base directory, specified by the configuration option image_cache_dir . This is a lightweight database with just one table. Possible values: A valid relative path to sqlite file database Related options: image_cache_dir image_cache_stall_time = 86400 integer value The amount of time, in seconds, an incomplete image remains in the cache. Incomplete images are images for which download is in progress. Please see the description of configuration option image_cache_dir for more detail. Sometimes, due to various reasons, it is possible the download may hang and the incompletely downloaded image remains in the incomplete directory. This configuration option sets a time limit on how long the incomplete images should remain in the incomplete directory before they are cleaned up. Once an incomplete image spends more time than is specified here, it'll be removed by cache-cleaner on its run. It is recommended to run cache-cleaner as a periodic task on the Glance API nodes to keep the incomplete images from occupying disk space. Possible values: Any non-negative integer Related options: None image_location_quota = 10 integer value Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: None image_member_quota = 128 integer value Maximum number of image members per image. This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited. Related options: None image_property_quota = 128 integer value Maximum number of properties allowed on an image. This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. Note This won't have any impact if additional properties are disabled. Please refer to allow_additional_image_properties . Related options: allow_additional_image_properties image_size_cap = 1099511627776 integer value Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures. And, setting this to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and storage capacity available. Possible values: Any positive number less than or equal to 9223372036854775808 image_tag_quota = 128 integer value Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: None `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. limit_param_default = 25 integer value The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. 
However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. NOTES: The value of this configuration option may not be greater than the value specified by api_limit_max . Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: Any positive integer Related options: api_limit_max log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". metadata_encryption_key = None string value AES key for encrypting store location metadata. Provide a string value representing the AES cipher to use for encrypting Glance store metadata. Note The AES key to use must be set to a random string of length 16, 24 or 32 bytes. 
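A minimal sketch of setting this key is shown below; the openssl command in the comment is one assumed way to produce a 32-character random string, and the placeholder must be replaced with the generated value:

[DEFAULT]
# e.g. generate a 32-character key with: openssl rand -hex 16
metadata_encryption_key = <32-character-random-string>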
Possible values: String value representing a valid AES key Related options: None node_staging_uri = file:///tmp/staging/ string value The URL provides location where the temporary data will be stored This option is for Glance internal use only. Glance will save the image data uploaded by the user to staging endpoint during the image import process. This option does not change the staging API endpoint by any means. Note It is discouraged to use same path as [task]/work_dir Note file://<absolute-directory-path> is the only option api_image_import flow will support for now. Note The staging path must be on shared filesystem available to all Glance API nodes. Possible values: String starting with file:// followed by absolute FS path Related options: [task]/work_dir publish_errors = False boolean value Enables or disables publication of error events. pydev_worker_debug_host = None host address value Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: Valid hostname Valid IP address Related options: None pydev_worker_debug_port = 5678 port value Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: A valid port number Related options: None rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. show_image_direct_url = False boolean value Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. This image location is shown under the image property direct_url . When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False . Possible values: True False Related options: show_multiple_locations location_strategy show_multiple_locations = False boolean value Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration opt location_strategy . The image locations are shown under the image property locations . NOTES: Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. 
Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! See https://wiki.openstack.org/wiki/OSSN/OSSN-0065 for more information. If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False . Possible values: True False Related options: show_image_direct_url location_strategy Deprecated since: Newton Reason: Use of this option, deprecated since Newton, is a security risk and will be removed once we figure out a way to satisfy those use cases that currently require it. An earlier announcement that the same functionality can be achieved with greater granularity by using policies is incorrect. You cannot work around this option via policy configuration at the present time, though that is the direction we believe the fix will take. Please keep an eye on the Glance release notes to stay up to date on progress in addressing this issue. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages. This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. user_storage_quota = 0 string value Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B , KB , MB , GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. Negative values are invalid and result in errors. Possible values: A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: None watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. worker_self_reference_url = None string value The URL to this worker. If this is set, other glance workers will know how to contact this one directly if needed. For image import, a single worker stages the image and other workers need to be able to proxy the import request to the right one. If unset, this will be considered to be public_endpoint , which normally would be set to the same value on all workers, effectively disabling the proxying behavior.
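As an illustrative sketch only (the URL is an assumed example), a worker that should receive proxied import requests directly could set:

[DEFAULT]
worker_self_reference_url = https://glance-worker1.example.com:9292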
Possible values: A URL by which this worker is reachable from other workers Related options: public_endpoint 3.3.2. glance_store The following table outlines the options available under the [glance_store] group in the /etc/glance/glance-cache.conf file. Table 3.30. glance_store Configuration option = Default value Type Description cinder_api_insecure = False boolean value Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by cinder_ca_certificates_file option. Possible values: True False Related options: cinder_ca_certificates_file cinder_ca_certificates_file = None string value Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: Path to a ca certificates file Related options: cinder_api_insecure cinder_catalog_info = volumev3::publicURL string value Information to match when looking for cinder in the service catalog. When the cinder_endpoint_template is not set and any of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , cinder_store_password is not set, cinder store uses this information to lookup cinder endpoint from the service catalog in the current context. cinder_os_region_name , if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: A string of of the following form: <service_type>:<service_name>:<interface> At least service_type and interface should be specified. service_name can be omitted. Related options: cinder_os_region_name cinder_endpoint_template cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_endpoint_template = None string value Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_password cinder_catalog_info cinder_enforce_multipath = False boolean value If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. Possible values: True or False Related options: cinder_use_multipath cinder_http_retries = 3 integer value Number of cinderclient retries on failed http calls. When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds. Possible values: A positive integer Related options: None cinder_mount_point_base = /var/lib/glance/mnt string value Directory where the NFS volume is mounted on the glance node. Possible values: A string representing absolute path of mount point. 
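The following sketch gathers the cinder-related options described above into a [glance_store] section purely to show how they sit together; the values shown are the documented defaults:

[glance_store]
cinder_catalog_info = volumev3::publicURL
cinder_http_retries = 3
cinder_mount_point_base = /var/lib/glance/mnt
cinder_enforce_multipath = False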
cinder_os_region_name = None string value Region name to lookup cinder service from the service catalog. This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region. Possible values: A string that is a valid region name. Related options: cinder_catalog_info cinder_state_transition_timeout = 300 integer value Time period, in seconds, to wait for a cinder volume transition to complete. When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume's state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error ), the image creation fails. Possible values: A positive integer Related options: None cinder_store_auth_address = None string value The address where the cinder authentication service is listening. When all of cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance's ACL. If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context's user and project are used. Possible values: A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 Related options: cinder_store_user_name cinder_store_password cinder_store_project_name cinder_store_password = None string value Password for the user authenticating against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid password for the user specified by cinder_store_user_name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_project_name cinder_store_project_name = None string value Project name where the image volume is stored in cinder. If this configuration option is not set, the project in current context is used. This must be used with all the following related options. If any of these are not specified, the project of the current context is used. Possible values: A valid project name Related options: cinder_store_auth_address cinder_store_user_name cinder_store_password cinder_store_user_name = None string value User name to authenticate against cinder. This must be used with all the following related options. If any of these are not specified, the user of the current context is used. Possible values: A valid user name Related options: cinder_store_auth_address cinder_store_password cinder_store_project_name cinder_use_multipath = False boolean value Flag to identify mutipath is supported or not in the deployment. Set it to False if multipath is not supported. Possible values: True or False Related options: cinder_enforce_multipath cinder_volume_type = None string value Volume type that will be used for volume creation in cinder. 
Some cinder backends can have several volume types to optimize storage usage. Adding this option allows an operator to choose a specific volume type in cinder that can be optimized for images. If this is not set, then the default volume type specified in the cinder configuration will be used for volume creation. Possible values: A valid volume type from cinder Related options: None Note You cannot use an encrypted volume_type associated with an NFS backend. An encrypted volume stored on an NFS backend will raise an exception whenever glance_store tries to write or access image data stored in that volume. Consult your Cinder administrator to determine an appropriate volume_type. default_store = file string value The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses file as the default scheme to store images with the file store. Note The value given for this configuration option must be a valid scheme for a store registered with the stores configuration option. Possible values: file filesystem http https swift swift+http swift+https swift+config rbd cinder vsphere s3 Related Options: stores Deprecated since: Rocky Reason: This option is deprecated against new config option ``default_backend`` which acts similar to ``default_store`` config option. This option is scheduled for removal in the U development cycle. default_swift_reference = ref1 string value Reference to default Swift account/backing store parameters. Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ref1 . This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added. Possible values: A valid string value Related options: None filesystem_store_chunk_size = 65536 integer value Chunk size, in bytes. The chunk size used when reading or writing image files. Raising this value may improve the throughput but it may also slightly increase the memory usage when handling a large number of requests. Possible Values: Any positive integer value Related options: None filesystem_store_datadir = /var/lib/glance/images string value Directory to which the filesystem backend store writes images. Upon start up, Glance creates the directory if it doesn't already exist and verifies write access to the user under which glance-api runs. If the write access isn't available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images. Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: A valid path to a directory Related options: filesystem_store_datadirs filesystem_store_file_perm filesystem_store_datadirs = None multi valued List of directories and their priorities to which the filesystem backend store writes images. The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. 
When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero. More information on configuring filesystem store with multiple store directories can be found at https://docs.openstack.org/glance/latest/configuration/configuring.html Note This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images. Possible values: List of strings of the following form: <a valid directory path>:<optional integer priority> Related options: filesystem_store_datadir filesystem_store_file_perm filesystem_store_file_perm = 0 integer value File access permissions for the image files. Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to be given access could be made a member of the group that owns the files created. Assigning a value less than or equal to zero for this configuration option signifies that no changes be made to the default permissions. This value will be decoded as an octal digit. For more information, please refer to the documentation at https://docs.openstack.org/glance/latest/configuration/configuring.html Possible values: A valid file access permission Zero Any negative integer Related options: None filesystem_store_metadata_file = None string value Filesystem store metadata file. The path to a file which contains the metadata to be returned with any location associated with the filesystem store. Once this option is set, it is used for new images created afterward only - previously existing images are not affected. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string. Possible values: A valid path to the store metadata file Related options: None filesystem_thin_provisioning = False boolean value Enable or disable thin provisioning in this backend. This configuration option enables the feature of not actually writing null byte sequences to the filesystem; the holes that appear will automatically be interpreted by the filesystem as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network traffic, in addition to saving space in the backend, as null byte sequences are not sent over the network. Possible Values: True False Related options: None http_proxy_information = {} dict value The http/https proxy information to be used to connect to the remote server. This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080.
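Expressed as a configuration line, the comma-separated scheme:proxy pairs just described would look like the following sketch (the proxy addresses are the same illustrative ones used above):

[glance_store]
http_proxy_information = http:10.0.0.1:3128,https:10.0.0.1:1080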
Possible values: A comma separated list of scheme:proxy pairs as described above Related options: None https_ca_certificates_file = None string value Path to the CA bundle file. This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server. Possible values: A valid path to a CA file Related options: https_insecure https_insecure = True boolean value Set verification of the remote server certificate. This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification. This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option. Possible values: True False Related options: https_ca_certificates_file rados_connect_timeout = 0 integer value Timeout value for connecting to Ceph cluster. This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used. Possible Values: Any integer value Related options: None `rbd_store_ceph_conf = ` string value Ceph configuration file path. This configuration option specifies the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to the empty string, librados will read the standard ceph.conf file by searching the default Ceph configuration file locations in sequential order. See the Ceph documentation for details. Note If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section NOTE 2: If you leave this option empty (the default), the actual Ceph configuration file used may change depending on what version of librados is being used. If it is important for you to know exactly which configuration file is in effect, you may specify that file here using this option. Possible Values: A valid path to a configuration file Related options: rbd_store_user rbd_store_chunk_size = 8 integer value Size, in megabytes, to chunk RADOS images into. Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two. When Ceph's RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance. Possible Values: Any positive integer value Related options: None rbd_store_pool = images string value RADOS pool in which images are stored. When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. 
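A minimal sketch of an RBD-backed [glance_store] section built from the options discussed here; rbd_store_ceph_conf is shown pointing at the conventional /etc/ceph/ceph.conf rather than its empty default, rbd_store_user = glance is an assumed Cephx user, and the pool name and chunk size are the documented defaults:

[glance_store]
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8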
The default pool that is used is images . More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/ Possible Values: A valid pool name Related options: None rbd_store_user = None string value RADOS user to authenticate as. This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client. section in rbd_store_ceph_conf. Possible Values: A valid RADOS user Related options: rbd_store_ceph_conf rbd_thin_provisioning = False boolean value Enable or not thin provisioning in this backend. This configuration option enable the feature of not really write null byte sequences on the RBD backend, the holes who can appear will automatically be interpreted by Ceph as null bytes, and do not really consume your storage. Enabling this feature will also speed up image upload and save network trafic in addition to save space in the backend, as null bytes sequences are not sent over the network. Possible Values: True False Related options: None rootwrap_config = /etc/glance/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: Path to the rootwrap config file Related options: None s3_store_access_key = None string value The S3 query token access key. This configuration option takes the access key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is the access key for a user with appropriate privileges Related Options: s3_store_host s3_store_secret_key s3_store_bucket = None string value The S3 bucket to be used to store the Glance data. This configuration option specifies where the glance images will be stored in the S3. If s3_store_create_bucket_on_put is set to true, it will be created automatically even if the bucket does not exist. Possible values: Any string value Related Options: s3_store_create_bucket_on_put s3_store_bucket_url_format s3_store_bucket_url_format = auto string value The S3 calling format used to determine the object. This configuration option takes access model that is used to specify the address of an object in an S3 bucket. NOTE: In path -style, the endpoint for the object looks like https://s3.amazonaws.com/bucket/example.img . And in virtual -style, the endpoint for the object looks like https://bucket.s3.amazonaws.com/example.img . If you do not follow the DNS naming convention in the bucket name, you can get objects in the path style, but not in the virtual style. Possible values: Any string value of auto , virtual , or path Related Options: s3_store_bucket s3_store_create_bucket_on_put = False boolean value Determine whether S3 should create a new bucket. This configuration option takes boolean value to indicate whether Glance should create a new bucket to S3 if it does not exist. Possible values: Any Boolean value Related Options: None s3_store_host = None string value The host where the S3 server is listening. 
This configuration option sets the host of the S3 or S3 compatible storage Server. This option is required when using the S3 storage backend. The host can contain a DNS name (e.g. s3.amazonaws.com, my-object-storage.com) or an IP address (127.0.0.1). Possible values: A valid DNS name A valid IPv4 address Related Options: s3_store_access_key s3_store_secret_key s3_store_large_object_chunk_size = 10 integer value What multipart upload part size, in MB, should S3 use when uploading parts. This configuration option takes the image split size in MB for Multipart Upload. Note: You can only split up to 10,000 images. Possible values: Any positive integer value (must be greater than or equal to 5M) Related Options: s3_store_large_object_size s3_store_thread_pools s3_store_large_object_size = 100 integer value What size, in MB, should S3 start chunking image files and do a multipart upload in S3. This configuration option takes a threshold in MB to determine whether to upload the image to S3 as is or to split it (Multipart Upload). Note: You can only split up to 10,000 images. Possible values: Any positive integer value Related Options: s3_store_large_object_chunk_size s3_store_thread_pools s3_store_secret_key = None string value The S3 query token secret key. This configuration option takes the secret key for authenticating with the Amazon S3 or S3 compatible storage server. This option is required when using the S3 storage backend. Possible values: Any string value that is a secret key corresponding to the access key specified using the s3_store_host option Related Options: s3_store_host s3_store_access_key s3_store_thread_pools = 10 integer value The number of thread pools to perform a multipart upload in S3. This configuration option takes the number of thread pools when performing a Multipart Upload. Possible values: Any positive integer value Related Options: s3_store_large_object_size s3_store_large_object_chunk_size stores = ['file', 'http'] list value List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are file and http . Possible values: A comma separated list that could include: file http swift rbd cinder vmware s3 Related Options: default_store Deprecated since: Rocky Reason: This option is deprecated against new config option ``enabled_backends`` which helps to configure multiple backend stores of different schemes. This option is scheduled for removal in the U development cycle. swift_buffer_on_upload = False boolean value Buffer image segments before upload to Swift. Provide a boolean value to indicate whether or not Glance should buffer image data to disk while uploading to swift. This enables Glance to resume uploads on error. NOTES: When enabling this option, one should take great care as this increases disk usage on the API node. Be aware that depending upon how the file system is configured, the disk space used for buffering may decrease the actual disk space available for the glance image cache. Disk utilization will cap according to the following equation: ( swift_store_large_object_chunk_size * workers * 1000) Possible values: True False Related options: swift_upload_buffer_dir swift_store_admin_tenants = [] list value List of tenants that will be granted admin access. This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list. 
Possible values: A comma separated list of strings representing UUIDs of Keystone projects/tenants Related options: None swift_store_auth_address = None string value The address where the Swift authentication service is listening. swift_store_auth_insecure = False boolean value Set verification of the server certificate. This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won't check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification. Possible values: True False Related options: swift_store_cacert swift_store_auth_version = 2 string value Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. swift_store_cacert = None string value Path to the CA bundle file. This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift. Possible values: A valid path to a CA file Related options: swift_store_auth_insecure swift_store_config_file = None string value Absolute path to the file containing the swift account(s) configurations. Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database. Note Please do not configure this option if you have set swift_store_multi_tenant to True . Possible values: String value representing an absolute path on the glance-api node Related options: swift_store_multi_tenant swift_store_container = glance string value Name of single container to store images/name prefix for multiple containers When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed . When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ). Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba. Possible values: If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well. 
Related options: swift_store_multiple_containers_seed swift_store_multi_tenant swift_store_create_container_on_put swift_store_create_container_on_put = False boolean value Create container, if it doesn't already exist, when uploading image. At the time of uploading an image, if the corresponding container doesn't exist, it will be created provided this configuration option is set to True. By default, it won't be created. This behavior is applicable for both single and multiple containers mode. Possible values: True False Related options: None swift_store_endpoint = None string value The URL endpoint to use for Swift backend storage. Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage. Note The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL. Possible values: String value representing a valid URL path up to a Swift container Related Options: None swift_store_endpoint_type = publicURL string value Endpoint Type of Swift service. This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1. Possible values: publicURL adminURL internalURL Related options: swift_store_endpoint swift_store_expire_soon_interval = 60 integer value Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire. Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly. Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests for a new token 60 seconds or less before the current token expiration. Possible values: Zero Positive integer value Related Options: None swift_store_key = None string value Auth key for the user authenticating against the Swift authentication service. swift_store_large_object_chunk_size = 200 integer value The maximum size, in MB, of the segments when image data is segmented. When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail. For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB. Possible values: A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration. Related options: swift_store_large_object_size swift_store_large_object_size = 5120 integer value The size threshold, in MB, after which Glance will start segmenting image data. 
Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to https://docs.openstack.org/swift/latest/overview_large_objects.html This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects. Note This should be set by taking into account the large object limit enforced by the Swift cluster in consideration. Possible values: A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration. Related options: swift_store_large_object_chunk_size swift_store_multi_tenant = False boolean value Store images in tenant's Swift account. This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage Note If using multi-tenant swift store, please make sure that you do not set a swift configuration file with the swift_store_config_file option. Possible values: True False Related options: swift_store_config_file swift_store_multiple_containers_seed = 0 integer value Seed indicating the number of containers to use for storing images. When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images. Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html Note This is used only when swift_store_multi_tenant is disabled. Possible values: A non-negative integer less than or equal to 32 Related options: swift_store_container swift_store_multi_tenant swift_store_create_container_on_put swift_store_region = None string value The region of Swift endpoint to use by Glance. Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set. When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity. This option can be configured for both single-tenant and multi-tenant storage. Note Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions. Possible values: A string value representing a valid Swift region. Related Options: None swift_store_retry_get_count = 0 integer value The number of times a Swift download will be retried before the request fails. 
Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message. Possible values: Zero Positive integer value Related Options: None swift_store_service_type = object-store string value Type of Swift service to use. Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store . Note If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified. Possible values: A string representing a valid service type for Swift storage. Related Options: None swift_store_ssl_compression = True boolean value SSL layer compression for HTTPS Swift requests. Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled. When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2. Possible values: True False Related Options: None swift_store_use_trusts = True boolean value Use trusts for multi-tenant Swift store. This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data. By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False , a user token is used for the Swift connection instead, eliminating the overhead of trust creation. Note This option is considered only when swift_store_multi_tenant is set to True Possible values: True False Related options: swift_store_multi_tenant swift_store_user = None string value The user to authenticate against the Swift authentication service. swift_upload_buffer_dir = None string value Directory to buffer image segments before upload to Swift. Provide a string value representing the absolute path to the directory on the glance node where image segments will be buffered briefly before they are uploaded to swift. NOTES: This is required only when the configuration option swift_buffer_on_upload is set to True. This directory should be provisioned keeping in mind the swift_store_large_object_chunk_size and the maximum number of images that could be uploaded simultaneously by a given glance node. Possible values: String value representing an absolute directory path Related options: swift_buffer_on_upload swift_store_large_object_chunk_size vmware_api_retry_count = 10 integer value The number of VMware API retries. This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify retry forever . Possible Values: Any positive integer value Related options: None vmware_ca_file = None string value Absolute path to the CA bundle file. 
This configuration option enables the operator to use a custom Certificate Authority File to verify the ESX/vCenter certificate. If this option is set, the "vmware_insecure" option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server. Possible Values: Any string that is a valid absolute path to a CA file Related options: vmware_insecure vmware_datastores = None multi valued The datastores where the image can be stored. This configuration option specifies the datastores where the image can be stored in the VMWare store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>. When adding an image, the datastore with the highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected. Possible Values: Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight> Related options: None vmware_insecure = False boolean value Set verification of the ESX/vCenter server certificate. This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification. This option is ignored if the "vmware_ca_file" option is set. In that case, the ESX/vCenter server certificate will be verified using the file specified by the "vmware_ca_file" option. Possible Values: True False Related options: vmware_ca_file vmware_server_host = None host address value Address of the ESX/ESXi or vCenter Server target system. This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com). Possible Values: A valid IPv4 or IPv6 address A valid DNS name Related options: vmware_server_username vmware_server_password vmware_server_password = None string value Server password. This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is a password corresponding to the username specified using the "vmware_server_username" option Related options: vmware_server_host vmware_server_username vmware_server_username = None string value Server username. This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend. Possible Values: Any string that is the username for a user with appropriate privileges Related options: vmware_server_host vmware_server_password vmware_store_image_dir = /openstack_glance string value The directory where the glance images will be stored in the datastore.
This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance. Possible Values: Any string that is a valid path to a directory Related options: None vmware_task_poll_interval = 5 integer value Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server. This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call. Possible Values: Any positive integer value Related options: None 3.3.3. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/glance/glance-cache.conf file. Table 3.31. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = False boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check | [
"For a complete listing and description of each event refer to: http://docs.openstack.org/developer/glance/notifications.html",
"The values must be specified as: <group_name>.<event_name> For example: image.create,task.success,metadef_tag",
"'glance-direct', 'copy-image' and 'web-download' are enabled by default.",
"Related options: ** [DEFAULT]/node_staging_uri",
"'glance-direct', 'copy-image' and 'web-download' are enabled by default.",
"Related options: ** [DEFAULT]/node_staging_uri",
"'glance-direct', 'copy-image' and 'web-download' are enabled by default.",
"Related options: ** [DEFAULT]/node_staging_uri"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuration_reference/glance |
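As a minimal sketch tied to the S3 options described above, and assuming they are set in the [glance_store] section of glance-cache.conf (the section name, and whether stores and default_store or enabled_backends applies, depends on your glance_store release), a placeholder S3 backend configuration might look like this:
[glance_store]
stores = file,http,s3
default_store = s3
s3_store_host = http://s3.example.com:8080
s3_store_access_key = <access-key>
s3_store_secret_key = <secret-key>
s3_store_bucket = glance
s3_store_create_bucket_on_put = True
s3_store_bucket_url_format = auto
The endpoint, bucket name, and credentials are examples only and must be replaced with values for your S3 or S3-compatible storage.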
20.34. Listing Volume Information | 20.34. Listing Volume Information The virsh vol-info vol command lists basic information about the given storage volume. You must supply either the storage volume name, key, or path. The command also accepts the option --pool , where you can specify the storage pool that is associated with the storage volume. You can either supply the pool name, or the UUID. Example 20.94. How to view information about a storage volume The following example retrieves information about the storage volume named vol-new . When you run this command you should change the name of the storage volume to the name of your storage volume: The virsh vol-list pool command lists all of volumes that are associated to a given storage pool. This command requires a name or UUID of the storage pool. The --details option instructs virsh to additionally display volume type and capacity related information where available. Example 20.95. How to display the storage pools that are associated with a storage volume The following example lists all storage volumes that are associated with the storage pool vdisk : | [
"virsh vol-info vol-new",
"virsh vol-list vdisk"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-storage_volume_commands-listing_volume_information |
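As a brief sketch building on the examples above, the --pool and --details options can be combined; the pool name vdisk and the volume name vol-new are placeholders:
# virsh vol-info --pool vdisk vol-new
# virsh vol-list vdisk --details
The --details form adds the volume type and capacity information described earlier where the storage backend reports it.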
Chapter 6. Unmanaged Clair configuration | Chapter 6. Unmanaged Clair configuration Red Hat Quay users can run an unmanaged Clair configuration with the Red Hat Quay OpenShift Container Platform Operator. This feature allows users to create an unmanaged Clair database, or run their custom Clair configuration without an unmanaged database. An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Operator must communicate with the same database. An unmanaged Clair database can also be used when a user requires a highly-available (HA) Clair database that exists outside of a cluster. 6.1. Running a custom Clair configuration with an unmanaged Clair database Use the following procedure to set your Clair database to unmanaged. Important You must not use the same externally managed PostgreSQL database for both Red Hat Quay and Clair deployments. Your PostgreSQL database must also not be shared with other workloads, as it might exhaust the natural connection limit on the PostgreSQL side when connection-intensive workloads, like Red Hat Quay or Clair, contend for resources. Additionally, pgBouncer is not supported with Red Hat Quay or Clair, so it is not an option to resolve this issue. Procedure In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: false : apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: false 6.2. Configuring a custom Clair database with an unmanaged Clair database Red Hat Quay on OpenShift Container Platform allows users to provide their own Clair database. Use the following procedure to create a custom Clair database. Note The following procedure sets up Clair with SSL/TLS certifications. To view a similar procedure that does not set up Clair with SSL/TLS certifications, see "Configuring a custom Clair database with a managed Clair configuration". Procedure Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command: USD oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret Example Clair config.yaml file indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true Note The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml . It must be specified when configuring your clair-config.yaml . An example clair-config.yaml can be found at Clair on OpenShift config . 
Add the clair-config.yaml file to your bundle secret, for example: apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> extra_ca_cert_<name>: <base64 encoded ca cert> ssl.crt: <base64 encoded SSL certificate> ssl.key: <base64 encoded SSL private key> Note When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module. You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace> . For example: Example output | [
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: false",
"oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret",
"indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true",
"apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> extra_ca_cert_<name>: <base64 encoded ca cert> ssl.crt: <base64 encoded SSL certificate> ssl.key: <base64 encoded SSL private key>",
"oc get pods -n <namespace>",
"NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/vulnerability_reporting_with_clair_on_red_hat_quay/unmanaged-clair-configuration |
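As a hedged sketch, assuming the files shown above exist in the current directory and that the registry runs in the quay-enterprise namespace (a placeholder), the bundle secret can be regenerated and re-applied without deleting it first, and the Clair pod can then be checked:
USD oc create secret generic config-bundle-secret --from-file config.yaml=./config.yaml --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key -n quay-enterprise --dry-run=client -o yaml | oc apply -f -
USD oc get pods -n quay-enterprise
The --dry-run=client -o yaml | oc apply -f - pattern updates the existing secret in place; adjust the file list and namespace to match your deployment.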
Chapter 3. Running OpenShift Service Mesh 2.6 in the same cluster as OpenShift Service Mesh 3 | Chapter 3. Running OpenShift Service Mesh 2.6 in the same cluster as OpenShift Service Mesh 3 If you are moving from Red Hat OpenShift Service Mesh v2.6, you can run OpenShift Service Mesh v2.6 side-by-side with OpenShift Service Mesh v3.0, in one cluster, without them interfering with each other. 3.1. Running OpenShift Service Mesh 2.6 and OpenShift Service Mesh 3 using multi-tenant deployment model If you are moving from Red Hat OpenShift Service Mesh 2.6 with the default multi-tenant deployment model, you can run OpenShift Service Mesh 2.6 side-by-side with OpenShift Service Mesh 3.0, in one cluster, without them interfering with each other. In OpenShift Service Mesh 2.6, you can check your deployment model from the ServiceMeshControlPlane under spec.mode : Example ServiceMeshControlPlane yaml apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: MultiTenant Prerequisites You are running OpenShift Container Platform 4.14 or later. You are running OpenShift Service Mesh 2.6. Important If you are not running OpenShift Service Mesh 2.6, you must upgrade to 2.6 before following this procedure. To upgrade to OpenShift Service Mesh 2.6, see: Upgrading Service Mesh 2.x Procedure Install the OpenShift Service Mesh 3 Operator. Create an IstioCNI resource in the istio-cni namespace. Create an Istio resource in a different namespace than the namespace used in the ServiceMeshControlPlane resource in OpenShift Service Mesh 2.6. This example uses the istio-system3 namespace: Example Istio resource with istio-system3 kind: Istio apiVersion: sailoperator.io/v1alpha1 metadata: name: ossm3 1 spec: namespace: istio-system3 2 values: meshConfig: discoverySelectors: 3 - matchExpressions: - key: maistra.io/member-of operator: DoesNotExist updateStrategy: type: InPlace version: v1.23.0 1 Do not use default as the name. 2 Must be different from the namespace used in the ServiceMeshControlPlane resource in OpenShift Service Mesh 2.6. This example uses the istio-system3 namespace. 3 To ignore OpenShift Service Mesh 2.6 namespaces, configure the discoverySelectors section as shown. All other namespaces will be part of the OpenShift Service Mesh 3.0 mesh. Deploy your workloads and label the namespaces with the istio.io/rev=ossm3 label by running the following command: USD oc label namespace <namespace-name> istio.io/rev=<revision-name> Note If you have changed spec.memberSelectors in ServiceMeshMemberRoll in the ServiceMeshControlPlane resource in OpenShift Service Mesh 2.6, then use the istio-injection=enabled label for your OpenShift Service Mesh 3.0 workload namespaces.
Confirm the application workloads are managed by their respective control planes by running the following command: USD istioctl ps -i istio-system Sample output istio-system USD istioctl ps -i istio-system NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION details-v1-7f46897b-88x4l.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 mongodb-v1-6cf7dc9885-7nlmq.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 mysqldb-v1-7c4c44b9b4-22b57.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 productpage-v1-6f9c6589cb-l6rvg.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 ratings-v1-559b64556-f6b4l.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 ratings-v2-8ddc4d65c-bztrg.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 ratings-v2-mysql-cbc957476-m5j7w.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 reviews-v1-847fb7c54d-7dwt7.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 reviews-v2-5c7ff5b77b-5bpc4.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 reviews-v3-5c5d764c9b-mk8vn.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 Sample output istio-system3 USD istioctl ps -i istio-system3 NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION details-v1-57f6466bdc-5krth.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 productpage-v1-5b84ccdddf-f8d9t.bookinfo2 Kubernetes SYNCED (2m39s) SYNCED (2m39s) SYNCED (2m34s) SYNCED (2m39s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 ratings-v1-fb764cb99-kx2dr.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 reviews-v1-8bd5549cf-xqqmd.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 reviews-v2-7f7cc8bf5c-5rvln.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 reviews-v3-84f674b88c-ftcqg.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 3.2. Running Red Hat OpenShift Service Mesh 2.6 and Red Hat OpenShift Service Mesh 3 using cluster-wide deployment model If you are moving from Red Hat OpenShift Service Mesh 2.6 in a cluster-wide deployment model, you can run OpenShift Service Mesh 2.6 side-by-side with OpenShift Service Mesh 3.0, in one cluster, without them interfering with each other. In OpenShift Service Mesh 2.6, you can check your deployment model from the ServiceMeshControlPlane under spec.mode : Example ServiceMeshControlPlane yaml apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: ClusterWide To prevent conflicts with OpenShift Service Mesh 3.0 when using the OpenShift Service Mesh 2.6 cluster-wide deployment model, you need to configure the ServiceMeshControlPlane resource to restrict namespaces to only those belonging to (SMProduct) 2.6. 
Prerequisites You are running OpenShift Container Platform 4.14 or later. You are running OpenShift Service Mesh 2.6. Important If you are not running OpenShift Service Mesh 2.6, you must upgrade to 2.6 before following this procedure. To upgrade to OpenShift Service Mesh version to 2.6, see: Upgrading Service Mesh 2.x Procedure Configure discoverySelectors , and set the ENABLE_ENHANCED_RESOURCE_SCOPING environment variable on the pilot container to true in your OpenShift Service Mesh 2.6 ServiceMeshControlPlane custom resource (CR): Example ServiceMeshControlPlane CR apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 mode: ClusterWide meshConfig: discoverySelectors: - matchExpressions: - key: maistra.io/member-of operator: Exists runtime: components: pilot: container: env: ENABLE_ENHANCED_RESOURCE_SCOPING: 'true' Install the OpenShift Service Mesh 3 Operator. Create an IstioCNI resource in the istio-cni namespace. Create an Istio resource in a different namespace than the namespace used in the ServiceMeshControlPlane resource in OpenShift Service Mesh 2.6. This example uses the istio-system3 namespace: Example Istio resource with istio-system3 kind: Istio apiVersion: sailoperator.io/v1alpha1 metadata: name: ossm3 1 spec: namespace: istio-system3 2 values: meshConfig: discoverySelectors: 3 - matchExpressions: - key: maistra.io/member-of operator: DoesNotExist updateStrategy: type: InPlace version: v1.23.0 1 Do not use default as the name. 2 Must be different from the namespace used in the ServiceMeshControlPlane resource in OpenShift Service Mesh 2.6. This example uses the istio-system3 namespace. 3 To ignore OpenShift Service Mesh 2.6 namespaces, configure the discoverySelectors section as shown. All other namespaces will be part of the OpenShift Service Mesh 3.0 mesh. Deploy your workloads and label the namespaces with istio.io/rev=ossm3 label by running the following command: USD oc label namespace <namespace-name> istio.io/rev=ossm3 Note If you have changed spec.memberSelectors in ServiceMeshMemberRoll in the ServiceMeshControlPlane resource in OpenShift Service Mesh 2.6., then use the istio-injection=enabled label for your OpenShift Service Mesh 3.0 workload namespaces. 
Confirm the application workloads are managed by their respective control planes by running the following command: USD istioctl ps -i istio-system Sample output istio-system USD istioctl ps -i istio-system NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION details-v1-7f46897b-88x4l.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 mongodb-v1-6cf7dc9885-7nlmq.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 mysqldb-v1-7c4c44b9b4-22b57.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 productpage-v1-6f9c6589cb-l6rvg.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 ratings-v1-559b64556-f6b4l.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 ratings-v2-8ddc4d65c-bztrg.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 ratings-v2-mysql-cbc957476-m5j7w.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 reviews-v1-847fb7c54d-7dwt7.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 reviews-v2-5c7ff5b77b-5bpc4.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 reviews-v3-5c5d764c9b-mk8vn.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 Sample output istio-system3 USD istioctl ps -i istio-system3 NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION details-v1-57f6466bdc-5krth.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 productpage-v1-5b84ccdddf-f8d9t.bookinfo2 Kubernetes SYNCED (2m39s) SYNCED (2m39s) SYNCED (2m34s) SYNCED (2m39s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 ratings-v1-fb764cb99-kx2dr.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 reviews-v1-8bd5549cf-xqqmd.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 reviews-v2-7f7cc8bf5c-5rvln.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 reviews-v3-84f674b88c-ftcqg.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 3.3. Additional resources Installing OpenShift Service Mesh Operator Upgrading Service Mesh 2.x Service Mesh 2.x deployment models Install Multiple Istio Control Planes in a Single Cluster | [
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: MultiTenant",
"kind: Istio piVersion: sailoperator.io/v1alpha1 metadata: name: ossm3 1 spec: namespace: istio-system3 2 values: meshConfig: discoverySelectors: 3 - matchExpressions: - key: maistra.io/member-of operator: DoesNotExist updateStrategy: type: InPlace version: v1.23.0",
"oc label namespace <namespace-name> istio.io/rev=<revision-name>",
"istioctl ps -i istio-system",
"istioctl ps -i istio-system NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION details-v1-7f46897b-88x4l.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 mongodb-v1-6cf7dc9885-7nlmq.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 mysqldb-v1-7c4c44b9b4-22b57.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 productpage-v1-6f9c6589cb-l6rvg.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 ratings-v1-559b64556-f6b4l.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 ratings-v2-8ddc4d65c-bztrg.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 ratings-v2-mysql-cbc957476-m5j7w.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 reviews-v1-847fb7c54d-7dwt7.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 reviews-v2-5c7ff5b77b-5bpc4.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 reviews-v3-5c5d764c9b-mk8vn.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8",
"istioctl ps -i istio-system3 NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION details-v1-57f6466bdc-5krth.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 productpage-v1-5b84ccdddf-f8d9t.bookinfo2 Kubernetes SYNCED (2m39s) SYNCED (2m39s) SYNCED (2m34s) SYNCED (2m39s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 ratings-v1-fb764cb99-kx2dr.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 reviews-v1-8bd5549cf-xqqmd.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 reviews-v2-7f7cc8bf5c-5rvln.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 reviews-v3-84f674b88c-ftcqg.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: mode: ClusterWide",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: version: v2.6 mode: ClusterWide meshConfig: discoverySelectors: - matchExpressions: - key: maistra.io/member-of operator: Exists runtime: components: pilot: container: env: ENABLE_ENHANCED_RESOURCE_SCOPING: 'true'",
"kind: Istio apiVersion: sailoperator.io/v1alpha1 metadata: name: ossm3 1 spec: namespace: istio-system3 2 values: meshConfig: discoverySelectors: 3 - matchExpressions: - key: maistra.io/member-of operator: DoesNotExist updateStrategy: type: InPlace version: v1.23.0",
"oc label namespace <namespace-name> istio.io/rev=ossm3",
"istioctl ps -i istio-system",
"istioctl ps -i istio-system NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION details-v1-7f46897b-88x4l.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 mongodb-v1-6cf7dc9885-7nlmq.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 mysqldb-v1-7c4c44b9b4-22b57.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 productpage-v1-6f9c6589cb-l6rvg.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 ratings-v1-559b64556-f6b4l.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 ratings-v2-8ddc4d65c-bztrg.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 ratings-v2-mysql-cbc957476-m5j7w.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 reviews-v1-847fb7c54d-7dwt7.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 reviews-v2-5c7ff5b77b-5bpc4.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8 reviews-v3-5c5d764c9b-mk8vn.bookinfo Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-install-istio-system-bd58bdcd5-2htkf 1.20.8",
"istioctl ps -i istio-system3 NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION details-v1-57f6466bdc-5krth.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 productpage-v1-5b84ccdddf-f8d9t.bookinfo2 Kubernetes SYNCED (2m39s) SYNCED (2m39s) SYNCED (2m34s) SYNCED (2m39s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 ratings-v1-fb764cb99-kx2dr.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 reviews-v1-8bd5549cf-xqqmd.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 reviews-v2-7f7cc8bf5c-5rvln.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0 reviews-v3-84f674b88c-ftcqg.bookinfo2 Kubernetes SYNCED (2m40s) SYNCED (2m40s) SYNCED (2m34s) SYNCED (2m40s) IGNORED istiod-ossm3-5b46b6b8cb-gbjx6 1.23.0"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_mesh/3.0.0tp1/html/installing/ossm-running-v2-same-cluster-as-v3_ossm-sidecar-injection-assembly |
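As a small sketch related to the procedures above, with a hypothetical workload namespace bookinfo2, you can apply the revision label and then inspect which namespaces carry the labels that the two control planes select on; the label columns are assumptions based on the discoverySelectors shown earlier:
USD oc label namespace bookinfo2 istio.io/rev=ossm3
USD oc get namespaces -L istio.io/rev -L maistra.io/member-of
Namespaces that show a maistra.io/member-of value remain with the OpenShift Service Mesh 2.6 control plane, while namespaces labeled istio.io/rev=ossm3 are picked up by the 3.0 control plane.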
Chapter 5. Running Directory Server in FIPS mode | Chapter 5. Running Directory Server in FIPS mode Directory Server fully supports the Federal Information Processing Standard (FIPS) 140-2. When you run Directory Server in FIPS mode, security-related settings change. For example, SSL is automatically disabled and only TLS 1.2 and 1.3 encryption is used. 5.1. Enabling the FIPS mode To use Directory Server in Federal Information Processing Standard (FIPS) mode, enable the mode in RHEL and Directory Server. Prerequisites You enabled the FIPS mode in RHEL. Procedure Enable the FIPS mode for the network security services (NSS) database: # modutil -dbdir /etc/dirsrv/slapd- instance_name / -fips true Restart the instance: # dsctl instance_name restart Verification Verify that FIPS mode is enabled for the NSS database: # modutil -dbdir /etc/dirsrv/slapd- instance_name / -chkfips true FIPS mode enabled. The command returns FIPS mode enabled if the module is in FIPS mode. 5.2. Additional resources Federal Information Processing Standard (FIPS) Switching the system to FIPS mode | [
"modutil -dbdir /etc/dirsrv/slapd- instance_name / -fips true",
"dsctl instance_name restart",
"modutil -dbdir /etc/dirsrv/slapd- instance_name / -chkfips true FIPS mode enabled."
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/installing_red_hat_directory_server/assembly_running-directory-server-in-fips-mode_installing-rhds |
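As a short sketch using an example instance named instance1, you might confirm the RHEL FIPS state before enabling the mode for the instance NSS database and restarting:
# fips-mode-setup --check
# modutil -dbdir /etc/dirsrv/slapd-instance1/ -fips true
# dsctl instance1 restart
The fips-mode-setup --check command reports whether the operating system itself is in FIPS mode, which is a prerequisite for the Directory Server setting.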
Chapter 7. Additional resources | Chapter 7. Additional resources Planning a Red Hat Decision Manager installation Getting started with decision services Getting started with Red Hat build of OptaPlanner Packaging and deploying an Red Hat Decision Manager project | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/designing_your_decision_management_architecture_for_red_hat_decision_manager/additional_resources |
Chapter 5. Performing operations with the Object Storage service (swift) | Chapter 5. Performing operations with the Object Storage service (swift) The Object Storage service (swift) stores its objects, or data, in containers. Containers are similar to directories in a file system although you cannot nest them. You can store any kind of unstructured data in containers. For example, objects can include photos, text files, or images. Stored objects are not compressed. You can create pseudo-folders in containers to organize data. Pseudo-folders are logical devices for containing objects and creating a nested structure in containers. For example, you might create an Images folder in which to store pictures and a Media folder in which to store videos. You can create one or more containers in each project, and one or more objects or pseudo-folders in each container. Note To execute openstack client commands on the cloud, you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods: Use the --os-cloud option with each command: Use this option if you access more than one cloud. Create an environment variable for the cloud name in your bashrc file: Prerequisites The administrator has created a project for you, and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. 5.1. Creating private and public containers You can create private or public containers to store data in the Object Storage service (swift): Private: Limits access to a member of a project. Public: Permits access to anyone with the public URL. New containers use the default storage policy. If your Red Hat OpenStack Services on OpenShift (RHOSO) deployment has multiple storage policies defined, for example, a default policy and another policy that enables erasure coding, you can configure a container to use a non-default storage policy. Procedure Create a private or public container: Create a private container to allow members of a project to list the objects in the container, upload, and download objects. Project members include an Identity service (keystone) token for the project in their requests: Replace <container> with the name of your container. Replace <project_id> with the ID of the project. Create a public container to allow anyone with the public URL to list objects in the container and download objects from the container: Configure the container to use a non-default storage policy: Replace <policy> with the name or alias of the policy you want to use for the container. 5.2. Creating pseudo-folders in containers You can create pseudo-folders to organize data in a container in the OpenStack Object Storage service (swift). You create a pseudo-folder by prefixing the names of the objects with the name of the pseudo-folder and a forward slash character (/). For example, if you have a container called container , and you want to organize objects in a pseudo-folder called folder , you add folder/ at the beginning of the name of the object data file: folder/object.ext . You can create nested pseudo-folders in the same way, by including the name of the nested folder and a forward slash at the beginning of the object name, for example, folder/nested_folder/object.ext . The URL of the object will end with container/folder/object.ext or container/folder/nested_folder/object.ext . You can use the GET method with prefix and delimiter parameters to navigate pseudo-folders. 
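For example, as a brief sketch with placeholder names, if a container named media already holds the objects folder/object1.ext and folder/nested_folder/object2.ext , you can list only the contents of the pseudo-folder with the openstack client rather than a raw GET request: openstack object list media --prefix folder/ . The procedure below shows how to create such objects.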
Procedure Upload an object and create a pseudo-folder in a container: Replace <container> with the name of your container. Replace <pseudo_folder> with the name of the pseudo-folder you want to create. Replace <object_filename> with the name of your object data file. Upload an object and create a nested pseudo-folder: Replace <nested_folder> with the name of your nested pseudo-folder. View a list of objects, including nested pseudo-folders, in a pseudo-folder: Replace <account> with your namespace for containers, for example, your Red Hat OpenStack Services on OpenShift (RHOSO) project or tenant. 5.3. Deleting containers from the Object Storage service If you want to delete a container from the Object Storage service (swift), ensure that you delete all objects in the container first. For more information, see Deleting objects from the Object Storage service . Procedure Delete a container: Replace <container> with the name of the container you want to delete. 5.4. Uploading objects to containers You can upload object data files to a container or pseudo-folder in the Object Storage service (swift). Alternatively, you can create an object as a placeholder in a container or pseudo-folder, and upload the file to the object later. Procedure Upload an object to a container: Replace <container> with the name of the container. Replace <object_filename> with the name of the object data file. 5.5. Copying objects between containers You can copy an object from a source container or pseudo-folder to a destination container or pseudo-folder in the Object Storage service (swift). Note If you do not specify a unique name for the destination object, it keeps the same name as the source object. If you use a name that already exists in the destination, the new object overwrites the contents of the object. Procedure Copy an object from one container to a destination container: Replace </container/object> with the container and name of the destination object. Replace <container> with the name of the container you want to copy the object from. Replace <object> with the name of the object you want to copy. You can specify multiple objects to copy. 5.6. Deleting objects from the Object Storage service Delete an object from a container in the Object Storage service (swift). Procedure Delete an object from a container: Replace <container> with the name of the container you are deleting the object from. Replace <object> with the name of the object you are deleting. You can specify multiple objects to delete. Optional: To delete all objects in the container, use the --all command option. | [
"openstack flavor list --os-cloud <cloud_name>",
"`export OS_CLOUD=<cloud_name>`",
"openstack container create <container> --read-acl \"<project_id>\":*\" --write-acl \"<project_id>:*\"",
"openstack container create <container> --read-acl \".r:*,.rlistings\"",
"openstack container set -H \"X-Storage-Policy:<policy>\" <container>",
"openstack object create <container> <pseudo_folder>/<object_filename>",
"openstack object create <container> <pseudo_folder>/<nested_folder>/<object_filename>",
"curl -X GET -i -H \"X-Auth-Token: USDtoken\" USDpublicurl/v1/<account>/<container>?prefix=<folder>&delimiter=/",
"openstack container delete <container>",
"openstack object create <container> <object_filename>",
"openstack copy --destination </container/object> <container> <object> [<object>] [...]",
"openstack object delete [--all] <container> <object> [...]"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/performing_storage_operations/assembly_swift-performing-operations-with-the-object-storage-service_glance-creating-os-images |
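As an end-to-end sketch using placeholder names, and assuming a local file videos/demo.mp4 exists so that the object name carries the pseudo-folder prefix, a typical session with the commands above might create a public container, upload an object, list the pseudo-folder, and download the object again:
openstack container create media --read-acl ".r:*,.rlistings"
openstack object create media videos/demo.mp4
openstack object list media --prefix videos/
openstack object save media videos/demo.mp4
Remember to pass --os-cloud <cloud_name> or export OS_CLOUD as described at the start of the chapter.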
Release Notes for Red Hat build of Debezium 2.7.3 | Release Notes for Red Hat build of Debezium 2.7.3 Red Hat build of Debezium 2.7.3 What's new in Red Hat build of Debezium Red Hat build of Debezium Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/release_notes_for_red_hat_build_of_debezium_2.7.3/index |
Chapter 89. volume | Chapter 89. volume This chapter describes the commands under the volume command. 89.1. volume backup create Create new volume backup Usage: Table 89.1. Positional arguments Value Summary <volume> Volume to backup (name or id) Table 89.2. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the backup --description <description> Description of the backup --container <container> Optional backup container name --snapshot <snapshot> Snapshot to backup (name or id) --force Allow to back up an in-use volume --incremental Perform an incremental backup Table 89.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.2. volume backup delete Delete volume backup(s) Usage: Table 89.7. Positional arguments Value Summary <backup> Backup(s) to delete (name or id) Table 89.8. Command arguments Value Summary -h, --help Show this help message and exit --force Allow delete in state other than error or available 89.3. volume backup list List volume backups Usage: Table 89.9. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --name <name> Filters results by the backup name --status <status> Filters results by the backup status ( creating , available , deleting , error , restoring or error_restoring ) --volume <volume> Filters results by the volume which they backup (name or ID) --marker <volume-backup> The last backup of the page (name or id) --limit <num-backups> Maximum number of backups to display --all-projects Include all projects (admin only) Table 89.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 89.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 89.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.4. volume backup record export Export volume backup details. Backup information can be imported into a new service instance to be able to restore. Usage: Table 89.14. Positional arguments Value Summary <backup> Backup to export (name or id) Table 89.15. Command arguments Value Summary -h, --help Show this help message and exit Table 89.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.5. volume backup record import Import volume backup details. Exported backup details contain the metadata necessary to restore to a new or rebuilt service instance Usage: Table 89.20. Positional arguments Value Summary <backup_service> Backup service containing the backup. <backup_metadata> Encoded backup metadata from export. Table 89.21. Command arguments Value Summary -h, --help Show this help message and exit Table 89.22. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.24. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.6. volume backup restore Restore volume backup Usage: Table 89.26. Positional arguments Value Summary <backup> Backup to restore (name or id) <volume> Volume to restore to (name or id) Table 89.27. Command arguments Value Summary -h, --help Show this help message and exit Table 89.28. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.29. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.30. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.31. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.7. volume backup set Set volume backup properties Usage: Table 89.32. Positional arguments Value Summary <backup> Backup to modify (name or id) Table 89.33. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New backup name --description <description> New backup description --state <state> New backup state ("available" or "error") (admin only) (This option simply changes the state of the backup in the database with no regard to actual status, exercise caution when using) 89.8. volume backup show Display volume backup details Usage: Table 89.34. Positional arguments Value Summary <backup> Backup to display (name or id) Table 89.35. Command arguments Value Summary -h, --help Show this help message and exit Table 89.36. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.37. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.38. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.39. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.9. volume create Create new volume Usage: Table 89.40. Positional arguments Value Summary <name> Volume name Table 89.41. Command arguments Value Summary -h, --help Show this help message and exit --size <size> Volume size in gb (required unless --snapshot or --source is specified) --type <volume-type> Set the type of volume --image <image> Use <image> as source of volume (name or id) --snapshot <snapshot> Use <snapshot> as source of volume (name or id) --source <volume> Volume to clone (name or id) --description <description> Volume description --availability-zone <availability-zone> Create volume in <availability-zone> --consistency-group consistency-group> Consistency group where the new volume belongs to --property <key=value> Set a property to this volume (repeat option to set multiple properties) --hint <key=value> Arbitrary scheduler hint key-value pairs to help boot an instance (repeat option to set multiple hints) --bootable Mark volume as bootable --non-bootable Mark volume as non-bootable (default) --read-only Set volume to read-only access mode --read-write Set volume to read-write access mode (default) Table 89.42. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.43. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.44. 
Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.45. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.10. volume delete Delete volume(s) Usage: Table 89.46. Positional arguments Value Summary <volume> Volume(s) to delete (name or id) Table 89.47. Command arguments Value Summary -h, --help Show this help message and exit --force Attempt forced removal of volume(s), regardless of state (defaults to False) --purge Remove any snapshots along with volume(s) (defaults to false) 89.11. volume host set Set volume host properties Usage: Table 89.48. Positional arguments Value Summary <host-name> Name of volume host Table 89.49. Command arguments Value Summary -h, --help Show this help message and exit --disable Freeze and disable the specified volume host --enable Thaw and enable the specified volume host 89.12. volume list List volumes Usage: Table 89.50. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Filter results by project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --user <user> Filter results by user (name or id) (admin only) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --name <name> Filter results by volume name --status <status> Filter results by status --all-projects Include all projects (admin only) --long List additional fields in output --marker <volume> The last volume id of the page --limit <num-volumes> Maximum number of volumes to display Table 89.51. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 89.52. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 89.53. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.54. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.13. volume migrate Migrate volume to a new host Usage: Table 89.55. Positional arguments Value Summary <volume> Volume to migrate (name or id) Table 89.56. 
Command arguments Value Summary -h, --help Show this help message and exit --host <host> Destination host (takes the form: host@backend-name#pool) --force-host-copy Enable generic host-based force-migration, which bypasses driver optimizations --lock-volume If specified, the volume state will be locked and will not allow a migration to be aborted (possibly by another operation) 89.14. volume qos associate Associate a QoS specification to a volume type Usage: Table 89.57. Positional arguments Value Summary <qos-spec> Qos specification to modify (name or id) <volume-type> Volume type to associate the qos (name or id) Table 89.58. Command arguments Value Summary -h, --help Show this help message and exit 89.15. volume qos create Create new QoS specification Usage: Table 89.59. Positional arguments Value Summary <name> New qos specification name Table 89.60. Command arguments Value Summary -h, --help Show this help message and exit --consumer <consumer> Consumer of the qos. valid consumers: back-end, both, front-end (defaults to both ) --property <key=value> Set a qos specification property (repeat option to set multiple properties) Table 89.61. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.62. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.63. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.64. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.16. volume qos delete Delete QoS specification Usage: Table 89.65. Positional arguments Value Summary <qos-spec> Qos specification(s) to delete (name or id) Table 89.66. Command arguments Value Summary -h, --help Show this help message and exit --force Allow to delete in-use qos specification(s) 89.17. volume qos disassociate Disassociate a QoS specification from a volume type Usage: Table 89.67. Positional arguments Value Summary <qos-spec> Qos specification to modify (name or id) Table 89.68. Command arguments Value Summary -h, --help Show this help message and exit --volume-type <volume-type> Volume type to disassociate the qos from (name or id) --all Disassociate the qos from every volume type 89.18. volume qos list List QoS specifications Usage: Table 89.69. Command arguments Value Summary -h, --help Show this help message and exit Table 89.70. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 89.71. 
CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 89.72. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.73. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.19. volume qos set Set QoS specification properties Usage: Table 89.74. Positional arguments Value Summary <qos-spec> Qos specification to modify (name or id) Table 89.75. Command arguments Value Summary -h, --help Show this help message and exit --property <key=value> Property to add or modify for this qos specification (repeat option to set multiple properties) 89.20. volume qos show Display QoS specification details Usage: Table 89.76. Positional arguments Value Summary <qos-spec> Qos specification to display (name or id) Table 89.77. Command arguments Value Summary -h, --help Show this help message and exit Table 89.78. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.79. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.80. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.81. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.21. volume qos unset Unset QoS specification properties Usage: Table 89.82. Positional arguments Value Summary <qos-spec> Qos specification to modify (name or id) Table 89.83. Command arguments Value Summary -h, --help Show this help message and exit --property <key> Property to remove from the qos specification. (repeat option to unset multiple properties) 89.22. volume service list List service command Usage: Table 89.84. Command arguments Value Summary -h, --help Show this help message and exit --host <host> List services on specified host (name only) --service <service> List only specified service (name only) --long List additional fields in output Table 89.85. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 89.86. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 89.87. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.88. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.23. volume service set Set volume service properties Usage: Table 89.89. Positional arguments Value Summary <host> Name of host <service> Name of service (binary name) Table 89.90. Command arguments Value Summary -h, --help Show this help message and exit --enable Enable volume service --disable Disable volume service --disable-reason <reason> Reason for disabling the service (should be used with --disable option) 89.24. volume set Set volume properties Usage: Table 89.91. Positional arguments Value Summary <volume> Volume to modify (name or id) Table 89.92. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New volume name --size <size> Extend volume size in gb --description <description> New volume description --no-property Remove all properties from <volume> (specify both --no-property and --property to remove the current properties before setting new properties.) --property <key=value> Set a property on this volume (repeat option to set multiple properties) --image-property <key=value> Set an image property on this volume (repeat option to set multiple image properties) --state <state> New volume state ("available", "error", "creating", "deleting", "in-use", "attaching", "detaching", "error_deleting" or "maintenance") (admin only) (This option simply changes the state of the volume in the database with no regard to actual status, exercise caution when using) --attached Set volume attachment status to "attached" (admin only) (This option simply changes the state of the volume in the database with no regard to actual status, exercise caution when using) --detached Set volume attachment status to "detached" (admin only) (This option simply changes the state of the volume in the database with no regard to actual status, exercise caution when using) --type <volume-type> New volume type (name or id) --retype-policy <retype-policy> Migration policy while re-typing volume ("never" or "on-demand", default is "never" ) (available only when --type option is specified) --bootable Mark volume as bootable --non-bootable Mark volume as non-bootable --read-only Set volume to read-only access mode --read-write Set volume to read-write access mode 89.25. volume show Display volume details Usage: Table 89.93. Positional arguments Value Summary <volume> Volume to display (name or id) Table 89.94. Command arguments Value Summary -h, --help Show this help message and exit Table 89.95. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.96. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.97. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.98. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.26. volume snapshot create Create new volume snapshot Usage: Table 89.99. Positional arguments Value Summary <snapshot-name> Name of the new snapshot Table 89.100. Command arguments Value Summary -h, --help Show this help message and exit --volume <volume> Volume to snapshot (name or id) (default is <snapshot-name>) --description <description> Description of the snapshot --force Create a snapshot attached to an instance. default is False --property <key=value> Set a property to this snapshot (repeat option to set multiple properties) --remote-source <key=value> The attribute(s) of the existing remote volume snapshot (admin required) (repeat option to specify multiple attributes) e.g.: --remote-source source-name=test_name --remote-source source-id=test_id Table 89.101. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.102. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.103. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.104. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.27. volume snapshot delete Delete volume snapshot(s) Usage: Table 89.105. Positional arguments Value Summary <snapshot> Snapshot(s) to delete (name or id) Table 89.106. Command arguments Value Summary -h, --help Show this help message and exit --force Attempt forced removal of snapshot(s), regardless of state (defaults to False) 89.28. volume snapshot list List volume snapshots Usage: Table 89.107. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Include all projects (admin only) --project <project> Filter results by project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --long List additional fields in output --marker <volume-snapshot> The last snapshot id of the page --limit <num-snapshots> Maximum number of snapshots to display --name <name> Filters results by a name. --status <status> Filters results by a status. ( available , error , creating , deleting or error_deleting ) --volume <volume> Filters results by a volume (name or id). Table 89.108. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 89.109. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 89.110. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.111. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.29. volume snapshot set Set volume snapshot properties Usage: Table 89.112. Positional arguments Value Summary <snapshot> Snapshot to modify (name or id) Table 89.113. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New snapshot name --description <description> New snapshot description --no-property Remove all properties from <snapshot> (specify both --no-property and --property to remove the current properties before setting new properties.) --property <key=value> Property to add/change for this snapshot (repeat option to set multiple properties) --state <state> New snapshot state. ("available", "error", "creating", "deleting", or "error_deleting") (admin only) (This option simply changes the state of the snapshot in the database with no regard to actual status, exercise caution when using) 89.30. volume snapshot show Display volume snapshot details Usage: Table 89.114. Positional arguments Value Summary <snapshot> Snapshot to display (name or id) Table 89.115. Command arguments Value Summary -h, --help Show this help message and exit Table 89.116. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.117. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.118. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.119. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.31. volume snapshot unset Unset volume snapshot properties Usage: Table 89.120. Positional arguments Value Summary <snapshot> Snapshot to modify (name or id) Table 89.121. Command arguments Value Summary -h, --help Show this help message and exit --property <key> Property to remove from snapshot (repeat option to remove multiple properties) 89.32. 
volume transfer request accept Accept volume transfer request. Usage: Table 89.122. Positional arguments Value Summary <transfer-request-id> Volume transfer request to accept (id only) Table 89.123. Command arguments Value Summary -h, --help Show this help message and exit --auth-key <key> Volume transfer request authentication key Table 89.124. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.125. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.126. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.127. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.33. volume transfer request create Create volume transfer request. Usage: Table 89.128. Positional arguments Value Summary <volume> Volume to transfer (name or id) Table 89.129. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New transfer request name (default to none) Table 89.130. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.131. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.132. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.133. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.34. volume transfer request delete Delete volume transfer request(s). Usage: Table 89.134. Positional arguments Value Summary <transfer-request> Volume transfer request(s) to delete (name or id) Table 89.135. Command arguments Value Summary -h, --help Show this help message and exit 89.35. volume transfer request list Lists all volume transfer requests. Usage: Table 89.136. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Include all projects (admin only) Table 89.137. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 89.138. 
CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 89.139. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.140. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.36. volume transfer request show Show volume transfer request details. Usage: Table 89.141. Positional arguments Value Summary <transfer-request> Volume transfer request to display (name or id) Table 89.142. Command arguments Value Summary -h, --help Show this help message and exit Table 89.143. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.144. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.145. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.146. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.37. volume type create Create new volume type Usage: Table 89.147. Positional arguments Value Summary <name> Volume type name Table 89.148. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Volume type description --public Volume type is accessible to the public --private Volume type is not accessible to the public --property <key=value> Set a property on this volume type (repeat option to set multiple properties) --project <project> Allow <project> to access private type (name or id) (Must be used with --private option) --encryption-provider <provider> Set the encryption provider format for this volume type (e.g "luks" or "plain") (admin only) (This option is required when setting encryption type of a volume. Consider using other encryption options such as: "-- encryption-cipher", "--encryption-key-size" and "-- encryption-control-location") --encryption-cipher <cipher> Set the encryption algorithm or mode for this volume type (e.g "aes-xts-plain64") (admin only) --encryption-key-size <key-size> Set the size of the encryption key of this volume type (e.g "128" or "256") (admin only) --encryption-control-location <control-location> Set the notional service where the encryption is performed ("front-end" or "back-end") (admin only) (The default value for this option is "front-end" when setting encryption type of a volume. Consider using other encryption options such as: "--encryption- cipher", "--encryption-key-size" and "--encryption- provider") --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 89.149. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.150. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.151. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.152. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.38. volume type delete Delete volume type(s) Usage: Table 89.153. Positional arguments Value Summary <volume-type> Volume type(s) to delete (name or id) Table 89.154. Command arguments Value Summary -h, --help Show this help message and exit 89.39. volume type list List volume types Usage: Table 89.155. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --default List the default volume type --public List only public types --private List only private types (admin only) --encryption-type Display encryption information for each volume type (admin only) Table 89.156. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 89.157. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 89.158. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.159. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.40. volume type set Set volume type properties Usage: Table 89.160. Positional arguments Value Summary <volume-type> Volume type to modify (name or id) Table 89.161. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set volume type name --description <description> Set volume type description --property <key=value> Set a property on this volume type (repeat option to set multiple properties) --project <project> Set volume type access to project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 
--encryption-provider <provider> Set the encryption provider format for this volume type (e.g "luks" or "plain") (admin only) (This option is required when setting encryption type of a volume for the first time. Consider using other encryption options such as: "--encryption-cipher", "--encryption- key-size" and "--encryption-control-location") --encryption-cipher <cipher> Set the encryption algorithm or mode for this volume type (e.g "aes-xts-plain64") (admin only) --encryption-key-size <key-size> Set the size of the encryption key of this volume type (e.g "128" or "256") (admin only) --encryption-control-location <control-location> Set the notional service where the encryption is performed ("front-end" or "back-end") (admin only) (The default value for this option is "front-end" when setting encryption type of a volume for the first time. Consider using other encryption options such as: "--encryption-cipher", "--encryption-key-size" and "-- encryption-provider") 89.41. volume type show Display volume type details Usage: Table 89.162. Positional arguments Value Summary <volume-type> Volume type to display (name or id) Table 89.163. Command arguments Value Summary -h, --help Show this help message and exit --encryption-type Display encryption information of this volume type (admin only) Table 89.164. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 89.165. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 89.166. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 89.167. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 89.42. volume type unset Unset volume type properties Usage: Table 89.168. Positional arguments Value Summary <volume-type> Volume type to modify (name or id) Table 89.169. Command arguments Value Summary -h, --help Show this help message and exit --property <key> Remove a property from this volume type (repeat option to remove multiple properties) --project <project> Removes volume type access to project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --encryption-type Remove the encryption type for this volume type (admin only) 89.43. volume unset Unset volume properties Usage: Table 89.170. Positional arguments Value Summary <volume> Volume to modify (name or id) Table 89.171. Command arguments Value Summary -h, --help Show this help message and exit --property <key> Remove a property from volume (repeat option to remove multiple properties) --image-property <key> Remove an image property from volume (repeat option to remove multiple image properties) | [
"openstack volume backup create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] [--container <container>] [--snapshot <snapshot>] [--force] [--incremental] <volume>",
"openstack volume backup delete [-h] [--force] <backup> [<backup> ...]",
"openstack volume backup list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] [--name <name>] [--status <status>] [--volume <volume>] [--marker <volume-backup>] [--limit <num-backups>] [--all-projects]",
"openstack volume backup record export [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <backup>",
"openstack volume backup record import [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <backup_service> <backup_metadata>",
"openstack volume backup restore [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <backup> <volume>",
"openstack volume backup set [-h] [--name <name>] [--description <description>] [--state <state>] <backup>",
"openstack volume backup show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <backup>",
"openstack volume create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--size <size>] [--type <volume-type>] [--image <image> | --snapshot <snapshot> | --source <volume>] [--description <description>] [--availability-zone <availability-zone>] [--consistency-group consistency-group>] [--property <key=value>] [--hint <key=value>] [--bootable | --non-bootable] [--read-only | --read-write] <name>",
"openstack volume delete [-h] [--force | --purge] <volume> [<volume> ...]",
"openstack volume host set [-h] [--disable | --enable] <host-name>",
"openstack volume list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--project <project>] [--project-domain <project-domain>] [--user <user>] [--user-domain <user-domain>] [--name <name>] [--status <status>] [--all-projects] [--long] [--marker <volume>] [--limit <num-volumes>]",
"openstack volume migrate [-h] --host <host> [--force-host-copy] [--lock-volume] <volume>",
"openstack volume qos associate [-h] <qos-spec> <volume-type>",
"openstack volume qos create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--consumer <consumer>] [--property <key=value>] <name>",
"openstack volume qos delete [-h] [--force] <qos-spec> [<qos-spec> ...]",
"openstack volume qos disassociate [-h] [--volume-type <volume-type> | --all] <qos-spec>",
"openstack volume qos list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]",
"openstack volume qos set [-h] [--property <key=value>] <qos-spec>",
"openstack volume qos show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <qos-spec>",
"openstack volume qos unset [-h] [--property <key>] <qos-spec>",
"openstack volume service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--host <host>] [--service <service>] [--long]",
"openstack volume service set [-h] [--enable | --disable] [--disable-reason <reason>] <host> <service>",
"openstack volume set [-h] [--name <name>] [--size <size>] [--description <description>] [--no-property] [--property <key=value>] [--image-property <key=value>] [--state <state>] [--attached | --detached] [--type <volume-type>] [--retype-policy <retype-policy>] [--bootable | --non-bootable] [--read-only | --read-write] <volume>",
"openstack volume show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <volume>",
"openstack volume snapshot create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--volume <volume>] [--description <description>] [--force] [--property <key=value>] [--remote-source <key=value>] <snapshot-name>",
"openstack volume snapshot delete [-h] [--force] <snapshot> [<snapshot> ...]",
"openstack volume snapshot list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects] [--project <project>] [--project-domain <project-domain>] [--long] [--marker <volume-snapshot>] [--limit <num-snapshots>] [--name <name>] [--status <status>] [--volume <volume>]",
"openstack volume snapshot set [-h] [--name <name>] [--description <description>] [--no-property] [--property <key=value>] [--state <state>] <snapshot>",
"openstack volume snapshot show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <snapshot>",
"openstack volume snapshot unset [-h] [--property <key>] <snapshot>",
"openstack volume transfer request accept [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --auth-key <key> <transfer-request-id>",
"openstack volume transfer request create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] <volume>",
"openstack volume transfer request delete [-h] <transfer-request> [<transfer-request> ...]",
"openstack volume transfer request list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects]",
"openstack volume transfer request show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <transfer-request>",
"openstack volume type create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--public | --private] [--property <key=value>] [--project <project>] [--encryption-provider <provider>] [--encryption-cipher <cipher>] [--encryption-key-size <key-size>] [--encryption-control-location <control-location>] [--project-domain <project-domain>] <name>",
"openstack volume type delete [-h] <volume-type> [<volume-type> ...]",
"openstack volume type list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] [--default | --public | --private] [--encryption-type]",
"openstack volume type set [-h] [--name <name>] [--description <description>] [--property <key=value>] [--project <project>] [--project-domain <project-domain>] [--encryption-provider <provider>] [--encryption-cipher <cipher>] [--encryption-key-size <key-size>] [--encryption-control-location <control-location>] <volume-type>",
"openstack volume type show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--encryption-type] <volume-type>",
"openstack volume type unset [-h] [--property <key>] [--project <project>] [--project-domain <project-domain>] [--encryption-type] <volume-type>",
"openstack volume unset [-h] [--property <key>] [--image-property <key>] <volume>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/volume |
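The synopses above can be combined into a typical Block Storage backup workflow. The following commands are an illustrative sketch only: the volume name (demo-vol), backup name (demo-backup), and size are placeholder values that do not come from this reference, and the client is assumed to be already authenticated against the target cloud.

openstack volume create --size 10 demo-vol
openstack volume backup create --name demo-backup demo-vol
openstack volume backup list --volume demo-vol
openstack volume backup restore demo-backup demo-vol

Each invocation follows the corresponding usage string listed above for volume create, volume backup create, volume backup list, and volume backup restore.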
Managing access and permissions | Managing access and permissions Red Hat Quay 3 Managing access and permissions Red Hat OpenShift Documentation Team | [
"curl -X GET -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/permissions\"",
"{\"permissions\": [{\"repository\": {\"name\": \"api-repo\", \"is_public\": true}, \"role\": \"admin\"}]}",
"curl -X PUT -H \"Authorization: Bearer <your_access_token>\" -H \"Content-Type: application/json\" -d '{ \"role\": \"<role>\" }' \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>\"",
"{\"name\": \"testteam\", \"description\": \"\", \"can_view\": true, \"role\": \"creator\", \"avatar\": {\"name\": \"testteam\", \"hash\": \"827f8c5762148d7e85402495b126e0a18b9b168170416ed04b49aae551099dc8\", \"color\": \"#ff7f0e\", \"kind\": \"team\"}, \"new_team\": false}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"<admin_read_or_write>\", \"delegate\": { \"name\": \"<username>\", \"kind\": \"user\" }, \"activating_user\": { \"name\": \"<robot_name>\" } }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes",
"{\"activating_user\": {\"name\": \"test-org+test\", \"is_robot\": true, \"kind\": \"user\", \"is_org_member\": true, \"avatar\": {\"name\": \"test-org+test\", \"hash\": \"aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}, \"delegate\": {\"name\": \"testuser\", \"is_robot\": false, \"kind\": \"user\", \"is_org_member\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a\", \"color\": \"#6b6ecf\", \"kind\": \"user\"}}, \"role\": \"admin\", \"id\": \"977dc2bc-bc75-411d-82b3-604e5b79a493\"}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"write\" }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototypeid>",
"{\"activating_user\": {\"name\": \"test-org+test\", \"is_robot\": true, \"kind\": \"user\", \"is_org_member\": true, \"avatar\": {\"name\": \"test-org+test\", \"hash\": \"aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}, \"delegate\": {\"name\": \"testuser\", \"is_robot\": false, \"kind\": \"user\", \"is_org_member\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a\", \"color\": \"#6b6ecf\", \"kind\": \"user\"}}, \"role\": \"write\", \"id\": \"977dc2bc-bc75-411d-82b3-604e5b79a493\"}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototype_id>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes",
"{\"prototypes\": []}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{\"role\": \"admin\"}' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>",
"{\"role\": \"admin\", \"name\": \"quayadmin+test\", \"is_robot\": true, \"avatar\": {\"name\": \"quayadmin+test\", \"hash\": \"ca9afae0a9d3ca322fc8a7a866e8476dd6c98de543decd186ae090e420a88feb\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>/",
"{\"message\":\"User does not have permission for repo.\"}",
"curl -X POST -H \"Authorization: Bearer GCczXwaZ5i21p8hOO09uZqjZSsTYGKteu5PC5UuA\" -H \"Content-Type: application/json\" -d '{ \"visibility\": \"private\" }' \"https://quay-server.example.com/api/v1/repository/my_namespace/test_repo_three/changevisibility\"",
"{\"success\": true}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>\"",
"{\"name\": \"orgname+robot-name\", \"created\": \"Fri, 10 May 2024 15:11:00 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\", \"unstructured_metadata\": null}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/user/robots/<robot_name>\"",
"{\"name\": \"quayadmin+robot-name\", \"created\": \"Fri, 10 May 2024 15:24:57 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\", \"unstructured_metadata\": null}",
"ROBOTS_DISALLOW: true",
"podman login -u=\"<organization-name/username>+<robot-name>\" -p=\"KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678\" <quay-server.example.com>",
"Error: logging into \"<quay-server.example.com>\": invalid username/password",
"podman login -u=\"<organization-name/username>+<robot-name>\" -p=\"KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678\" --log-level=debug <quay-server.example.com>",
"DEBU[0000] error logging into \"quay-server.example.com\": unable to retrieve auth token: invalid username/password: unauthorized: Robot accounts have been disabled. Please contact your administrator.",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate\"",
"{\"name\": \"test-org+test\", \"created\": \"Fri, 10 May 2024 17:46:02 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate\"",
"{\"name\": \"quayadmin+test\", \"created\": \"Fri, 10 May 2024 14:12:11 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots\"",
"{\"robots\": []}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"",
"{\"message\":\"Could not find robot with specified username\"}"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3/html-single/managing_access_and_permissions/index |
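As a quick cross-reference, the repository permission endpoints listed above can be chained to grant a role and then verify it. This is a sketch only; <bearer_token>, <quay-server.example.com>, <namespace>, <repository>, and <username> are placeholders that must match the actual deployment, and write is just one of the documented admin, read, and write roles.

curl -X PUT -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" -d '{"role": "write"}' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>
curl -X GET -H "Authorization: Bearer <bearer_token>" -H "Accept: application/json" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>/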
Chapter 20. MariaDB (a replacement for MySQL) | Chapter 20. MariaDB (a replacement for MySQL) The MariaDB database is a multi-user, multi-threaded SQL database server that consists of the MariaDB server daemon ( mysqld ) and many client programs and libraries. [18] In Red Hat Enterprise Linux, the mariadb-server package provides MariaDB. Enter the following command to see if the mariadb-server package is installed: If it is not installed, use the yum utility as root to install it: 20.1. MariaDB and SELinux When MariaDB is enabled, it runs confined by default. Confined processes run in their own domains, and are separated from other confined processes. If a confined process is compromised by an attacker, the attacker's access to resources and the possible damage they can do are limited, depending on the SELinux policy configuration. The following example demonstrates the MariaDB processes running in their own domain. This example assumes the mariadb-server package is installed: Run the getenforce command to confirm SELinux is running in enforcing mode: The command returns Enforcing when SELinux is running in enforcing mode. Enter the following command as the root user to start mariadb : Confirm that the service is running. The output should include the information below (only the time stamp will differ): Enter the following command to view the mysqld processes: The SELinux context associated with the mysqld processes is system_u:system_r:mysqld_t:s0 . The second-to-last part of the context, mysqld_t , is the type. A type defines a domain for processes and a type for files. In this case, the mysqld processes are running in the mysqld_t domain. A short sketch of additional ways to inspect this confinement follows after the command listing below. [18] See the MariaDB project page for more information. | [
"~]USD rpm -q mariadb-server package mariadb-server is not installed",
"~]# yum install mariadb-server",
"~]USD getenforce Enforcing",
"~]# systemctl start mariadb.service",
"~]# systemctl status mariadb.service mariadb.service - MariaDB database server Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled) Active: active (running) since Mon 2013-08-05 11:20:11 CEST; 3h 28min ago",
"~]USD ps -eZ | grep mysqld system_u:system_r:mysqld_safe_t:s0 12831 ? 00:00:00 mysqld_safe system_u:system_r:mysqld_t:s0 13014 ? 00:00:00 mysqld"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-mariadb |
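Following on from the MariaDB and SELinux discussion in Chapter 20 above, the same type-based confinement can also be inspected on the file system and in the policy booleans. The commands below are a minimal sketch: getsebool , ls -Z , and restorecon are standard SELinux utilities, but the exact boolean names and file contexts they report depend on the selinux-policy version installed on your system, and the expectation that /var/lib/mysql carries the mysqld_db_t label is an assumption based on the default targeted policy.

~]$ getsebool -a | grep mysql
~]$ ls -dZ /var/lib/mysql

If the data directory has been moved or relabeled, running restorecon -Rv /var/lib/mysql as root restores the default contexts defined by the policy.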
Chapter 9. Installation and Booting | Chapter 9. Installation and Booting Fixed network setup in initrd if network configuration is provided in Kickstart Previously, the installer failed to set up or reconfigure network interfaces in initrd if these interfaces were defined in Kickstart files. This could cause the installation to fail and enter emergency mode if network access was required by other commands in the Kickstart file. This issue is now resolved, and Anaconda properly handles network configuration from Kickstart files in initrd early in the boot process. Anaconda now supports creating cached logical volumes The installer now supports creating cached LVM logical volumes and installing the system onto those volumes. Currently, this approach is only supported in Kickstart. To create a cached logical volume, use the new --cachepvs= , --cachesize= , and --cachemode= options of the logvol Kickstart command. See the Red Hat Enterprise Linux 7 Installation Guide for detailed information about these new options; a brief Kickstart sketch follows at the end of these notes. Improved sorting of GRUB2 boot menu An issue with the sorting mechanism used by the grub2-mkconfig command could cause the grub.cfg configuration file to be generated with available kernels sorted incorrectly. GRUB2 now uses the rpmdevtools package to sort available kernels, and the configuration file is now generated correctly with the most recent kernel version listed at the top. Anaconda now properly reverts disk actions when disk selection changes Previously, Anaconda and Blivet did not properly revert actions scheduled on disks when disk selection changed, causing various issues. With this update, Anaconda has been fixed to create a snapshot of the original storage configuration and return to it when disk selection changes, thus completely reverting all actions scheduled for disks. Improved detection of device-mapper disk names In previous releases of Red Hat Enterprise Linux 7, it was possible for the installer to crash when installing on disks which previously contained LVM logical volumes and the metadata for those volumes was still present. The installer could not recognize correct device-mapper names and the process of creating new LVM logical volumes would fail. The method used to obtain device-mapper device names has been updated, and installation on disks which contain existing LVM metadata is now more reliable. Fixed handling of PReP Boot during partitioning In some circumstances, the PReP Boot partition on IBM Power Systems could be set to an invalid size during custom partitioning. In that situation, removing any partition caused the installer to crash. Checks are now implemented in Anaconda to ensure that the partition is always sized correctly between 4096 KiB and 10 MiB . Additionally, it is no longer necessary to change the format of the PReP Boot partition in order to change its size. EFI partitions on RAID1 devices EFI System Partitions may now be created on a RAID1 device to enable system recovery when one boot disk fails. However, because the system is only guaranteed to discover one EFI System Partition, if the volume of the ESP that is discovered by the firmware becomes corrupt (but still appears to be a valid ESP), and both Boot#### and BootOrder also become corrupt, then the boot order will not be rebuilt automatically. In this case, the system should still boot manually from the second disk.
Text mode installation no longer crashes during network configuration Previously, in the Network Configuration screen in the interactive text mode installer, using a space when specifying nameservers caused the installer to crash. Anaconda now handles spaces in nameserver definitions in text mode correctly, and the installer no longer crashes if a space is used to separate nameserver addresses. Rescue mode screens on IBM System z are no longer cut off Previously, the second and third screens in rescue mode on IBM System z servers were displayed improperly and parts of the interface were cut off. Rescue mode on this architecture has been improved and all screens now function correctly. OpenSCAP add-on in Anaconda It is now possible to apply Security Content Automation Protocol (SCAP) content during the installation process. This new installer add-on provides a reliable and easy way to configure a security policy without having to rely on custom scripts. This add-on provides a new Kickstart section ("%addon org_fedora_oscap") as well as a new screen in the graphical user interface during an interactive installation. These are documented in the Red Hat Enterprise Linux 7 Installation Guide. Applying a security policy during installation will perform various changes during and immediately after the installation, depending on which policy you enable. If a profile is selected, the openscap-scanner package (an OpenSCAP compliance scanning tool) is added to your package selection and an initial compliance scan is performed after the installation finishes. Results of this scan are saved into /root/openscap_data . Several profiles are provided on installation media by the scap-security-guide package. You can also load other content as a datastream, archive, or an RPM package from an HTTP, HTTPS or FTP server if needed. Note that applying a security policy is not necessary on all systems. This add-on should only be used when a specific policy is mandated by your organization's rules or government regulations; otherwise, the add-on can be left in its default state, which does not apply any security policy. Anaconda no longer times out when waiting for a Kickstart file on a CD or DVD Previously, if Anaconda was configured to load a Kickstart file from optical media using the inst.ks=cdrom:/ks.cfg boot option, and the system was also booted from a CD or DVD, the installer only waited 30 seconds for the user to swap the disk. After this time window passed, the system entered emergency mode. With this update, Anaconda has been modified to never time out when waiting for the user to provide a Kickstart file on a CD or DVD. If the inst.ks=cdrom boot option is used and the Kickstart file is not detected, Anaconda displays a prompt and waits until the user provides the file or reboots. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/installation_and_booting |
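Following on from the cached logical volume note in Chapter 9 above, the sketch below shows how the new logvol options might be combined in a Kickstart file. It is illustrative only: the disk names ( sda , sdb ), the partition and cache sizes, the volume group name, and the choice of writethrough mode are placeholder assumptions rather than values taken from the release notes.

part pv.01 --size=61440 --ondisk=sda
part pv.02 --size=8192 --ondisk=sdb
volgroup vg_sys pv.01 pv.02
logvol / --vgname=vg_sys --size=51200 --name=root --fstype=xfs --cachepvs=pv.02 --cachesize=4096 --cachemode=writethrough

Here pv.02 is assumed to sit on a faster device (for example, an SSD) and caches the root logical volume created on the slower disk; --cachesize= is given in MiB and --cachemode= accepts writeback or writethrough.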
Chapter 18. Workload partitioning on single-node OpenShift | Chapter 18. Workload partitioning on single-node OpenShift In resource-constrained environments, such as single-node OpenShift deployments, it is advantageous to reserve most of the CPU resources for your own workloads and configure OpenShift Container Platform to run on a fixed number of CPUs within the host. In these environments, management workloads, including the control plane, need to be configured to use fewer resources than they might by default in normal clusters. You can isolate the OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. When you use workload partitioning, the CPU resources used by OpenShift Container Platform for cluster management are isolated to a partitioned set of CPU resources on a single-node cluster. This partitioning isolates cluster management functions to the defined number of CPUs. All cluster management functions operate solely on that cpuset configuration. The minimum number of reserved CPUs required for the management partition for a single-node cluster is four CPU Hyper threads (HTs). The set of pods that make up the baseline OpenShift Container Platform installation and a set of typical add-on Operators are annotated for inclusion in the management workload partition. These pods operate normally within the minimum size cpuset configuration. Inclusion of Operators or workloads outside of the set of accepted management pods requires additional CPU HTs to be added to that partition. Workload partitioning isolates the user workloads away from the platform workloads using the normal scheduling capabilities of Kubernetes to manage the number of pods that can be placed onto those cores, and avoids mixing cluster management workloads and user workloads. When using workload partitioning, you must install the Performance Addon Operator and apply the performance profile: Workload partitioning pins the OpenShift Container Platform infrastructure pods to a defined cpuset configuration. The Performance Addon Operator performance profile pins the systemd services to a defined cpuset configuration. This cpuset configuration must match. Workload partitioning introduces a new extended resource of <workload-type>.workload.openshift.io/cores for each defined CPU pool, or workload-type. Kubelet advertises these new resources and CPU requests by pods allocated to the pool are accounted for within the corresponding resource rather than the typical cpu resource. When workload partitioning is enabled, the <workload-type>.workload.openshift.io/cores resource allows access to the CPU capacity of the host, not just the default CPU pool. 18.1. Enabling workload partitioning A key feature to enable as part of a single-node OpenShift installation is workload partitioning. This limits the cores allowed to run platform services, maximizing the CPU core for application payloads. You must configure workload partitioning at cluster installation time. Note You can enable workload partitioning during cluster installation only. You cannot disable workload partitioning post-installation. However, you can reconfigure workload partitioning by updating the cpu value that you define in the performance profile, and in the related cpuset value in the MachineConfig custom resource (CR). Procedure The base64-encoded content below contains the CPU set that the management workloads are constrained to. 
This content must be adjusted to match the set specified in the performanceprofile and must be accurate for the number of cores on the cluster. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 02-master-workload-partitioning spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKW2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudC5yZXNvdXJjZXNdCmNwdXNoYXJlcyA9IDAKQ1BVcyA9ICIwLTEsIDUyLTUzIgo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTEsNTItNTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root The contents of /etc/crio/crio.conf.d/01-workload-partitioning should look like this: [crio.runtime.workloads.management] activation_annotation = "target.workload.openshift.io/management" annotation_prefix = "resources.workload.openshift.io" [crio.runtime.workloads.management.resources] cpushares = 0 cpuset = "0-1, 52-53" 1 1 The cpuset value varies based on the installation. If Hyper-Threading is enabled, specify both threads for each core. The cpuset value must match the reserved CPUs that you define in the spec.cpu.reserved field in the performance profile. If Hyper-Threading is enabled, specify both threads of each core. The CPUs value must match the reserved CPU set specified in the performance profile. This content should be base64 encoded and provided in the 01-workload-partitioning-content in the manifest above. The contents of /etc/kubernetes/openshift-workload-pinning should look like this: { "management": { "cpuset": "0-1,52-53" 1 } } 1 The cpuset must match the cpuset value in /etc/crio/crio.conf.d/01-workload-partitioning . | [
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 02-master-workload-partitioning spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKW2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudC5yZXNvdXJjZXNdCmNwdXNoYXJlcyA9IDAKQ1BVcyA9ICIwLTEsIDUyLTUzIgo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTEsNTItNTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root",
"[crio.runtime.workloads.management] activation_annotation = \"target.workload.openshift.io/management\" annotation_prefix = \"resources.workload.openshift.io\" [crio.runtime.workloads.management.resources] cpushares = 0 cpuset = \"0-1, 52-53\" 1",
"{ \"management\": { \"cpuset\": \"0-1,52-53\" 1 } }"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/scalability_and_performance/sno-du-enabling-workload-partitioning-on-single-node-openshift |
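As a follow-up to the workload partitioning chapter above, the base64 strings embedded in the MachineConfig must be regenerated whenever the CPU set changes. A minimal sketch of one way to do this, assuming the two plain-text snippets have been saved locally first (the file names below are placeholders):

$ base64 -w0 01-workload-partitioning
$ base64 -w0 openshift-workload-pinning

Paste each command's output into the corresponding source: data:text/plain;charset=utf-8;base64,<output> field of the MachineConfig before applying it, keeping the CPU ranges identical to the reserved set defined in the performance profile.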
Chapter 3. Important notes | Chapter 3. Important notes 3.1. Long term support Parallel to AMQ Clients 2.10, AMQ Clients 2.9 is available as a long term support (LTS) release version. Bug fixes and security advisories are provided for AMQ Clients 2.9 in a series of micro releases (2.9.1, 2.9.2, 2.9.3, and so on) for a period of at least 12 months. Note the following important points about the LTS release stream: The LTS release stream provides only bug fixes. No new enhancements will be added to this stream. To remain in a supported configuration, you must upgrade to the latest micro release in the LTS release stream. The LTS version will be supported for at least 12 months from the time of the AMQ Clients 2.9.0 GA. 3.2. AMQ C++ Unsettled interfaces The AMQ C++ messaging API includes classes and methods that are not yet proven and can change in future releases. Be aware that use of these interfaces might require changes to your application code in the future. These interfaces are marked Unsettled API in the API reference. They include the interfaces in the proton::codec and proton::io namespaces and the following interfaces in the proton namespace. listen_handler The on_sender_drain_start and on_sender_drain_finish methods on messaging_handler The draining and return_credit methods on sender The draining and drain methods on receiver API elements present in header files but not yet documented are considered unsettled and are subject to change. Deprecated interfaces Interfaces marked Deprecated in the API reference are scheduled for removal in a future release. This release deprecates the following interfaces in the proton namespace. void_function0 - Use the work class or C++11 lambdas instead. default_container - Use the container class instead. url and url_error - Use a third-party URL library instead. 3.3. Preferred clients In general, AMQ clients that support the AMQP 1.0 standard are preferred for new application development. However, the following exceptions apply: If your implementation requires distributed transactions, use the AMQ Core Protocol JMS client. If you require MQTT or STOMP in your domain (for IoT applications, for instance), use community-supported MQTT or STOMP clients. 3.4. Legacy clients Deprecation of the AMQ OpenWire JMS client The AMQ OpenWire JMS client is now deprecated in AMQ 7. It is recommended that users of this client migrate to AMQ JMS or AMQ Core Protocol JMS. Deprecation of the CMS and NMS APIs The ActiveMQ CMS and NMS messaging APIs are deprecated in AMQ 7. It is recommended that users of the CMS API migrate to AMQ C++, and users of the NMS API migrate to AMQ .NET. The CMS and NMS APIs might have reduced functionality in AMQ 7. Deprecation of the legacy AMQ C++ client The legacy AMQ C++ client (the C++ client previously provided in MRG Messaging) is deprecated in AMQ 7. It is recommended that users of this API migrate to AMQ C++. The Core API is unsupported The Artemis Core API client is not supported. This client is distinct from the AMQ Core Protocol JMS client, which is supported. 3.5. Upstream versions AMQ C++, AMQ Python, and AMQ Ruby are now based on Qpid Proton 0.35.0 . AMQ JavaScript is now based on Rhea 1.0.24 . AMQ .NET is now based on AMQP.Net Lite 2.4.0 . AMQ JMS is now based on Qpid JMS 1.0.0 . AMQ Core Protocol JMS is now based on ActiveMQ Artemis 2.16.0 . AMQ JMS Pool is now based on Pooled JMS 2.0.0 . AMQ Resource Adapter is now based on AMQP 1.0 Resource Adapter 2.0.0 . 
AMQ Spring Boot Starter is now based on AMQP 1.0 JMS Spring Boot 2.5.0 . AMQ Netty OpenSSL is now based on netty-tcnative 2.0.39.Final . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/amq_clients_2.10_release_notes/important_notes |
Chapter 4. Configuring Red Hat High Availability Add-On With Conga | Chapter 4. Configuring Red Hat High Availability Add-On With Conga This chapter describes how to configure Red Hat High Availability Add-On software using Conga . For information on using Conga to manage a running cluster, see Chapter 5, Managing Red Hat High Availability Add-On With Conga . Note Conga is a graphical user interface that you can use to administer the Red Hat High Availability Add-On. Note, however, that in order to use this interface effectively you need to have a good and clear understanding of the underlying concepts. Learning about cluster configuration by exploring the available features in the user interface is not recommended, as it may result in a system that is not robust enough to keep all services running when components fail. This chapter consists of the following sections: Section 4.1, "Configuration Tasks" Section 4.2, "Starting luci " Section 4.3, "Controlling Access to luci" Section 4.4, "Creating a Cluster" Section 4.5, "Global Cluster Properties" Section 4.6, "Configuring Fence Devices" Section 4.7, "Configuring Fencing for Cluster Members" Section 4.8, "Configuring a Failover Domain" Section 4.9, "Configuring Global Cluster Resources" Section 4.10, "Adding a Cluster Service to the Cluster" 4.1. Configuration Tasks Configuring Red Hat High Availability Add-On software with Conga consists of the following steps: Configuring and running the Conga configuration user interface - the luci server. Refer to Section 4.2, "Starting luci " . Creating a cluster. Refer to Section 4.4, "Creating a Cluster" . Configuring global cluster properties. Refer to Section 4.5, "Global Cluster Properties" . Configuring fence devices. Refer to Section 4.6, "Configuring Fence Devices" . Configuring fencing for cluster members. Refer to Section 4.7, "Configuring Fencing for Cluster Members" . Creating failover domains. Refer to Section 4.8, "Configuring a Failover Domain" . Creating resources. Refer to Section 4.9, "Configuring Global Cluster Resources" . Creating cluster services. Refer to Section 4.10, "Adding a Cluster Service to the Cluster" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-config-conga-CA |
Chapter 57. JmxTransSpec schema reference | Chapter 57. JmxTransSpec schema reference The type JmxTransSpec has been deprecated. Used in: KafkaSpec Property Description image The image to use for the JmxTrans. string outputDefinitions Defines the output hosts that will be referenced later on. For more information on these properties, see the JmxTransOutputDefinitionTemplate schema reference . JmxTransOutputDefinitionTemplate array logLevel Sets the logging level of the JmxTrans deployment. For more information, see JmxTrans Logging Level . string kafkaQueries Queries to send to the Kafka brokers to define what data should be read from each broker. For more information on these properties, see the JmxTransQueryTemplate schema reference . JmxTransQueryTemplate array resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements template Template for JmxTrans resources. JmxTransTemplate | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-JmxTransSpec-reference |
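To make the (deprecated) JmxTransSpec properties listed above more concrete, the sketch below shows roughly how they fit together under the Kafka custom resource. The output writer class, MBean query pattern, and names are illustrative assumptions, not a definitive configuration; the JmxTransOutputDefinitionTemplate and JmxTransQueryTemplate schema references remain the authoritative description of the fields. Note that JmxTrans also expects JMX to be enabled on the Kafka brokers (through the Kafka resource's jmxOptions), which is not shown here.

spec:
  # ... kafka, zookeeper, and other sections ...
  jmxTrans:
    logLevel: info
    outputDefinitions:
      - outputType: "com.googlecode.jmxtrans.model.output.StdOutWriter"
        name: "standardOut"
    kafkaQueries:
      - targetMBean: "kafka.server:type=BrokerTopicMetrics,name=*"
        attributes: ["Count"]
        outputs: ["standardOut"]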
B.3. How to Set Up Red Hat Virtualization Manager to Use FCoE | B.3. How to Set Up Red Hat Virtualization Manager to Use FCoE You can configure Fibre Channel over Ethernet (FCoE) properties for host network interface cards from the Administration Portal. The fcoe key is not available by default and needs to be added to the Manager using the engine configuration tool. You can check whether fcoe has already been enabled by running the following command: You also need to install the required VDSM hook package on the hosts. Depending on the FCoE card on the hosts, special configuration may also be needed; see Configuring a Fibre Channel over Ethernet Interface in the Red Hat Enterprise Linux Storage Administration Guide . Adding the fcoe Key to the Manager On the Manager, run the following command to add the key: Restart the ovirt-engine service: Install the VDSM hook package on each of the Red Hat Enterprise Linux hosts on which you want to configure FCoE properties. The package is available by default on Red Hat Virtualization Host (RHVH). The fcoe key is now available in the Administration Portal. See Section 9.4.2, "Editing Host Network Interfaces and Assigning Logical Networks to Hosts" to apply FCoE properties to logical networks. | [
"engine-config -g UserDefinedNetworkCustomProperties",
"engine-config -s UserDefinedNetworkCustomProperties='fcoe=^((enable|dcb|auto_vlan)=(yes|no),?)*USD'",
"systemctl restart ovirt-engine.service",
"yum install vdsm-hook-fcoe"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/How_to_Set_Up_RHVM_to_Use_FCoE |
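As a complement to the FCoE setup above, once the fcoe key has been applied to a host network interface the low-level state can be checked from the host itself. A brief sketch, assuming the fcoe-utils package is available on the host (interface names and output depend on the hardware):

fcoeadm -i

This lists the FCoE interfaces known to the host; if nothing is reported, re-check that the VDSM hook package is installed and that the enable , dcb , and auto_vlan values set through the fcoe custom property match what the card requires.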
Chapter 40. HTTP | Chapter 40. HTTP Only producer is supported The HTTP component provides HTTP based endpoints for calling external HTTP resources (as a client to call external servers using HTTP). 40.1. Dependencies When using http with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-http-starter</artifactId> </dependency> 40.2. URI format Will by default use port 80 for HTTP and 443 for HTTPS. 40.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 40.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 40.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 40.4. Component Options The HTTP component supports 37 options, which are listed below. Name Description Default Type cookieStore (producer) To use a custom org.apache.http.client.CookieStore. By default the org.apache.http.impl.client.BasicCookieStore is used which is an in-memory only cookie store. Notice if bridgeEndpoint=true then the cookie store is forced to be a noop cookie store as cookie shouldn't be stored as we are just bridging (eg acting as a proxy). CookieStore copyHeaders (producer) If this option is true then IN exchange headers will be copied to OUT exchange headers according to copy strategy. Setting this to false, allows to only include the headers from the HTTP response (not propagating IN headers). true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean responsePayloadStreamingThreshold (producer) This threshold in bytes controls whether the response payload should be stored in memory as a byte array or be streaming based. Set this to -1 to always use streaming mode. 
8192 int skipRequestHeaders (producer (advanced)) Whether to skip mapping all the Camel headers as HTTP request headers. If there are no data from Camel headers needed to be included in the HTTP request then this can avoid parsing overhead with many object allocations for the JVM garbage collector. false boolean skipResponseHeaders (producer (advanced)) Whether to skip mapping all the HTTP response headers to Camel headers. If there are no data needed from HTTP headers then this can avoid parsing overhead with many object allocations for the JVM garbage collector. false boolean allowJavaSerializedObject (advanced) Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean authCachingDisabled (advanced) Disables authentication scheme caching. false boolean automaticRetriesDisabled (advanced) Disables automatic request recovery and re-execution. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean clientConnectionManager (advanced) To use a custom and shared HttpClientConnectionManager to manage connections. If this has been configured then this is always used for all endpoints created by this component. HttpClientConnectionManager connectionsPerRoute (advanced) The maximum number of connections per route. 20 int connectionStateDisabled (advanced) Disables connection state tracking. false boolean connectionTimeToLive (advanced) The time for connection to live, the time unit is millisecond, the default value is always keep alive. long contentCompressionDisabled (advanced) Disables automatic content decompression. false boolean cookieManagementDisabled (advanced) Disables state (cookie) management. false boolean defaultUserAgentDisabled (advanced) Disables the default user agent set by this builder if none has been provided by the user. false boolean httpBinding (advanced) To use a custom HttpBinding to control the mapping between Camel message and HttpClient. HttpBinding httpClientConfigurer (advanced) To use the custom HttpClientConfigurer to perform configuration of the HttpClient that will be used. HttpClientConfigurer httpConfiguration (advanced) To use the shared HttpConfiguration as base configuration. HttpConfiguration httpContext (advanced) To use a custom org.apache.http.protocol.HttpContext when executing requests. HttpContext maxTotalConnections (advanced) The maximum number of connections. 200 int redirectHandlingDisabled (advanced) Disables automatic redirect handling. false boolean headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy proxyAuthDomain (proxy) Proxy authentication domain to use. String proxyAuthHost (proxy) Proxy authentication host. String proxyAuthMethod (proxy) Proxy authentication method to use. Enum values: Basic Digest NTLM String proxyAuthNtHost (proxy) Proxy authentication domain (workstation name) to use with NTML. 
String proxyAuthPassword (proxy) Proxy authentication password. String proxyAuthPort (proxy) Proxy authentication port. Integer proxyAuthUsername (proxy) Proxy authentication username. String sslContextParameters (security) To configure security using SSLContextParameters. Important: Only one instance of org.apache.camel.support.jsse.SSLContextParameters is supported per HttpComponent. If you need to use 2 or more different instances, you need to define a new HttpComponent per instance you need. SSLContextParameters useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean x509HostnameVerifier (security) To use a custom X509HostnameVerifier such as DefaultHostnameVerifier or NoopHostnameVerifier. HostnameVerifier connectionRequestTimeout (timeout) The timeout in milliseconds used when requesting a connection from the connection manager. A timeout value of zero is interpreted as an infinite timeout. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). -1 int connectTimeout (timeout) Determines the timeout in milliseconds until a connection is established. A timeout value of zero is interpreted as an infinite timeout. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). -1 int socketTimeout (timeout) Defines the socket timeout in milliseconds, which is the timeout for waiting for data or, put differently, a maximum period inactivity between two consecutive data packets). A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). -1 int 40.5. Endpoint Options The HTTP endpoint is configured using URI syntax: with the following path and query parameters: 40.5.1. Path Parameters (1 parameters) Name Description Default Type httpUri (common) Required The url of the HTTP endpoint to call. URI 40.5.2. Query Parameters (51 parameters) Name Description Default Type chunked (producer) If this option is false the Servlet will disable the HTTP streaming and set the content-length header on the response. true boolean disableStreamCache (common) Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into a in memory/overflow to file, Stream caching) cache. By default Camel will cache the Servlet input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http producer will by default cache the response body stream. If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is as the message body. false boolean headerFilterStrategy (common) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy httpBinding (common (advanced)) To use a custom HttpBinding to control the mapping between Camel message and HttpClient. 
HttpBinding bridgeEndpoint (producer) If the option is true, HttpProducer will ignore the Exchange.HTTP_URI header, and use the endpoint's URI for request. You may also set the option throwExceptionOnFailure to be false to let the HttpProducer send all the fault response back. false boolean clearExpiredCookies (producer) Whether to clear expired cookies before sending the HTTP request. This ensures the cookies store does not keep growing by adding new cookies which is newer removed when they are expired. If the component has disabled cookie management then this option is disabled too. true boolean connectionClose (producer) Specifies whether a Connection Close header must be added to HTTP Request. By default connectionClose is false. false boolean copyHeaders (producer) If this option is true then IN exchange headers will be copied to OUT exchange headers according to copy strategy. Setting this to false, allows to only include the headers from the HTTP response (not propagating IN headers). true boolean customHostHeader (producer) To use custom host header for producer. When not set in query will be ignored. When set will override host header derived from url. String httpMethod (producer) Configure the HTTP method to use. The HttpMethod header cannot override this option if set. Enum values: GET POST PUT DELETE HEAD OPTIONS TRACE PATCH HttpMethods ignoreResponseBody (producer) If this option is true, The http producer won't read response body and cache the input stream. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean preserveHostHeader (producer) If the option is true, HttpProducer will set the Host header to the value contained in the current exchange Host header, useful in reverse proxy applications where you want the Host header received by the downstream server to reflect the URL called by the upstream client, this allows applications which use the Host header to generate accurate URL's for a proxied service. false boolean throwExceptionOnFailure (producer) Option to disable throwing the HttpOperationFailedException in case of failed responses from the remote server. This allows you to get all responses regardless of the HTTP status code. true boolean transferException (producer) If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was send back serialized in the response as a application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean cookieHandler (producer (advanced)) Configure a cookie handler to maintain a HTTP session. CookieHandler cookieStore (producer (advanced)) To use a custom CookieStore. 
By default the BasicCookieStore is used which is an in-memory only cookie store. Notice if bridgeEndpoint=true then the cookie store is forced to be a noop cookie store as cookie shouldn't be stored as we are just bridging (eg acting as a proxy). If a cookieHandler is set then the cookie store is also forced to be a noop cookie store as cookie handling is then performed by the cookieHandler. CookieStore deleteWithBody (producer (advanced)) Whether the HTTP DELETE should include the message body or not. By default HTTP DELETE do not include any HTTP body. However in some rare cases users may need to be able to include the message body. false boolean getWithBody (producer (advanced)) Whether the HTTP GET should include the message body or not. By default HTTP GET do not include any HTTP body. However in some rare cases users may need to be able to include the message body. false boolean okStatusCodeRange (producer (advanced)) The status codes which are considered a success response. The values are inclusive. Multiple ranges can be defined, separated by comma, e.g. 200-204,209,301-304. Each range must be a single number or from-to with the dash included. 200-299 String skipRequestHeaders (producer (advanced)) Whether to skip mapping all the Camel headers as HTTP request headers. If there are no data from Camel headers needed to be included in the HTTP request then this can avoid parsing overhead with many object allocations for the JVM garbage collector. false boolean skipResponseHeaders (producer (advanced)) Whether to skip mapping all the HTTP response headers to Camel headers. If there are no data needed from HTTP headers then this can avoid parsing overhead with many object allocations for the JVM garbage collector. false boolean userAgent (producer (advanced)) To set a custom HTTP User-Agent request header. String clientBuilder (advanced) Provide access to the http client request parameters used on new RequestConfig instances used by producers or consumers of this endpoint. HttpClientBuilder clientConnectionManager (advanced) To use a custom HttpClientConnectionManager to manage connections. HttpClientConnectionManager connectionsPerRoute (advanced) The maximum number of connections per route. 20 int httpClient (advanced) Sets a custom HttpClient to be used by the producer. HttpClient httpClientConfigurer (advanced) Register a custom configuration strategy for new HttpClient instances created by producers or consumers such as to configure authentication mechanisms etc. HttpClientConfigurer httpClientOptions (advanced) To configure the HttpClient using the key/values from the Map. Map httpContext (advanced) To use a custom HttpContext instance. HttpContext maxTotalConnections (advanced) The maximum number of connections. 200 int useSystemProperties (advanced) To use System Properties as fallback for configuration. false boolean proxyAuthDomain (proxy) Proxy authentication domain to use with NTML. String proxyAuthHost (proxy) Proxy authentication host. String proxyAuthMethod (proxy) Proxy authentication method to use. Enum values: Basic Digest NTLM String proxyAuthNtHost (proxy) Proxy authentication domain (workstation name) to use with NTML. String proxyAuthPassword (proxy) Proxy authentication password. String proxyAuthPort (proxy) Proxy authentication port. int proxyAuthScheme (proxy) Proxy authentication scheme to use. Enum values: http https String proxyAuthUsername (proxy) Proxy authentication username. String proxyHost (proxy) Proxy hostname to use. 
String proxyPort (proxy) Proxy port to use. int authDomain (security) Authentication domain to use with NTML. String authenticationPreemptive (security) If this option is true, camel-http sends preemptive basic authentication to the server. false boolean authHost (security) Authentication host to use with NTML. String authMethod (security) Authentication methods allowed to use as a comma separated list of values Basic, Digest or NTLM. String authMethodPriority (security) Which authentication method to prioritize to use, either as Basic, Digest or NTLM. Enum values: Basic Digest NTLM String authPassword (security) Authentication password. String authUsername (security) Authentication username. String sslContextParameters (security) To configure security using SSLContextParameters. Important: Only one instance of org.apache.camel.util.jsse.SSLContextParameters is supported per HttpComponent. If you need to use 2 or more different instances, you need to define a new HttpComponent per instance you need. SSLContextParameters x509HostnameVerifier (security) To use a custom X509HostnameVerifier such as DefaultHostnameVerifier or NoopHostnameVerifier. HostnameVerifier 40.6. Message Headers Name Type Description Exchange.HTTP_URI String URI to call. Will override existing URI set directly on the endpoint. This uri is the uri of the http server to call. Its not the same as the Camel endpoint uri, where you can configure endpoint options, such as security and so on. This header does not support that, it is only the URI of the http server. Exchange.HTTP_PATH String Request URI's path, the header will be used to build the request URI with the HTTP_URI. Exchange.HTTP_QUERY String URI parameters. Will override existing URI parameters set directly on the endpoint. Exchange.HTTP_RESPONSE_CODE int The HTTP response code from the external server. Is 200 for OK. Exchange.HTTP_RESPONSE_TEXT String The HTTP response text from the external server. Exchange.HTTP_CHARACTER_ENCODING String Character encoding. Exchange.CONTENT_TYPE String The HTTP content type. Is set on both the IN and OUT message to provide a content type, such as text/html . Exchange.CONTENT_ENCODING String The HTTP content encoding. Is set on both the IN and OUT message to provide a content encoding, such as gzip . 40.7. Message Body Camel will store the HTTP response from the external server on the OUT body. All headers from the IN message will be copied to the OUT message, so headers are preserved during routing. Additionally Camel will add the HTTP response headers as well to the OUT message headers. 40.8. Using System Properties When setting useSystemProperties to true, the HTTP Client will look for the following System Properties and it will use it: ssl.TrustManagerFactory.algorithm javax.net .ssl.trustStoreType javax.net .ssl.trustStore javax.net .ssl.trustStoreProvider javax.net .ssl.trustStorePassword java.home ssl.KeyManagerFactory.algorithm javax.net .ssl.keyStoreType javax.net .ssl.keyStore javax.net .ssl.keyStoreProvider javax.net .ssl.keyStorePassword http.proxyHost http.proxyPort http.nonProxyHosts http.keepAlive http.maxConnections 40.9. Response code Camel will handle according to the HTTP response code: Response code is in the range 100..299, Camel regards it as a success response. Response code is in the range 300..399, Camel regards it as a redirection response and will throw a HttpOperationFailedException with the information. 
Response code is 400+, Camel regards it as an external server failure and will throw a HttpOperationFailedException with the information. throwExceptionOnFailure The option, throwExceptionOnFailure , can be set to false to prevent the HttpOperationFailedException from being thrown for failed response codes. This allows you to get any response from the remote server. There is a sample below demonstrating this. 40.10. Exceptions HttpOperationFailedException exception contains the following information: The HTTP status code The HTTP status line (text of the status code) Redirect location, if server returned a redirect Response body as a java.lang.String , if server provided a body as response 40.11. Which HTTP method will be used The following algorithm is used to determine what HTTP method should be used: 1. Use method provided as endpoint configuration ( httpMethod ). 2. Use method provided in header ( Exchange.HTTP_METHOD ). 3. GET if query string is provided in header. 4. GET if endpoint is configured with a query string. 5. POST if there is data to send (body is not null ). 6. GET otherwise. 40.12. How to get access to HttpServletRequest and HttpServletResponse You can get access to these two using the Camel type converter system using HttpServletRequest request = exchange.getIn().getBody(HttpServletRequest.class); HttpServletResponse response = exchange.getIn().getBody(HttpServletResponse.class); Note You can get the request and response not just from the processor after the camel-jetty or camel-cxf endpoint. 40.13. Configuring URI to call You can set the HTTP producer's URI directly from the endpoint URI. In the route below, Camel will call out to the external server, oldhost , using HTTP. from("direct:start") .to("http://oldhost"); And the equivalent Spring sample: <camelContext xmlns="http://activemq.apache.org/camel/schema/spring"> <route> <from uri="direct:start"/> <to uri="http://oldhost"/> </route> </camelContext> You can override the HTTP endpoint URI by adding a header with the key, Exchange.HTTP_URI , on the message. from("direct:start") .setHeader(Exchange.HTTP_URI, constant("http://newhost")) .to("http://oldhost"); In the sample above, Camel calls http://newhost/ even though the endpoint is configured with http://oldhost/ . If the http endpoint is working in bridge mode, it will ignore the message header of Exchange.HTTP_URI . 40.14. Configuring URI Parameters The http producer supports URI parameters to be sent to the HTTP server. The URI parameters can either be set directly on the endpoint URI or as a header with the key Exchange.HTTP_QUERY on the message. from("direct:start") .to("http://oldhost?order=123&detail=short"); Or options provided in a header: from("direct:start") .setHeader(Exchange.HTTP_QUERY, constant("order=123&detail=short")) .to("http://oldhost"); 40.15. How to set the http method (GET/PATCH/POST/PUT/DELETE/HEAD/OPTIONS/TRACE) to the HTTP producer The HTTP component provides a way to set the HTTP request method by setting the message header.
Here is an example: from("direct:start") .setHeader(Exchange.HTTP_METHOD, constant(org.apache.camel.component.http.HttpMethods.POST)) .to("http://www.google.com") .to("mock:results"); The method can be written a bit shorter using the string constants: .setHeader("CamelHttpMethod", constant("POST")) And the equivalent Spring sample: <camelContext xmlns="http://activemq.apache.org/camel/schema/spring"> <route> <from uri="direct:start"/> <setHeader name="CamelHttpMethod"> <constant>POST</constant> </setHeader> <to uri="http://www.google.com"/> <to uri="mock:results"/> </route> </camelContext> 40.16. Using client timeout - SO_TIMEOUT See the HttpSOTimeoutTest unit test. 40.17. Configuring a Proxy The HTTP component provides a way to configure a proxy. from("direct:start") .to("http://oldhost?proxyAuthHost=www.myproxy.com&proxyAuthPort=80"); There is also support for proxy authentication via the proxyAuthUsername and proxyAuthPassword options. 40.17.1. Using proxy settings outside of URI To avoid System properties conflicts, you can set proxy configuration only from the CamelContext or URI. Java DSL : context.getGlobalOptions().put("http.proxyHost", "172.168.18.9"); context.getGlobalOptions().put("http.proxyPort", "8080"); Spring XML <camelContext> <properties> <property key="http.proxyHost" value="172.168.18.9"/> <property key="http.proxyPort" value="8080"/> </properties> </camelContext> Camel will first set the settings from Java System or CamelContext Properties and then the endpoint proxy options if provided. So you can override the system properties with the endpoint options. There is also a http.proxyScheme property you can set to explicit configure the scheme to use. 40.18. Configuring charset If you are using POST to send data you can configure the charset using the Exchange property: exchange.setProperty(Exchange.CHARSET_NAME, "ISO-8859-1"); 40.18.1. Sample with scheduled poll This sample polls the Google homepage every 10 seconds and write the page to the file message.html : from("timer://foo?fixedRate=true&delay=0&period=10000") .to("http://www.google.com") .setHeader(FileComponent.HEADER_FILE_NAME, "message.html") .to("file:target/google"); 40.18.2. URI Parameters from the endpoint URI In this sample we have the complete URI endpoint that is just what you would have typed in a web browser. Multiple URI parameters can of course be set using the & character as separator, just as you would in the web browser. Camel does no tricks here. // we query for Camel at the Google page template.sendBody("http://www.google.com/search?q=Camel", null); 40.18.3. URI Parameters from the Message Map headers = new HashMap(); headers.put(Exchange.HTTP_QUERY, "q=Camel&lr=lang_en"); // we query for Camel and English language at Google template.sendBody("http://www.google.com/search", null, headers); In the header value above notice that it should not be prefixed with ? and you can separate parameters as usual with the & char. 40.18.4. Getting the Response Code You can get the HTTP response code from the HTTP component by getting the value from the Out message header with Exchange.HTTP_RESPONSE_CODE . Exchange exchange = template.send("http://www.google.com/search", new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(Exchange.HTTP_QUERY, constant("hl=en&q=activemq")); } }); Message out = exchange.getOut(); int responseCode = out.getHeader(Exchange.HTTP_RESPONSE_CODE, Integer.class); 40.19. 
Disabling Cookies To disable cookies you can set the HTTP Client to ignore cookies by adding the following URI option: 40.20. Basic auth with the streaming message body In order to avoid the NonRepeatableRequestException , you need to do the Preemptive Basic Authentication by adding the option: authenticationPreemptive=true 40.21. Advanced Usage If you need more control over the HTTP producer you should use the HttpComponent where you can set various classes to give you custom behavior. 40.21.1. Setting up SSL for HTTP Client Using the JSSE Configuration Utility The HTTP component supports SSL/TLS configuration through the Camel JSSE Configuration Utility . This utility greatly decreases the amount of component specific code you need to write and is configurable at the endpoint and component levels. The following examples demonstrate how to use the utility with the HTTP component. Programmatic configuration of the component KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource("/users/home/server/keystore.jks"); ksp.setPassword("keystorePassword"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword("keyPassword"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); HttpComponent httpComponent = getContext().getComponent("https", HttpComponent.class); httpComponent.setSslContextParameters(scp); Spring DSL based configuration of endpoint <camel:sslContextParameters id="sslContextParameters"> <camel:keyManagers keyPassword="keyPassword"> <camel:keyStore resource="/users/home/server/keystore.jks" password="keystorePassword"/> </camel:keyManagers> </camel:sslContextParameters> <to uri="https://127.0.0.1/mail/?sslContextParameters=#sslContextParameters"/> Configuring Apache HTTP Client Directly Basically camel-http component is built on the top of Apache HttpClient . Please refer to SSL/TLS customization for details or have a look into the org.apache.camel.component.http.HttpsServerTestSupport unit test base class. You can also implement a custom org.apache.camel.component.http.HttpClientConfigurer to do some configuration on the http client if you need full control of it. However if you just want to specify the keystore and truststore you can do this with Apache HTTP HttpClientConfigurer , for example: KeyStore keystore = ...; KeyStore truststore = ...; SchemeRegistry registry = new SchemeRegistry(); registry.register(new Scheme("https", 443, new SSLSocketFactory(keystore, "mypassword", truststore))); And then you need to create a class that implements HttpClientConfigurer , and registers https protocol providing a keystore or truststore per example above. Then, from your camel route builder class you can hook it up like so: HttpComponent httpComponent = getContext().getComponent("http", HttpComponent.class); httpComponent.setHttpClientConfigurer(new MyHttpClientConfigurer()); If you are doing this using the Spring DSL, you can specify your HttpClientConfigurer using the URI. For example: <bean id="myHttpClientConfigurer" class="my.https.HttpClientConfigurer"> </bean> <to uri="https://myhostname.com:443/myURL?httpClientConfigurer=myHttpClientConfigurer"/> As long as you implement the HttpClientConfigurer and configure your keystore and truststore as described above, it will work fine. Using HTTPS to authenticate gotchas An end user reported that he had problem with authenticating with HTTPS. The problem was eventually resolved by providing a custom configured org.apache.http.protocol.HttpContext : 1. 
Create a (Spring) factory for HttpContexts: public class HttpContextFactory { private String httpHost = "localhost"; private String httpPort = 9001; private BasicHttpContext httpContext = new BasicHttpContext(); private BasicAuthCache authCache = new BasicAuthCache(); private BasicScheme basicAuth = new BasicScheme(); public HttpContext getObject() { authCache.put(new HttpHost(httpHost, httpPort), basicAuth); httpContext.setAttribute(ClientContext.AUTH_CACHE, authCache); return httpContext; } // getter and setter } 2. Declare an HttpContext in the Spring application context file: <bean id="myHttpContext" factory-bean="httpContextFactory" factory-method="getObject"/> 3. Reference the context in the http URL: <to uri="https://myhostname.com:443/myURL?httpContext=myHttpContext"/> Using different SSLContextParameters The HTTP component only support one instance of org.apache.camel.support.jsse.SSLContextParameters per component. If you need to use 2 or more different instances, then you need to setup multiple HTTP components as shown below. Where we have 2 components, each using their own instance of sslContextParameters property. <bean id="http-foo" class="org.apache.camel.component.http.HttpComponent"> <property name="sslContextParameters" ref="sslContextParams1"/> <property name="x509HostnameVerifier" ref="hostnameVerifier"/> </bean> <bean id="http-bar" class="org.apache.camel.component.http.HttpComponent"> <property name="sslContextParameters" ref="sslContextParams2"/> <property name="x509HostnameVerifier" ref="hostnameVerifier"/> </bean> 40.22. Spring Boot Auto-Configuration The component supports 38 options, which are listed below. Name Description Default Type camel.component.http.allow-java-serialized-object Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false Boolean camel.component.http.auth-caching-disabled Disables authentication scheme caching. false Boolean camel.component.http.automatic-retries-disabled Disables automatic request recovery and re-execution. false Boolean camel.component.http.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.http.client-connection-manager To use a custom and shared HttpClientConnectionManager to manage connections. If this has been configured then this is always used for all endpoints created by this component. The option is a org.apache.http.conn.HttpClientConnectionManager type. HttpClientConnectionManager camel.component.http.connect-timeout Determines the timeout in milliseconds until a connection is established. A timeout value of zero is interpreted as an infinite timeout. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). -1 Integer camel.component.http.connection-request-timeout The timeout in milliseconds used when requesting a connection from the connection manager. A timeout value of zero is interpreted as an infinite timeout. 
A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). -1 Integer camel.component.http.connection-state-disabled Disables connection state tracking. false Boolean camel.component.http.connection-time-to-live The time for connection to live, the time unit is millisecond, the default value is always keep alive. Long camel.component.http.connections-per-route The maximum number of connections per route. 20 Integer camel.component.http.content-compression-disabled Disables automatic content decompression. false Boolean camel.component.http.cookie-management-disabled Disables state (cookie) management. false Boolean camel.component.http.cookie-store To use a custom org.apache.http.client.CookieStore. By default the org.apache.http.impl.client.BasicCookieStore is used which is an in-memory only cookie store. Notice if bridgeEndpoint=true then the cookie store is forced to be a noop cookie store as cookie shouldn't be stored as we are just bridging (eg acting as a proxy). The option is a org.apache.http.client.CookieStore type. CookieStore camel.component.http.copy-headers If this option is true then IN exchange headers will be copied to OUT exchange headers according to copy strategy. Setting this to false, allows to only include the headers from the HTTP response (not propagating IN headers). true Boolean camel.component.http.default-user-agent-disabled Disables the default user agent set by this builder if none has been provided by the user. false Boolean camel.component.http.enabled Whether to enable auto configuration of the http component. This is enabled by default. Boolean camel.component.http.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.http.http-binding To use a custom HttpBinding to control the mapping between Camel message and HttpClient. The option is a org.apache.camel.http.common.HttpBinding type. HttpBinding camel.component.http.http-client-configurer To use the custom HttpClientConfigurer to perform configuration of the HttpClient that will be used. The option is a org.apache.camel.component.http.HttpClientConfigurer type. HttpClientConfigurer camel.component.http.http-configuration To use the shared HttpConfiguration as base configuration. The option is a org.apache.camel.http.common.HttpConfiguration type. HttpConfiguration camel.component.http.http-context To use a custom org.apache.http.protocol.HttpContext when executing requests. The option is a org.apache.http.protocol.HttpContext type. HttpContext camel.component.http.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.http.max-total-connections The maximum number of connections. 200 Integer camel.component.http.proxy-auth-domain Proxy authentication domain to use. 
String camel.component.http.proxy-auth-host Proxy authentication host. String camel.component.http.proxy-auth-method Proxy authentication method to use. String camel.component.http.proxy-auth-nt-host Proxy authentication domain (workstation name) to use with NTLM. String camel.component.http.proxy-auth-password Proxy authentication password. String camel.component.http.proxy-auth-port Proxy authentication port. Integer camel.component.http.proxy-auth-username Proxy authentication username. String camel.component.http.redirect-handling-disabled Disables automatic redirect handling. false Boolean camel.component.http.response-payload-streaming-threshold This threshold in bytes controls whether the response payload should be stored in memory as a byte array or be streaming based. Set this to -1 to always use streaming mode. 8192 Integer camel.component.http.skip-request-headers Whether to skip mapping all the Camel headers as HTTP request headers. If no data from Camel headers needs to be included in the HTTP request, this can avoid parsing overhead with many object allocations for the JVM garbage collector. false Boolean camel.component.http.skip-response-headers Whether to skip mapping all the HTTP response headers to Camel headers. If no data is needed from the HTTP headers, this can avoid parsing overhead with many object allocations for the JVM garbage collector. false Boolean camel.component.http.socket-timeout Defines the socket timeout in milliseconds, which is the timeout for waiting for data or, put differently, the maximum period of inactivity between two consecutive data packets. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). -1 Integer camel.component.http.ssl-context-parameters To configure security using SSLContextParameters. Important: Only one instance of org.apache.camel.support.jsse.SSLContextParameters is supported per HttpComponent. If you need to use 2 or more different instances, you need to define a new HttpComponent per instance you need. The option is a org.apache.camel.support.jsse.SSLContextParameters type. SSLContextParameters camel.component.http.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.http.x509-hostname-verifier To use a custom X509HostnameVerifier such as DefaultHostnameVerifier or NoopHostnameVerifier. The option is a javax.net.ssl.HostnameVerifier type. HostnameVerifier | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-http-starter</artifactId> </dependency>",
"http:hostname[:port][/resourceUri][?options]",
"http://httpUri",
"HttpServletRequest request = exchange.getIn().getBody(HttpServletRequest.class); HttpServletResponse response = exchange.getIn().getBody(HttpServletResponse.class);",
"from(\"direct:start\") .to(\"http://oldhost\");",
"<camelContext xmlns=\"http://activemq.apache.org/camel/schema/spring\"> <route> <from uri=\"direct:start\"/> <to uri=\"http://oldhost\"/> </route> </camelContext>",
"from(\"direct:start\") .setHeader(Exchange.HTTP_URI, constant(\"http://newhost\")) .to(\"http://oldhost\");",
"from(\"direct:start\") .to(\"http://oldhost?order=123&detail=short\");",
"from(\"direct:start\") .setHeader(Exchange.HTTP_QUERY, constant(\"order=123&detail=short\")) .to(\"http://oldhost\");",
"from(\"direct:start\") .setHeader(Exchange.HTTP_METHOD, constant(org.apache.camel.component.http.HttpMethods.POST)) .to(\"http://www.google.com\") .to(\"mock:results\");",
".setHeader(\"CamelHttpMethod\", constant(\"POST\"))",
"<camelContext xmlns=\"http://activemq.apache.org/camel/schema/spring\"> <route> <from uri=\"direct:start\"/> <setHeader name=\"CamelHttpMethod\"> <constant>POST</constant> </setHeader> <to uri=\"http://www.google.com\"/> <to uri=\"mock:results\"/> </route> </camelContext>",
"from(\"direct:start\") .to(\"http://oldhost?proxyAuthHost=www.myproxy.com&proxyAuthPort=80\");",
"context.getGlobalOptions().put(\"http.proxyHost\", \"172.168.18.9\"); context.getGlobalOptions().put(\"http.proxyPort\", \"8080\");",
"<camelContext> <properties> <property key=\"http.proxyHost\" value=\"172.168.18.9\"/> <property key=\"http.proxyPort\" value=\"8080\"/> </properties> </camelContext>",
"exchange.setProperty(Exchange.CHARSET_NAME, \"ISO-8859-1\");",
"from(\"timer://foo?fixedRate=true&delay=0&period=10000\") .to(\"http://www.google.com\") .setHeader(FileComponent.HEADER_FILE_NAME, \"message.html\") .to(\"file:target/google\");",
"// we query for Camel at the Google page template.sendBody(\"http://www.google.com/search?q=Camel\", null);",
"Map headers = new HashMap(); headers.put(Exchange.HTTP_QUERY, \"q=Camel&lr=lang_en\"); // we query for Camel and English language at Google template.sendBody(\"http://www.google.com/search\", null, headers);",
"Exchange exchange = template.send(\"http://www.google.com/search\", new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(Exchange.HTTP_QUERY, constant(\"hl=en&q=activemq\")); } }); Message out = exchange.getOut(); int responseCode = out.getHeader(Exchange.HTTP_RESPONSE_CODE, Integer.class);",
"httpClient.cookieSpec=ignore",
"KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource(\"/users/home/server/keystore.jks\"); ksp.setPassword(\"keystorePassword\"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword(\"keyPassword\"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); HttpComponent httpComponent = getContext().getComponent(\"https\", HttpComponent.class); httpComponent.setSslContextParameters(scp);",
"<camel:sslContextParameters id=\"sslContextParameters\"> <camel:keyManagers keyPassword=\"keyPassword\"> <camel:keyStore resource=\"/users/home/server/keystore.jks\" password=\"keystorePassword\"/> </camel:keyManagers> </camel:sslContextParameters> <to uri=\"https://127.0.0.1/mail/?sslContextParameters=#sslContextParameters\"/>",
"KeyStore keystore = ...; KeyStore truststore = ...; SchemeRegistry registry = new SchemeRegistry(); registry.register(new Scheme(\"https\", 443, new SSLSocketFactory(keystore, \"mypassword\", truststore)));",
"HttpComponent httpComponent = getContext().getComponent(\"http\", HttpComponent.class); httpComponent.setHttpClientConfigurer(new MyHttpClientConfigurer());",
"<bean id=\"myHttpClientConfigurer\" class=\"my.https.HttpClientConfigurer\"> </bean> <to uri=\"https://myhostname.com:443/myURL?httpClientConfigurer=myHttpClientConfigurer\"/>",
"public class HttpContextFactory { private String httpHost = \"localhost\"; private String httpPort = 9001; private BasicHttpContext httpContext = new BasicHttpContext(); private BasicAuthCache authCache = new BasicAuthCache(); private BasicScheme basicAuth = new BasicScheme(); public HttpContext getObject() { authCache.put(new HttpHost(httpHost, httpPort), basicAuth); httpContext.setAttribute(ClientContext.AUTH_CACHE, authCache); return httpContext; } // getter and setter }",
"<bean id=\"myHttpContext\" factory-bean=\"httpContextFactory\" factory-method=\"getObject\"/>",
"<to uri=\"https://myhostname.com:443/myURL?httpContext=myHttpContext\"/>",
"<bean id=\"http-foo\" class=\"org.apache.camel.component.http.HttpComponent\"> <property name=\"sslContextParameters\" ref=\"sslContextParams1\"/> <property name=\"x509HostnameVerifier\" ref=\"hostnameVerifier\"/> </bean> <bean id=\"http-bar\" class=\"org.apache.camel.component.http.HttpComponent\"> <property name=\"sslContextParameters\" ref=\"sslContextParams2\"/> <property name=\"x509HostnameVerifier\" ref=\"hostnameVerifier\"/> </bean>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-http-component-starter |
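To make the Spring Boot auto-configuration table in section 40.22 above more concrete, the following application.properties sketch sets a handful of the listed options. The property keys are taken directly from the table; the timeout values and flags shown are illustrative examples only, not recommended defaults.

# fail connection attempts after 5 seconds and idle reads after 30 seconds
camel.component.http.connect-timeout=5000
camel.component.http.socket-timeout=30000
# make the documented connection pool defaults explicit
camel.component.http.max-total-connections=200
camel.component.http.connections-per-route=20
# copy response headers back to Camel, but skip mapping Camel headers onto the request
camel.component.http.copy-headers=true
camel.component.http.skip-request-headers=true

Because these are component-level options, they act as defaults for every http endpoint created by that component; endpoint URI options can still be used where a per-route override is needed.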
Working with connected applications | Working with connected applications Red Hat OpenShift AI Self-Managed 2.18 Connect to applications from Red Hat OpenShift AI Self-Managed | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_connected_applications/index |
20.38. Interface Commands | 20.38. Interface Commands The following commands manipulate host interfaces and as such should not be run from the guest virtual machine. These commands should be run from a terminal on the host physical machine. Warning The commands in this section are only supported if the machine has the NetworkManager service disabled, and is using the network service instead. Often, these host interfaces can then be used by name within guest virtual machine <interface> elements (such as a system-created bridge interface), but there is no requirement that host interfaces be tied to any particular guest configuration XML at all. Many of the commands for host interfaces are similar to the ones used for guest virtual machines, and the way to name an interface is either by its name or its MAC address. However, using a MAC address for an iface argument only works when that address is unique (if an interface and a bridge share the same MAC address, which is often the case, then using that MAC address results in an error due to ambiguity, and you must resort to a name instead). 20.38.1. Defining and Starting a Host Physical Machine Interface via an XML File The virsh iface-define file command defines a host interface from an XML file. This command will only define the interface and will not start it. To start an interface which has already been defined, run iface-start interface , where interface is the interface name. 20.38.2. Editing the XML Configuration File for the Host Interface The command virsh iface-edit interface edits the XML configuration file for a host interface. This is the only recommended way to edit the XML configuration file. (For more information about these files, see Chapter 23, Manipulating the Domain XML .) 20.38.3. Listing Host Interfaces The virsh iface-list command displays a list of active host interfaces. If --all is specified, this list will also include interfaces that are defined but are inactive. If --inactive is specified, only the inactive interfaces will be listed. 20.38.4. Converting a MAC Address into an Interface Name The virsh iface-name interface command converts a host interface MAC address to an interface name, provided the MAC address is unique among the host's interfaces. This command requires interface which is the interface's MAC address. The virsh iface-mac interface command will convert a host's interface name to its MAC address, where in this case interface is the interface name. 20.38.5. Stopping and Undefining a Specific Host Physical Machine Interface The virsh iface-destroy interface command destroys (stops) a given host interface, which is the same as running virsh if-down on the host. This command will disable that interface from active use and takes effect immediately. To undefine the interface, use the virsh iface-undefine interface command along with the interface name. 20.38.6. Displaying the Host Configuration File The virsh iface-dumpxml interface --inactive command displays the host interface information as an XML dump to stdout. If the --inactive argument is specified, then the output reflects the persistent state of the interface that will be used the next time it is started. 20.38.7. Creating Bridge Devices The virsh iface-bridge command creates a bridge device named bridge , and attaches the existing network device interface to the new bridge, which starts working immediately, with STP enabled and a delay of 0. Note that these settings can be altered with the --no-stp option, --no-start option, and a number of seconds for delay.
The IP address configuration of the interface will be moved to the new bridge device. For information on tearing down the bridge, see Section 20.38.8, "Tearing Down a Bridge Device" 20.38.8. Tearing Down a Bridge Device The virsh iface-unbridge bridge --no-start command tears down a specified bridge device named bridge , releases its underlying interface back to normal usage, and moves all IP address configuration from the bridge device to the underlying device. The underlying interface is restarted unless --no-start argument is used, but keep in mind not restarting is generally not recommended. For the command to create a bridge, see Section 20.38.7, "Creating Bridge Devices" . 20.38.9. Manipulating Interface Snapshots The virsh iface-begin command creates a snapshot of current host interface settings, which can later be committed (with virsh iface-commit ) or restored ( virsh iface-rollback ). This is useful for situations where something fails when defining and starting a new host interface, and a system misconfiguration occurs. If a snapshot already exists, then this command will fail until the snapshot has been committed or restored. Undefined behavior will result if any external changes are made to host interfaces outside of the libvirt API between the time of the creation of a snapshot and its eventual commit or rollback. Use the virsh iface-commit command to declare all changes made since the last virsh iface-begin as working, and then delete the rollback point. If no interface snapshot has already been started using virsh iface-begin , then this command will fail. Use the virsh iface-rollback to revert all host interface settings back to the state that recorded the last time the virsh iface-begin command was executed. If virsh iface-begin command had not been previously executed, then virsh iface-rollback will fail. Note that if the host physical machine is rebooted before virsh iface-commit is run, an automatic rollback will be performed which will restore the host's configuration to the state it was at the time that the virsh iface-begin was executed. This is useful in cases where an improper change to the network configuration renders the host unreachable for purposes of undoing the change, but the host is either power-cycled or otherwise forced to reboot. Example 20.97. An example of working with snapshots Define and start a new host interface. If something fails and the network stops running, roll back the changes. If everything works properly, commit the changes. | [
"virsh iface-define iface.xml",
"virsh iface-bridge interface bridge",
"virsh iface-begin virsh iface-define eth4-if.xml virsh if-start eth4",
"virsh iface-rollback",
"virsh iface-commit"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guest_virtual_machines_with_virsh-interface_commands |
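As a short, concrete companion to sections 20.38.7 and 20.38.8 above, the sketch below bridges a host interface into a new bridge and then tears the bridge down again. The device names eth0 and br0 are hypothetical examples; substitute the interfaces that actually exist on your host, and remember that the IP configuration moves onto the bridge when it is created and back to the underlying device when it is unbridged.

# virsh iface-bridge eth0 br0
# virsh iface-list --all
# virsh iface-dumpxml br0
# virsh iface-unbridge br0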
Chapter 23. Setting system resource limits for applications by using control groups | Chapter 23. Setting system resource limits for applications by using control groups Using the control groups ( cgroups ) kernel functionality, you can control resource usage of applications to use them more efficiently. You can use cgroups for the following tasks: Setting limits for system resource allocation. Prioritizing the allocation of hardware resources to specific processes. Isolating certain processes from obtaining hardware resources. 23.1. Introducing control groups Using the control groups Linux kernel feature, you can organize processes into hierarchically ordered groups - cgroups . You define the hierarchy (control groups tree) by providing structure to cgroups virtual file system, mounted by default on the /sys/fs/cgroup/ directory. The systemd service manager uses cgroups to organize all units and services that it governs. Manually, you can manage the hierarchies of cgroups by creating and removing sub-directories in the /sys/fs/cgroup/ directory. The resource controllers in the kernel then modify the behavior of processes in cgroups by limiting, prioritizing or allocating system resources, of those processes. These resources include the following: CPU time Memory Network bandwidth Combinations of these resources The primary use case of cgroups is aggregating system processes and dividing hardware resources among applications and users. This makes it possible to increase the efficiency, stability, and security of your environment. Control groups version 1 Control groups version 1 ( cgroups-v1 ) provide a per-resource controller hierarchy. Each resource, such as CPU, memory, or I/O, has its own control group hierarchy. You can combine different control group hierarchies in a way that one controller can coordinate with another in managing their respective resources. However, when the two controllers belong to different process hierarchies, the coordination is limited. The cgroups-v1 controllers were developed across a large time span, resulting in inconsistent behavior and naming of their control files. Control groups version 2 Control groups version 2 ( cgroups-v2 ) provide a single control group hierarchy against which all resource controllers are mounted. The control file behavior and naming is consistent among different controllers. Note cgroups-v2 is fully supported in RHEL 8.2 and later versions. For more information, see Control Group v2 is now fully supported in RHEL 8 . Additional resources Introducing kernel resource controllers The cgroups(7) manual page Role of systemd in control groups 23.2. Introducing kernel resource controllers Kernel resource controllers enable the functionality of control groups. RHEL 8 supports various controllers for control groups version 1 ( cgroups-v1 ) and control groups version 2 ( cgroups-v2 ). A resource controller, also called a control group subsystem, is a kernel subsystem that represents a single resource, such as CPU time, memory, network bandwidth or disk I/O. The Linux kernel provides a range of resource controllers that are mounted automatically by the systemd service manager. You can find a list of the currently mounted resource controllers in the /proc/cgroups file. Controllers available for cgroups-v1 : blkio Sets limits on input/output access to and from block devices. cpu Adjusts the parameters of the Completely Fair Scheduler (CFS) for a control group's tasks. 
The cpu controller is mounted together with the cpuacct controller on the same mount. cpuacct Creates automatic reports on CPU resources used by tasks in a control group. The cpuacct controller is mounted together with the cpu controller on the same mount. cpuset Restricts control group tasks to run only on a specified subset of CPUs and to direct the tasks to use memory only on specified memory nodes. devices Controls access to devices for tasks in a control group. freezer Suspends or resumes tasks in a control group. memory Sets limits on memory use by tasks in a control group and generates automatic reports on memory resources used by those tasks. net_cls Tags network packets with a class identifier ( classid ) that enables the Linux traffic controller (the tc command) to identify packets that originate from a particular control group task. A subsystem of net_cls , the net_filter (iptables), can also use this tag to perform actions on such packets. The net_filter tags network sockets with a firewall identifier ( fwid ) that allows the Linux firewall to identify packets that originate from a particular control group task (by using the iptables command). net_prio Sets the priority of network traffic. pids Sets limits for multiple processes and their children in a control group. perf_event Groups tasks for monitoring by the perf performance monitoring and reporting utility. rdma Sets limits on Remote Direct Memory Access/InfiniBand specific resources in a control group. hugetlb Limits the usage of large size virtual memory pages by tasks in a control group. Controllers available for cgroups-v2 : io Sets limits on input/output access to and from block devices. memory Sets limits on memory use by tasks in a control group and generates automatic reports on memory resources used by those tasks. pids Sets limits for multiple processes and their children in a control group. rdma Sets limits on Remote Direct Memory Access/InfiniBand specific resources in a control group. cpu Adjusts the parameters of the Completely Fair Scheduler (CFS) for a control group's tasks and creates automatic reports on CPU resources used by tasks in a control group. cpuset Restricts control group tasks to run only on a specified subset of CPUs and to direct the tasks to use memory only on specified memory nodes. Supports only the core functionality ( cpus{,.effective} , mems{,.effective} ) with a new partition feature. perf_event Groups tasks for monitoring by the perf performance monitoring and reporting utility. perf_event is enabled automatically on the v2 hierarchy. Important A resource controller can be used either in a cgroups-v1 hierarchy or a cgroups-v2 hierarchy, not simultaneously in both. Additional resources The cgroups(7) manual page Documentation in /usr/share/doc/kernel-doc-<kernel_version>/Documentation/cgroups-v1/ directory (after installing the kernel-doc package). 23.3. Introducing namespaces Namespaces create separate spaces for organizing and identifying software objects. This keeps them from affecting each other. As a result, each software object contains its own set of resources, for example, a mount point, a network device, or a hostname, even though they are sharing the same system. One of the most common technologies that use namespaces is containers. Changes to a particular global resource are visible only to processes in that namespace and do not affect the rest of the system or other namespaces.
To inspect which namespaces a process is a member of, you can check the symbolic links in the /proc/< PID >/ns/ directory. Table 23.1. Supported namespaces and resources which they isolate: Namespace Isolates Mount Mount points UTS Hostname and NIS domain name IPC System V IPC, POSIX message queues PID Process IDs Network Network devices, stacks, ports, etc User User and group IDs Control groups Control group root directory Additional resources The namespaces(7) and cgroup_namespaces(7) manual pages 23.4. Setting CPU limits to applications using cgroups-v1 To configure CPU limits for an application by using control groups version 1 ( cgroups-v1 ), use the /sys/fs/ virtual file system. Prerequisites You have root permissions. You have an application whose CPU consumption you want to restrict installed on your system. You verified that the cgroups-v1 controllers are mounted: Procedure Identify the process ID (PID) of the application that you want to restrict in CPU consumption: The sha1sum example application with PID 6955 consumes a large amount of CPU resources. Create a sub-directory in the cpu resource controller directory: This directory represents a control group, where you can place specific processes and apply certain CPU limits to the processes. At the same time, a number of cgroups-v1 interface files and cpu controller-specific files will be created in the directory. Optional: Inspect the newly created control group: Files, such as cpuacct.usage , cpu.cfs_period_us represent specific configurations and/or limits, which can be set for processes in the Example control group. Note that the file names are prefixed with the name of the control group controller they belong to. By default, the newly created control group inherits access to the system's entire CPU resources without a limit. Configure CPU limits for the control group: The cpu.cfs_period_us file represents how frequently a control group's access to CPU resources must be reallocated. The time period is in microseconds (µs, "us"). The upper limit is 1 000 000 microseconds and the lower limit is 1000 microseconds. The cpu.cfs_quota_us file represents the total amount of time in microseconds for which all processes in a control group can collectively run during one period, as defined by cpu.cfs_period_us . When processes in a control group use up all the time specified by the quota during a single period, they are throttled for the remainder of the period and not allowed to run until the next period. The lower limit is 1000 microseconds. The example commands above set the CPU time limits so that all processes collectively in the Example control group will be able to run only for 0.2 seconds (defined by cpu.cfs_quota_us ) out of every 1 second (defined by cpu.cfs_period_us ). Optional: Verify the limits: Add the application's PID to the Example control group: This command ensures that a specific application becomes a member of the Example control group and does not exceed the CPU limits configured for the Example control group. The PID must represent an existing process in the system. The PID 6955 here was assigned to the sha1sum /dev/zero & process, used to illustrate the use case of the cpu controller. Verification Verify that the application runs in the specified control group: The application's process runs in the Example control group, which applies CPU limits to the application's process. Identify the current CPU consumption of your throttled application: Note that the CPU consumption of the PID 6955 has decreased from 99% to 20%.
Note The cgroups-v2 counterpart for cpu.cfs_period_us and cpu.cfs_quota_us is the cpu.max file. The cpu.max file is available through the cpu controller. Additional resources Introducing kernel resource controllers cgroups(7) , sysfs(5) manual pages | [
"mount -l | grep cgroup tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpu,cpuacct) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)",
"top top - 11:34:09 up 11 min, 1 user, load average: 0.51, 0.27, 0.22 Tasks: 267 total, 3 running, 264 sleeping, 0 stopped, 0 zombie %Cpu(s): 49.0 us, 3.3 sy, 0.0 ni, 47.5 id, 0.0 wa, 0.2 hi, 0.0 si, 0.0 st MiB Mem : 1826.8 total, 303.4 free, 1046.8 used, 476.5 buff/cache MiB Swap: 1536.0 total, 1396.0 free, 140.0 used. 616.4 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6955 root 20 0 228440 1752 1472 R 99.3 0.1 0:32.71 sha1sum 5760 jdoe 20 0 3603868 205188 64196 S 3.7 11.0 0:17.19 gnome-shell 6448 jdoe 20 0 743648 30640 19488 S 0.7 1.6 0:02.73 gnome-terminal- 1 root 20 0 245300 6568 4116 S 0.3 0.4 0:01.87 systemd 505 root 20 0 0 0 0 I 0.3 0.0 0:00.75 kworker/u4:4-events_unbound",
"mkdir /sys/fs/cgroup/cpu/Example/",
"ll /sys/fs/cgroup/cpu/Example/ -rw-r- r--. 1 root root 0 Mar 11 11:42 cgroup.clone_children -rw-r- r--. 1 root root 0 Mar 11 11:42 cgroup.procs -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.stat -rw-r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_all -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_percpu -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_percpu_sys -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_percpu_user -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_sys -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_user -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.cfs_period_us -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.cfs_quota_us -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.rt_period_us -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.rt_runtime_us -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.shares -r- r- r--. 1 root root 0 Mar 11 11:42 cpu.stat -rw-r- r--. 1 root root 0 Mar 11 11:42 notify_on_release -rw-r- r--. 1 root root 0 Mar 11 11:42 tasks",
"echo \"1000000\" > /sys/fs/cgroup/cpu/Example/cpu.cfs_period_us echo \"200000\" > /sys/fs/cgroup/cpu/Example/cpu.cfs_quota_us",
"cat /sys/fs/cgroup/cpu/Example/cpu.cfs_period_us /sys/fs/cgroup/cpu/Example/cpu.cfs_quota_us 1000000 200000",
"echo \"6955\" > /sys/fs/cgroup/cpu/Example/cgroup.procs",
"cat /proc/6955/cgroup 12:cpuset:/ 11:hugetlb:/ 10:net_cls,net_prio:/ 9:memory:/user.slice/user-1000.slice/[email protected] 8:devices:/user.slice 7:blkio:/ 6:freezer:/ 5:rdma:/ 4:pids:/user.slice/user-1000.slice/[email protected] 3:perf_event:/ 2:cpu,cpuacct:/Example 1:name=systemd:/user.slice/user-1000.slice/[email protected]/gnome-terminal-server.service",
"top top - 12:28:42 up 1:06, 1 user, load average: 1.02, 1.02, 1.00 Tasks: 266 total, 6 running, 260 sleeping, 0 stopped, 0 zombie %Cpu(s): 11.0 us, 1.2 sy, 0.0 ni, 87.5 id, 0.0 wa, 0.2 hi, 0.0 si, 0.2 st MiB Mem : 1826.8 total, 287.1 free, 1054.4 used, 485.3 buff/cache MiB Swap: 1536.0 total, 1396.7 free, 139.2 used. 608.3 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6955 root 20 0 228440 1752 1472 R 20.6 0.1 47:11.43 sha1sum 5760 jdoe 20 0 3604956 208832 65316 R 2.3 11.2 0:43.50 gnome-shell 6448 jdoe 20 0 743836 31736 19488 S 0.7 1.7 0:08.25 gnome-terminal- 505 root 20 0 0 0 0 I 0.3 0.0 0:03.39 kworker/u4:4-events_unbound 4217 root 20 0 74192 1612 1320 S 0.3 0.1 0:01.19 spice-vdagentd"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/setting-limits-for-applications_managing-monitoring-and-updating-the-kernel |
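To spell out the arithmetic behind section 23.4: the CPU share granted to a control group is cpu.cfs_quota_us divided by cpu.cfs_period_us, so the 200000/1000000 values used in the procedure above correspond to 20% of one CPU. The sketch below assumes the same Example control group already exists and would instead cap its processes at roughly half of one CPU; a quota larger than the period (for example 200000 with a 100000 period) would allow about two CPUs' worth of time.

# echo "100000" > /sys/fs/cgroup/cpu/Example/cpu.cfs_period_us
# echo "50000" > /sys/fs/cgroup/cpu/Example/cpu.cfs_quota_us
# cat /sys/fs/cgroup/cpu/Example/cpu.stat    # nr_throttled counts how often the group hit its limit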
Chapter 27. External Array Management (libStorageMgmt) | Chapter 27. External Array Management (libStorageMgmt) Red Hat Enterprise Linux 7 ships with a new external array management library called libStorageMgmt . 27.1. Introduction to libStorageMgmt The libStorageMgmt library is a storage array independent Application Programming Interface (API). As a developer, you can use this API to manage different storage arrays and leverage the hardware accelerated features. This library is used as a building block for other higher level management tools and applications. End system administrators can also use it as a tool to manually manage storage and automate storage management tasks with the use of scripts. With the libStorageMgmt library, you can perform the following operations: List storage pools, volumes, access groups, or file systems. Create and delete volumes, access groups, file systems, or NFS exports. Grant and remove access to volumes, access groups, or initiators. Replicate volumes with snapshots, clones, and copies. Create and delete access groups and edit members of a group. Server resources such as CPU and interconnect bandwidth are not utilized because the operations are all done on the array. The libstoragemgmt package provides: A stable C and Python API for client application and plug-in developers. A command-line interface that utilizes the library ( lsmcli ). A daemon that executes the plug-in ( lsmd ). A simulator plug-in that allows the testing of client applications ( sim ). Plug-in architecture for interfacing with arrays. Warning This library and its associated tool have the ability to destroy any and all data located on the arrays it manages. It is highly recommended to develop and test applications and scripts against the storage simulator plug-in to remove any logic errors before working with production systems. Testing applications and scripts on actual non-production hardware before deploying to production is also strongly encouraged if possible. The libStorageMgmt library in Red Hat Enterprise Linux 7 adds a default udev rule to handle the REPORTED LUNS DATA HAS CHANGED unit attention. When a storage configuration change has taken place, one of several Unit Attention ASC/ASCQ codes reports the change. A uevent is then generated and is rescanned automatically with sysfs . The file /lib/udev/rules.d/90-scsi-ua.rules contains example rules to enumerate other events that the kernel can generate. The libStorageMgmt library uses a plug-in architecture to accommodate differences in storage arrays. For more information on libStorageMgmt plug-ins and how to write them, see the Red Hat Developer Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-libstoragemgmt |
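A safe way to try the library before pointing it at real hardware is the simulator plug-in mentioned above. The following sketch is an assumption-laden example: it assumes the libstoragemgmt packages providing the lsmd daemon and the lsmcli client are installed, that the daemon runs as the libstoragemgmt service, and it uses the sim:// plug-in URI so that no data on a production array can be harmed.

# systemctl start libstoragemgmt
# lsmcli list --type SYSTEMS -u sim://
# lsmcli list --type POOLS -u sim://
# lsmcli list --type VOLUMES -u sim://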
Chapter 3. Supported configurations | Chapter 3. Supported configurations Command-line interface Red Hat Enterprise Linux 8 x86-64 and aarch64 Red Hat Enterprise Linux 9 x86-64 and aarch64 Linux x86-64 and aarch64 macOS x86-64 Windows x86-64 IBM Z and IBM LinuxONE (s390x) Router For use in Kubernetes-based sites and as a gateway for containers or machines. Red Hat Enterprise Linux 8 x86-64 and aarch64 Red Hat Enterprise Linux 9 x86-64 and aarch64 IBM Z and IBM LinuxONE (s390x) for containers. Note Red Hat Service Interconnect is not supported for standalone use as a messaging router. Red Hat Service Interconnect Operator The operator is supported with OpenShift 4.x only. OpenShift versions OpenShift 3.11 OpenShift 4.14, 4.15 and 4.16 ROSA and ARO OpenShift Container Platform and OpenShift Dedicated Installing Red Hat Service Interconnect in a disconnected network by mirroring the required components to the cluster is supported. Ingress types LoadBalancer OpenShift Routes CPU architecture x86-64, aarch64, and s390x Kubernetes distributions Red Hat provides assistance running Red Hat Service Interconnect on any CNCF-certified distribution of Kubernetes . Note, however, that Red Hat Service Interconnect is tested only on OpenShift. Ingress types Contour Nginx - This requires configuration for TLS passthrough NodePort Upgrades Red Hat supports upgrades from one downstream minor version to the next, with no jumps. While Red Hat aims to have compatibility across minor versions, we recommend upgrading all sites to the latest version. Note If you have applications that require long-lived connections, for example Kafka clients, consider using a load balancer as ingress instead of a proxy ingress such as OpenShift route. If you use an OpenShift route as ingress, expect interruptions whenever routes are configured. For information about the latest release, see Red Hat Service Interconnect Supported Configurations . | null | https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/overview/supported-configurations
8.2.6. Restoration Issues | 8.2.6. Restoration Issues While backups are a daily occurrence, restorations are normally a less frequent event. However, restorations are inevitable; they will be necessary, so it is best to be prepared. The important thing to do is to look at the various restoration scenarios detailed throughout this section and determine ways to test your ability to actually carry them out. And keep in mind that the hardest one to test is also the most critical one. 8.2.6.1. Restoring From Bare Metal The phrase "restoring from bare metal" is a system administrator's way of describing the process of restoring a complete system backup onto a computer with absolutely no data of any kind on it -- no operating system, no applications, nothing. Overall, there are two basic approaches to bare metal restorations: Reinstall, followed by restore Here the base operating system is installed just as if a brand-new computer were being initially set up. Once the operating system is in place and configured properly, the remaining disk drives can be partitioned and formatted, and all backups restored from backup media. System recovery disks A system recovery disk is bootable media of some kind (often a CD-ROM) that contains a minimal system environment, able to perform most basic system administration tasks. The recovery environment contains the necessary utilities to partition and format disk drives, the device drivers necessary to access the backup device, and the software necessary to restore data from the backup media. Note Some computers have the ability to create bootable backup tapes and to actually boot from them to start the restoration process. However, this capability is not available to all computers. Most notably, computers based on the PC architecture do not lend themselves to this approach. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-disaster-backups-restore |
Chapter 5. Installing and configuring the Nexus Repository Manager plugin | Chapter 5. Installing and configuring the Nexus Repository Manager plugin The Nexus Repository Manager plugin displays the information about your build artifacts in your Developer Hub application. The build artifacts are available in the Nexus Repository Manager. Important The Nexus Repository Manager plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page. 5.1. Installation The Nexus Repository Manager plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled property to false as follows: global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-community-plugin-nexus-repository-manager disabled: false 5.2. Configuration Set the proxy to the desired Nexus Repository Manager server in the app-config.yaml file as follows: proxy: '/nexus-repository-manager': target: 'https://<NEXUS_REPOSITORY_MANAGER_URL>' headers: X-Requested-With: 'XMLHttpRequest' # Uncomment the following line to access a private Nexus Repository Manager using a token # Authorization: 'Bearer <YOUR TOKEN>' changeOrigin: true # Change to "false" in case of using self hosted Nexus Repository Manager instance with a self-signed certificate secure: true Optional: Change the base URL of Nexus Repository Manager proxy as follows: nexusRepositoryManager: # default path is `/nexus-repository-manager` proxyPath: /custom-path Optional: Enable the following experimental annotations: nexusRepositoryManager: experimentalAnnotations: true Annotate your entity using the following annotations: metadata: annotations: # insert the chosen annotations here # example nexus-repository-manager/docker.image-name: `<ORGANIZATION>/<REPOSITORY>`, | [
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-community-plugin-nexus-repository-manager disabled: false",
"proxy: '/nexus-repository-manager': target: 'https://<NEXUS_REPOSITORY_MANAGER_URL>' headers: X-Requested-With: 'XMLHttpRequest' # Uncomment the following line to access a private Nexus Repository Manager using a token # Authorization: 'Bearer <YOUR TOKEN>' changeOrigin: true # Change to \"false\" in case of using self hosted Nexus Repository Manager instance with a self-signed certificate secure: true",
"nexusRepositoryManager: # default path is `/nexus-repository-manager` proxyPath: /custom-path",
"nexusRepositoryManager: experimentalAnnotations: true",
"metadata: annotations: # insert the chosen annotations here # example nexus-repository-manager/docker.image-name: `<ORGANIZATION>/<REPOSITORY>`,"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/configuring_dynamic_plugins/installing-configuring-nexus-plugin |
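To put the annotation from the last step into context, a hedged sketch of a complete catalog-info.yaml entity follows. Only the nexus-repository-manager/docker.image-name annotation key comes from the documentation above; the component name, owner, and image coordinates are placeholders to replace with your own values.

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-service                # placeholder component name
  annotations:
    # experimental annotation read by the Nexus Repository Manager plugin
    nexus-repository-manager/docker.image-name: my-org/my-service
spec:
  type: service
  lifecycle: production
  owner: team-a                   # placeholder owner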
Chapter 9. IdM log files and directories | Chapter 9. IdM log files and directories Use the following sections to monitor, analyze, and troubleshoot the individual components of Identity Management (IdM): LDAP Apache web server Certificate system Kerberos DNS Custodia Additionally, you can monitor, analyze, and troubleshoot the IdM server and client and enable audit logging on an IdM server . 9.1. IdM server and client log files and directories The following table presents directories and files that the Identity Management (IdM) server and client use to log information. You can use the files and directories for troubleshooting installation errors. Directory or File Description /var/log/ipaserver-install.log The installation log for the IdM server. /var/log/ipareplica-install.log The installation log for the IdM replica. /var/log/ipaclient-install.log The installation log for the IdM client. /var/log/sssd/ Log files for SSSD. You can enable detailed logging for SSSD in the sssd.conf file or with the sssctl command . ~/.ipa/log/cli.log The log file for errors returned by remote procedure calls (RPCs) and responses by the ipa utility. Created in the home directory for the effective user that runs the tools. This user might have a different user name than the IdM user principal, that is the IdM user whose ticket granting ticket (TGT) has been obtained before attempting to perform the failed ipa commands. For example, if you are logged in to the system as root and have obtained the TGT of IdM admin , then the errors are logged in to the /root/.ipa/log/cli.log file. /etc/logrotate.d/ The log rotation policies for DNS, SSSD, Apache, Tomcat, and Kerberos. /etc/pki/pki-tomcat/logging.properties This link points to the default Certificate Authority logging configuration at /usr/share/pki/server/conf/logging.properties . Additional resources Troubleshooting IdM server installation Troubleshooting IdM client installation Troubleshooting IdM replica installation Troubleshooting authentication with SSSD in IdM 9.2. Directory Server log files The following table presents directories and files that the Identity Management (IdM) Directory Server (DS) instance uses to log information. You can use the files and directories for troubleshooting DS-related problems. Table 9.1. Directory Server log files Directory or file Description /var/log/dirsrv/slapd- REALM_NAME / Log files associated with the DS instance used by the IdM server. Most operational data recorded here are related to server-replica interactions. /var/log/dirsrv/slapd- REALM_NAME /audit Contains audit trails of all DS operations when auditing is enabled in the DS configuration. Note You can also audit the Apache error logs, where the IdM API logs access. However, because changes can be made directly over LDAP too, Red Hat recommends enabling the more comprehensive /var/log/dirsrv/slapd-REALM_NAME/audit log for auditing purposes. /var/log/dirsrv/slapd- REALM_NAME /access Contains detailed information about attempted access for the domain DS instance. /var/log/dirsrv/slapd- REALM_NAME /errors Contains detailed information about failed operations for the domain DS instance. Additional resources Monitoring Server and Database Activity Log File Reference 9.3. Enabling audit logging on an IdM server Follow this procedure to enable logging on an Identity Management (IdM) server for audit purposes. Using detailed logs, you can monitor data, troubleshoot issues, and examine suspicious activity on the network. 
Note The LDAP service may become slower if there are many LDAP changes logged, especially if the values are large. Prerequisites The Directory Manager password Procedure Bind to the LDAP server: Press [Enter]. Specify all the modifications you want to make, for example: Indicate the end of the ldapmodify command by entering EOF on a new line. Press [Enter] twice. Repeat the steps on all the other IdM servers on which you want to enable audit logging. Verification Open the /var/log/dirsrv/slapd-REALM_NAME/audit file: The fact that the file is not empty anymore confirms that auditing is enabled. Important The system logs the bound LDAP distinguished name (DN) of the entry that makes a change. For this reason, you might have to post-process the log. For example, in the IdM Directory Server, it is an ID override DN that represents the identity of an AD user that modified a record: Use the pysss_nss_idmap.getnamebysid Python command to look up an AD user if you have the user SID: Additional resources The audit log configuration options in Core server configuration attributes in the Red Hat Directory Server documentation How to enable Audit logging in IPA/IDM Server and Replica Servers (Red Hat Knowledgebase) Directory Server log files 9.4. Modifying error logging on an IdM server Follow this procedure to obtain debugging information about specific types of errors. The example focuses on obtaining detailed error logs about replication by setting the error log level to 8192. To record a different type of information, select a different number from the table in Error Log Logging Levels in the Red Hat Directory Server documentation. Note The LDAP service may become slower if there are many types of LDAP errors logged, especially if the values are large. Prerequisites The Directory Manager password. Procedure Bind to the LDAP server: Press [Enter]. Specify the modifications you want to make. For example to collect only logs related to replication: Press [Enter] twice, to indicate the end of the ldapmodify instruction. This displays the modifying entry "cn=config" message. Press [Ctrl+C] to exit the ldapmodify command. Repeat the steps on all the other IdM servers on which you want to collect detailed logs about replication errors. Important After you finish troubleshooting, set nsslapd-errorlog-level back to 0 to prevent performance problems. Additional resources The Directory Server error logging levels 9.5. The IdM Apache server log files The following table presents directories and files that the Identity Management (IdM) Apache Server uses to log information. Table 9.2. Apache Server log files Directory or File Description /var/log/httpd/ Log files for the Apache web server. /var/log/httpd/access_log Standard access and error logs for Apache servers. Messages specific to IdM are recorded along with the Apache messages because the IdM web UI and the RPC command-line interface use Apache. The access logs log mostly only the user principal and the URI used, which is often an RPC endpoint. The error logs contain the IdM server logs. /var/log/httpd/error_log Additional resources Log Files in the Apache documentation 9.6. Certificate System log files in IdM The following table presents directories and files that the Identity Management (IdM) Certificate System uses to log information. Table 9.3. Certificate System log files Directory or File Description /var/log/pki/pki-ca-spawn. time_of_installation .log The installation log for the IdM certificate authority (CA). /var/log/pki/pki-kra-spawn. 
time_of_installation .log The installation log for the IdM Key Recovery Authority (KRA). /var/log/pki/pki-tomcat/ The top level directory for PKI operation logs. Contains CA and KRA logs. /var/log/pki/pki-tomcat/ca/ Directory with logs related to certificate operations. In IdM, these logs are used for service principals, hosts, and other entities which use certificates. /var/log/pki/pki-tomcat/kra Directory with logs related to KRA. /var/log/messages Includes certificate error messages among other system messages. Additional resources Configuring subsystem logs in the Red Hat Certificate System Administration Guide 9.7. Kerberos log files in IdM The following table presents directories and files that Kerberos uses to log information in Identity Management (IdM). Table 9.4. Kerberos Log Files Directory or File Description /var/log/krb5kdc.log The primary log file for the Kerberos KDC server. /var/log/kadmind.log The primary log file for the Kerberos administration server. Locations for these files are configured in the krb5.conf file. They can be different on some systems. 9.8. DNS log files in IdM The following table presents directories and files that DNS uses to log information in Identity Management (IdM). Table 9.5. DNS log files Directory or File Description /var/log/messages Includes DNS error messages and other system messages. DNS logging in this file is not enabled by default. To enable it, enter the # /usr/sbin/rndc querylog command. The command results in the following lines being added to var/log/messages : Jun 26 17:37:33 r8server named-pkcs11[1445]: received control channel command 'querylog' Jun 26 17:37:33 r8server named-pkcs11[1445]: query logging is now on To disable logging, run the command again. 9.9. Custodia log files in IdM The following table presents directories and files that Custodia uses to log information in Identity Management (IdM). Table 9.6. Custodia Log Files Directory or File Description /var/log/custodia/ Log file directory for the Custodia service. 9.10. Additional resources Viewing Log Files . You can use journalctl to view the logging output of systemd unit files. | [
"ldapmodify -D \"cn=Directory Manager\" -W << EOF",
"dn: cn=config changetype: modify replace: nsslapd-auditlog-logging-enabled nsslapd-auditlog-logging-enabled: on - replace:nsslapd-auditlog nsslapd-auditlog: /var/log/dirsrv/slapd-REALM_NAME/audit - replace:nsslapd-auditlog-mode nsslapd-auditlog-mode: 600 - replace:nsslapd-auditlog-maxlogsize nsslapd-auditlog-maxlogsize: 100 - replace:nsslapd-auditlog-logrotationtime nsslapd-auditlog-logrotationtime: 1 - replace:nsslapd-auditlog-logrotationtimeunit nsslapd-auditlog-logrotationtimeunit: day",
"389-Directory/1.4.3.231 B2021.322.1803 server.idm.example.com:636 (/etc/dirsrv/slapd-IDM-EXAMPLE-COM) time: 20220607102705 dn: cn=config result: 0 changetype: modify replace: nsslapd-auditlog-logging-enabled nsslapd-auditlog-logging-enabled: on [...]",
"modifiersName: ipaanchoruuid=:sid:s-1-5-21-19610888-1443184010-1631745340-279100,cn=default trust view,cn=views,cn=accounts,dc=idma,dc=idm,dc=example,dc=com",
">>> import pysss_nss_idmap >>> pysss_nss_idmap.getnamebysid('S-1-5-21-1273159419-3736181166-4190138427-500')) {'S-1-5-21-1273159419-3736181166-4190138427-500': {'name': '[email protected]', 'type': 3}}",
"ldapmodify -x -D \"cn=directory manager\" -w <password>",
"dn: cn=config changetype: modify add: nsslapd-errorlog-level nsslapd-errorlog-level: 8192"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/accessing_identity_management_services/assembly_idm-log-files-and-directories_accessing-idm-services |
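Section 9.4 above tells you to set nsslapd-errorlog-level back to 0 after troubleshooting, but does not show that step. A short clean-up sketch, modeled on the ldapmodify commands already used in this chapter, follows; as in the procedure above, supply the Directory Manager password with the -w option.

# ldapmodify -x -D "cn=directory manager" -w <password>
dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 0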
Images | Images OpenShift Container Platform 4.10 Creating and managing images and imagestreams in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"registry.redhat.io",
"docker.io/openshift/jenkins-2-centos7",
"registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2",
"docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324",
"apiVersion: samples.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: architectures: - x86_64 managementState: Removed",
"oc edit configs.samples.operator.openshift.io/cluster -o yaml",
"apiVersion: samples.operator.openshift.io/v1 kind: Config",
"oc tag -d <image_stream_name:tag>",
"Deleted tag default/<image_stream_name:tag>.",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },",
"{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<server_architecture>",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> \\ --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-install",
"oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io",
"oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest",
"oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge",
"oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator",
"RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y",
"RUN yum -y install mypackage RUN yum -y install myotherpackage && yum clean all -y",
"FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile",
"FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y",
"RUN chgrp -R 0 /some/directory && chmod -R g=u /some/directory",
"LABEL io.openshift.tags mongodb,mongodb24,nosql",
"LABEL io.openshift.wants mongodb,redis",
"LABEL io.k8s.description The MySQL 5.5 Server with master-slave replication support",
"LABEL io.openshift.non-scalable true",
"LABEL io.openshift.min-memory 16Gi LABEL io.openshift.min-cpu 4",
"#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd",
"#!/bin/bash run the application /opt/application/run.sh",
"#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd",
"#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF",
"s2i create _<image name>_ _<destination directory>_",
"IMAGE_NAME = openshift/ruby-20-centos7 CONTAINER_ENGINE := USD(shell command -v podman 2> /dev/null | echo docker) build: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME) . .PHONY: test test: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME)-candidate . IMAGE_NAME=USD(IMAGE_NAME)-candidate test/run",
"podman build -t <builder_image_name>",
"docker build -t <builder_image_name>",
"podman run <builder_image_name> .",
"docker run <builder_image_name> .",
"s2i build file:///path-to-sample-app _<BUILDER_IMAGE_NAME>_ _<OUTPUT_APPLICATION_IMAGE_NAME>_",
"podman run <output_application_image_name>",
"docker run <output_application_image_name>",
"registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2",
"oc tag <source> <destination>",
"oc tag ruby:2.0 ruby:static-2.0",
"oc tag --alias=true <source> <destination>",
"oc delete istag/ruby:latest",
"oc tag -d ruby:latest",
"<image_stream_name>:<tag>",
"<image_stream_name>@<id>",
"openshift/ruby-20-centos7:2.0",
"registry.redhat.io/rhel7:latest",
"centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e",
"oc policy add-role-to-user system:image-puller system:serviceaccount:project-a:default --namespace=project-b",
"oc policy add-role-to-group system:image-puller system:serviceaccounts:project-a --namespace=project-b",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io/repository-main\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: \"2021-09-09T19:10:11Z\" name: pull-secret namespace: default resourceVersion: \"37676\" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque",
"oc create secret generic <pull_secret_name> --from-file=.dockercfg=<path/to/.dockercfg> --type=kubernetes.io/dockercfg",
"oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>",
"oc secrets link default <pull_secret_name> --for=pull",
"oc create secret docker-registry --docker-server=sso.redhat.com [email protected] --docker-password=******** --docker-email=unused redhat-connect-sso secret/redhat-connect-sso",
"oc create secret docker-registry --docker-server=privateregistry.example.com [email protected] --docker-password=******** --docker-email=unused private-registry secret/private-registry",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/generated-by: OpenShiftNewApp labels: app: ruby-sample-build template: application-template-stibuild name: origin-ruby-sample 1 namespace: test spec: {} status: dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample 2 tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 3 generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 4 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest 5",
"<image-stream-name>@<image-id>",
"origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d",
"tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest",
"<imagestream name>:<tag>",
"origin-ruby-sample:latest",
"apiVersion: image.openshift.io/v1 kind: ImageStreamMapping metadata: creationTimestamp: null name: origin-ruby-sample namespace: test tag: latest image: dockerImageLayers: - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ee1dd2cb6df21971f4af6de0f1d7782b81fb63156801cfde2bb47b4247c23c29 size: 196634330 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ca062656bff07f18bff46be00f40cfbb069687ec124ac0aa038fd676cfaea092 size: 177723024 - name: sha256:63d529c59c92843c395befd065de516ee9ed4995549f8218eac6ff088bfa6b6e size: 55679776 - name: sha256:92114219a04977b5563d7dff71ec4caa3a37a15b266ce42ee8f43dba9798c966 size: 11939149 dockerImageMetadata: Architecture: amd64 Config: Cmd: - /usr/libexec/s2i/run Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. /opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Labels: build-date: 2015-12-23 io.k8s.description: Platform for building and running Ruby 2.2 applications io.k8s.display-name: 172.30.56.218:5000/test/origin-ruby-sample:latest io.openshift.build.commit.author: Ben Parees <[email protected]> io.openshift.build.commit.date: Wed Jan 20 10:14:27 2016 -0500 io.openshift.build.commit.id: 00cadc392d39d5ef9117cbc8a31db0889eedd442 io.openshift.build.commit.message: 'Merge pull request #51 from php-coder/fix_url_and_sti' io.openshift.build.commit.ref: master io.openshift.build.image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e io.openshift.build.source-location: https://github.com/openshift/ruby-hello-world.git io.openshift.builder-base-version: 8d95148 io.openshift.builder-version: 8847438ba06307f86ac877465eadc835201241df io.openshift.s2i.scripts-url: image:///usr/libexec/s2i io.openshift.tags: builder,ruby,ruby22 io.s2i.scripts-url: image:///usr/libexec/s2i license: GPLv2 name: CentOS Base Image vendor: CentOS User: \"1001\" WorkingDir: /opt/app-root/src Container: 86e9a4a3c760271671ab913616c51c9f3cea846ca524bf07c04a6f6c9e103a76 ContainerConfig: AttachStdout: true Cmd: - /bin/sh - -c - tar -C /tmp -xf - && /usr/libexec/s2i/assemble Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Hostname: ruby-sample-build-1-build Image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e OpenStdin: true StdinOnce: true User: \"1001\" WorkingDir: /opt/app-root/src Created: 2016-01-29T13:40:00Z DockerVersion: 1.8.2.fc21 Id: 9d7fd5e2d15495802028c569d544329f4286dcd1c9c085ff5699218dbaa69b43 Parent: 57b08d979c86f4500dc8cad639c9518744c8dd39447c055a3517dc9c18d6fccd Size: 441976279 apiVersion: \"1.0\" kind: DockerImage dockerImageMetadataVersion: \"1.0\" dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d",
"oc describe is/<image-name>",
"oc describe is/python",
"Name: python Namespace: default Created: About a minute ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 1 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago",
"oc describe istag/<image-stream>:<tag-name>",
"oc describe istag/python:latest",
"Image Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Docker Image: centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Created: 2 minutes ago Image Size: 251.2 MB (first layer 2.898 MB, last binary layer 72.26 MB) Image Created: 2 weeks ago Author: <none> Arch: amd64 Entrypoint: container-entrypoint Command: /bin/sh -c USDSTI_SCRIPTS_PATH/usage Working Dir: /opt/app-root/src User: 1001 Exposes Ports: 8080/tcp Docker Labels: build-date=20170801",
"oc tag <image-name:tag1> <image-name:tag2>",
"oc tag python:3.5 python:latest",
"Tag python:latest set to python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25.",
"oc describe is/python",
"Name: python Namespace: default Created: 5 minutes ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 2 latest tagged from python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 5 minutes ago",
"oc tag <repository/image> <image-name:tag>",
"oc tag docker.io/python:3.6.0 python:3.6",
"Tag python:3.6 set to docker.io/python:3.6.0.",
"oc tag <image-name:tag> <image-name:latest>",
"oc tag python:3.6 python:latest",
"Tag python:latest set to python@sha256:438208801c4806548460b27bd1fbcb7bb188273d13871ab43f.",
"oc tag -d <image-name:tag>",
"oc tag -d python:3.5",
"Deleted tag default/python:3.5.",
"oc tag <repository/image> <image-name:tag> --scheduled",
"oc tag docker.io/python:3.6.0 python:3.6 --scheduled",
"Tag python:3.6 set to import docker.io/python:3.6.0 periodically.",
"oc tag <repositiory/image> <image-name:tag>",
"oc create secret generic <secret_name> --from-file=.dockerconfigjson=<file_absolute_path> --type=kubernetes.io/dockerconfigjson",
"oc import-image <imagestreamtag> --from=<image> --confirm",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io/repository-main\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: \"2021-09-09T19:10:11Z\" name: pull-secret namespace: default resourceVersion: \"37676\" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque",
"oc create secret generic <pull_secret_name> --from-file=.dockercfg=<path/to/.dockercfg> --type=kubernetes.io/dockercfg",
"oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>",
"oc secrets link default <pull_secret_name> --for=pull",
"oc set image-lookup mysql",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true",
"oc set image-lookup imagestream --list",
"oc set image-lookup deploy/mysql",
"apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql",
"oc set image-lookup deploy/mysql --enabled=false",
"Key: image.openshift.io/triggers Value: [ { \"from\": { \"kind\": \"ImageStreamTag\", 1 \"name\": \"example:latest\", 2 \"namespace\": \"myapp\" 3 }, \"fieldPath\": \"spec.template.spec.containers[?(@.name==\\\"web\\\")].image\", 4 \"paused\": false 5 }, ]",
"oc set triggers deploy/example --from-image=example:latest -c web",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.25.4+77bec7a ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.25.4+77bec7a ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.25.4+77bec7a ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.25.4+77bec7a ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.25.4+77bec7a ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.25.4+77bec7a",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /host/etc/containers/policy.json",
"{ \"default\":[ { \"type\":\"reject\" } ], \"transports\":{ \"atomic\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker-daemon\":{ \"\":[ { \"type\":\"insecureAcceptAnything\" } ] } } }",
"spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /host/etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"untrusted.com\" blocked = true",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /host/etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"insecure.com\" insecure = true",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /host/etc/containers/registries.conf.d/01-image-searchRegistries.conf",
"unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io']",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config",
"oc edit image.config.openshift.io cluster",
"spec: additionalTrustedCA: name: registry-config",
"skopeo copy docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 docker://example.io/example/ubi-minimal",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 - example.com/example/ubi-minimal 2 source: registry.access.redhat.com/ubi8/ubi-minimal 3 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 4 - mirrors: - mirror.example.com source: registry.redhat.io 5 - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 6 - mirrors: - mirror.example.net source: registry.example.com/example 7 - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 8",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.24.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.24.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.24.0 ip-10-0-147-35.ec2.internal Ready worker 7m v1.24.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.24.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.24.0",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi8/ubi-minimal\" mirror-by-digest-only = true [[registry.mirror]] location = \"example.io/example/ubi-minimal\" [[registry.mirror]] location = \"example.com/example/ubi-minimal\" [[registry]] prefix = \"\" location = \"registry.example.com\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/image\" [[registry]] prefix = \"\" location = \"registry.redhat.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com\" [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com/redhat\"",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6",
"oc create -f <filename>",
"oc create -f <filename> -n <project>",
"kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby\" creationTimestamp: null spec: dockerImageRepository: \"registry.redhat.io/rhscl/ruby-26-rhel7\" tags: - name: \"2.6\" annotations: description: \"Build and run Ruby 2.6 applications\" iconClass: \"icon-ruby\" tags: \"builder,ruby\" 1 supports: \"ruby:2.6,ruby\" version: \"2.6\"",
"oc process -f <filename> -l name=otherLabel",
"oc process --parameters -f <filename>",
"oc process --parameters -n <project> <template_name>",
"oc process --parameters -n openshift rails-postgresql-example",
"NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB",
"oc process -f <filename>",
"oc process <template_name>",
"oc process -f <filename> | oc create -f -",
"oc process <template> | oc create -f -",
"oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase",
"oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase | oc create -f -",
"cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase",
"oc process -f my-rails-postgresql --param-file=postgres.env",
"sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=-",
"oc edit template <template>",
"oc get templates -n openshift",
"apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: \"Description\" iconClass: \"icon-redis\" tags: \"database,nosql\" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: \"CakePHP MySQL Example (Ephemeral)\" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing.\" 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 4 tags: \"quickstart,php,cakephp\" 5 iconClass: icon-php 6 openshift.io/provider-display-name: \"Red Hat, Inc.\" 7 openshift.io/documentation-url: \"https://github.com/sclorg/cakephp-ex\" 8 openshift.io/support-url: \"https://access.redhat.com\" 9 message: \"Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}\" 10",
"kind: \"Template\" apiVersion: \"v1\" labels: template: \"cakephp-mysql-example\" 1 app: \"USD{NAME}\" 2",
"parameters: - name: USERNAME description: \"The user name for Joe\" value: joe",
"parameters: - name: PASSWORD description: \"The random user password\" generate: expression from: \"[a-zA-Z0-9]{12}\"",
"parameters: - name: singlequoted_example generate: expression from: '[\\A]{10}' - name: doublequoted_example generate: expression from: \"[\\\\A]{10}\"",
"{ \"parameters\": [ { \"name\": \"json_example\", \"generate\": \"expression\", \"from\": \"[\\\\A]{10}\" } ] }",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: \"USD{SOURCE_REPOSITORY_URL}\" 1 ref: \"USD{SOURCE_REPOSITORY_REF}\" contextDir: \"USD{CONTEXT_DIR}\" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: \"USD{{REPLICA_COUNT}}\" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: \"[a-zA-Z0-9]{40}\" 9 - name: REPLICA_COUNT description: Number of replicas to run value: \"2\" required: true message: \"... The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ...\" 10",
"kind: \"Template\" apiVersion: \"v1\" metadata: name: my-template objects: - kind: \"Service\" 1 apiVersion: \"v1\" metadata: name: \"cakephp-mysql-example\" annotations: description: \"Exposes and load balances the application pods\" spec: ports: - name: \"web\" port: 8080 targetPort: 8080 selector: name: \"cakephp-mysql-example\"",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: \"{.data['my\\\\.username']}\" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: \"{.data['password']}\" stringData: password: <password> - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: \"{.spec.clusterIP}:{.spec.ports[?(.name==\\\"web\\\")].port}\" spec: ports: - name: \"web\" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: \"http://{.spec.host}{.spec.path}\" spec: path: mypath",
"{ \"credentials\": { \"username\": \"foo\", \"password\": \"YmFy\", \"service_ip_port\": \"172.30.12.34:8080\", \"uri\": \"http://route-test.router.default.svc.cluster.local/mypath\" } }",
"\"template.alpha.openshift.io/wait-for-ready\": \"true\"",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: annotations: template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: Service apiVersion: v1 metadata: name: spec:",
"oc get -o yaml all > <yaml_filename>",
"sudo yum install -y postgresql postgresql-server postgresql-devel",
"sudo postgresql-setup initdb",
"sudo systemctl start postgresql.service",
"sudo -u postgres createuser -s rails",
"gem install rails",
"Successfully installed rails-4.3.0 1 gem installed",
"rails new rails-app --database=postgresql",
"cd rails-app",
"gem 'pg'",
"bundle install",
"default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password>",
"rake db:create",
"rails generate controller welcome index",
"root 'welcome#index'",
"rails server",
"<% user = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? \"root\" : ENV[\"POSTGRESQL_USER\"] %> <% password = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? ENV[\"POSTGRESQL_ADMIN_PASSWORD\"] : ENV[\"POSTGRESQL_PASSWORD\"] %> <% db_service = ENV.fetch(\"DATABASE_SERVICE_NAME\",\"\").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV[\"POSTGRESQL_MAX_CONNECTIONS\"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV[\"#{db_service}_SERVICE_HOST\"] %> port: <%= ENV[\"#{db_service}_SERVICE_PORT\"] %> database: <%= ENV[\"POSTGRESQL_DATABASE\"] %>",
"ls -1",
"app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor",
"git init",
"git add .",
"git commit -m \"initial commit\"",
"git remote add origin [email protected]:<namespace/repository-name>.git",
"git push",
"oc new-project rails-app --description=\"My Rails application\" --display-name=\"Rails Application\"",
"oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password",
"-e POSTGRESQL_ADMIN_PASSWORD=admin_pw",
"oc get pods --watch",
"oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql",
"oc get dc rails-app -o json",
"env\": [ { \"name\": \"POSTGRESQL_USER\", \"value\": \"username\" }, { \"name\": \"POSTGRESQL_PASSWORD\", \"value\": \"password\" }, { \"name\": \"POSTGRESQL_DATABASE\", \"value\": \"db_name\" }, { \"name\": \"DATABASE_SERVICE_NAME\", \"value\": \"postgresql\" } ],",
"oc logs -f build/rails-app-1",
"oc get pods",
"oc rsh <frontend_pod_id>",
"RAILS_ENV=production bundle exec rake db:migrate",
"oc expose service rails-app --hostname=www.example.com",
"podman pull registry.redhat.io/openshift4/ose-jenkins:<v4.3.0>",
"oc new-app -e JENKINS_PASSWORD=<password> openshift4/ose-jenkins",
"oc describe serviceaccount jenkins",
"Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp",
"oc describe secret <secret name from above>",
"Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA",
"pluginId:pluginVersion",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\\USD(JENKINS_SECRET) \\USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"oc new-app jenkins-persistent",
"oc new-app jenkins-ephemeral",
"oc describe jenkins-ephemeral",
"kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange",
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange",
"def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml \"\"\" apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base:latest args: ['\\USD(JENKINS_SECRET)', '\\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true \"\"\" } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container(\"java\") { sh \"mvn --version\" } } } } }",
"docker pull registry.redhat.io/openshift4/ose-jenkins:<v4.5.0>",
"docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-10-rhel7:<v4.5.0>",
"docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-12-rhel7:<v4.5.0>",
"docker pull registry.redhat.io/openshift4/ose-jenkins-agent-maven:<v4.5.0>",
"docker pull registry.redhat.io/openshift4/ose-jenkins-agent-base:<v4.5.0>",
"podTemplate(label: \"mypod\", cloud: \"openshift\", inheritFrom: \"maven\", podRetention: onFailure(), 1 containers: [ ]) { node(\"mypod\") { } }",
"podman inspect --format='{{ index .Config.Labels \"io.openshift.s2i.scripts-url\" }}' wildfly/wildfly-centos7",
"image:///usr/libexec/s2i",
"#!/bin/bash echo \"Before assembling\" /usr/libexec/s2i/assemble rc=USD? if [ USDrc -eq 0 ]; then echo \"After successful assembling\" else echo \"After failed assembling\" fi exit USDrc",
"#!/bin/bash echo \"Before running application\" exec /usr/libexec/s2i/run"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/images/index |
Chapter 5. Configuring secure connections | Chapter 5. Configuring secure connections Securing the connection between a Kafka cluster and a client application helps to ensure the confidentiality, integrity, and authenticity of the communication between the cluster and the client. To achieve a secure connection, you can introduce configuration related to authentication, encryption, and authorization: Authentication Use an authentication mechanism to verify the identity of a client application. Encryption Enable encryption of data in transit between the client and broker using SSL/TLS encryption. Authorization Control client access and operations allowed on Kafka brokers based on the authenticated identity of a client application. Authorization cannot be used without authentication. If authentication is not enabled, it's not possible to determine the identity of clients, and therefore, it's not possible to enforce authorization rules. This means that even if authorization rules are defined, they will not be enforced without authentication. In Streams for Apache Kafka, listeners are used to configure the network connections between the Kafka brokers and the clients. Listener configuration options determine how the brokers listen for incoming client connections and how secure access is managed. The exact configuration required depends on the authentication, encryption, and authorization mechanisms you have chosen. You configure your Kafka brokers and client applications to enable security features. The general outline to secure a client connection to a Kafka cluster is as follows: Install the Streams for Apache Kafka components, including the Kafka cluster. For TLS, generate TLS certificates for each broker and client application. Configure listeners in the broker configuration for secure connection. Configure the client application for secure connection. Configure your client application according to the mechanisms you are using to establish a secure and authenticated connection with the Kafka brokers. The authentication, encryption, and authorization used by a Kafka broker must match those used by a connecting client application. The client application and broker need to agree on the security protocols and configurations for secure communication to take place. For example, a Kafka client and the Kafka broker must use the same TLS versions and cipher suites. Note Mismatched security configurations between the client and broker can result in connection failures or potential security vulnerabilities. It's important to carefully configure and test both the broker and client application to ensure they are properly secured and able to communicate securely. 5.1. Setting up brokers for secure access Before you can configure client applications for secure access, you must first set up the brokers in your Kafka cluster to support the security mechanisms you want to use. To enable secure connections, you create listeners with the appropriate configuration for the security mechanisms. 5.1.1. Establishing a secure connection to a Kafka cluster running on RHEL When using Streams for Apache Kafka on RHEL, the general outline to secure a client connection to a Kafka cluster is as follows: Install the Streams for Apache Kafka components, including the Kafka cluster, on the RHEL server. For TLS, generate TLS certificates for all brokers in the Kafka cluster. Configure listeners in the broker configuration properties file. 
Configure authentication for your Kafka cluster listeners, such as TLS or SASL SCRAM-SHA-512. Configure authorization for all enabled listeners on the Kafka cluster, such as simple authorization. For TLS, generate TLS certificates for each client application. Create a config.properties file to specify the connection details and authentication credentials used by the client application. Start the Kafka client application and connect to the Kafka cluster. Use the properties defined in the config.properties file to connect to the Kafka broker. Verify that the client can successfully connect to the Kafka cluster and consume and produce messages securely.
Additional resources
For more information on setting up your brokers, see the following guides: Using Streams for Apache Kafka on RHEL in KRaft mode and Using Streams for Apache Kafka on RHEL with ZooKeeper.
5.1.2. Configuring secure listeners for a Kafka cluster on RHEL
Use a configuration properties file to configure listeners in Kafka. To configure a secure connection for Kafka brokers, you set the relevant properties for TLS, SASL, and other security-related configurations in this file.
Here is an example configuration of a TLS listener specified in a server.properties configuration file for a Kafka broker, with a keystore and truststore in PKCS#12 format:
Example listener configuration in server.properties
listeners = listener_1://0.0.0.0:9093, listener_2://0.0.0.0:9094
listener.security.protocol.map = listener_1:SSL, listener_2:PLAINTEXT
ssl.keystore.type = PKCS12
ssl.keystore.location = /path/to/keystore.p12
ssl.keystore.password = <password>
ssl.truststore.type = PKCS12
ssl.truststore.location = /path/to/truststore.p12
ssl.truststore.password = <password>
ssl.client.auth = required
authorizer.class.name = kafka.security.auth.SimpleAclAuthorizer
super.users = User:superuser
The listeners property specifies each listener name, and the IP address and port that the broker listens on. The protocol map tells the listener_1 listener to use the SSL protocol for clients that use TLS encryption. listener_2 provides PLAINTEXT connections for clients that do not use TLS encryption. The keystore contains the broker's private key and certificate. The truststore contains the trusted certificates used to verify the identity of the client application. The ssl.client.auth property enforces client authentication. The Kafka cluster uses simple authorization. The authorizer is set to SimpleAclAuthorizer. A single super user is defined for unconstrained access on all listeners. Streams for Apache Kafka supports the Kafka SimpleAclAuthorizer and custom authorizer plugins. If you prefix the configuration properties with listener.name.<name_of_listener>, the configuration is specific to that listener.
This is just a sample configuration. Some configuration options are specific to the type of listener. If you are using OAuth 2.0 or Open Policy Agent (OPA), you must also configure access to the authorization server or OPA server in a specific listener. You can create listeners based on your specific requirements and environment. For more information on listener configuration, see the Apache Kafka documentation.
Using ACLs to fine-tune access
You can use Access Control Lists (ACLs) to fine-tune access to the Kafka cluster. To create and manage Access Control Lists (ACLs), use the kafka-acls.sh command line tool. The ACLs apply access rules to client applications.
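Before adding new rules, you can also use kafka-acls.sh to list the ACLs that are already applied to a resource. The command below is a minimal sketch, assuming the ZooKeeper-backed SimpleAclAuthorizer shown in the server.properties example above and a topic named my-topic; the --list option only prints the current rules and does not change them:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic my-topic
Running the same command without --topic lists the ACLs for all resources, which can be useful when auditing the rules applied across the cluster.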
In the following example, the first ACL grants read and describe permissions for a specific topic named my-topic. The resource pattern type is set to literal, which means that the resource name must match exactly. The second ACL grants read permissions for a specific consumer group named my-group. The resource pattern type is set to prefixed, which means that the resource name must match the prefix.
Example ACL configuration
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add \
  --allow-principal User:my-user --operation Read --operation Describe --topic my-topic --resource-pattern-type literal \
  --allow-principal User:my-user --operation Read --group my-group --resource-pattern-type prefixed
5.1.3. Establishing a secure connection to a Kafka cluster running on OpenShift
When using Streams for Apache Kafka on OpenShift, the general outline to secure a client connection to a Kafka cluster is as follows: Use the Cluster Operator to deploy a Kafka cluster in your OpenShift environment. Use the Kafka custom resource to configure and install the cluster and create listeners. Configure authentication for the listeners, such as TLS or SASL SCRAM-SHA-512. The Cluster Operator creates a secret that contains a cluster CA certificate to verify the identity of the Kafka brokers. Configure authorization for all enabled listeners, such as simple authorization. Use the User Operator to create a Kafka user representing your client. Use the KafkaUser custom resource to configure and create the user. Configure authentication for your Kafka user (client) that matches the authentication mechanism of a listener. The User Operator creates a secret that contains a client certificate and private key for the client to use for authentication with the Kafka cluster. Configure authorization for your Kafka user (client) that matches the authorization mechanism of the listener. Authorization rules allow specific operations on the Kafka cluster. Create a config.properties file to specify the connection details and authentication credentials required by the client application to connect to the cluster. Start the Kafka client application and connect to the Kafka cluster. Use the properties defined in the config.properties file to connect to the Kafka broker. Verify that the client can successfully connect to the Kafka cluster and consume and produce messages securely.
Additional resources
For more information on setting up your brokers, see Deploying and Managing Streams for Apache Kafka on OpenShift.
5.1.4. Configuring secure listeners for a Kafka cluster on OpenShift
When you deploy a Kafka custom resource with Streams for Apache Kafka, you add listener configuration to the Kafka spec. Use the listener configuration to secure connections in Kafka. To configure a secure connection for Kafka brokers, set the relevant properties for TLS, SASL, and other security-related configurations at the listener level.
External listeners provide client access to a Kafka cluster from outside the OpenShift cluster. Streams for Apache Kafka creates listener services and bootstrap addresses to enable access to the Kafka cluster based on the configuration. For example, you can create external listeners that use the following connection mechanisms: node ports, loadbalancers, and OpenShift routes.
Here is an example configuration of a Kafka resource with an external route listener:
Example listener configuration in the Kafka resource
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      - name: plaintext
        port: 9092
        type: internal
        tls: false
        configuration:
          useServiceDnsDomain: true
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: tls
      - name: external
        port: 9094
        type: route
        tls: true
        authentication:
          type: tls
    authorization:
      type: simple
      superUsers:
        - CN=superuser
    # ...
The listeners property is configured with three listeners: plaintext, tls, and external. The external listener is of type route, and it uses TLS for both encryption and authentication. When you create the Kafka cluster with the Cluster Operator, CA certificates are automatically generated. You add the cluster CA certificate to the truststore of your client application to verify the identity of the Kafka brokers. Alternatively, you can configure Streams for Apache Kafka to use your own certificates at the broker or listener level. Using certificates at the listener level might be required when client applications require different security configurations. Using certificates at the listener level also adds an additional layer of control and security.
Tip Use configuration provider plugins to load configuration data to producer and consumer clients. The configuration provider plugin loads configuration data from secrets or ConfigMaps. For example, you can tell the provider to automatically get certificates from Strimzi secrets. For more information, see the Streams for Apache Kafka documentation for running on OpenShift.
The Kafka cluster uses simple authorization. The authorization property type is set to simple. A single super user is defined for unconstrained access on all listeners. Streams for Apache Kafka supports the Kafka SimpleAclAuthorizer and custom authorizer plugins.
This is just a sample configuration. Some configuration options are specific to the type of listener. If you are using OAuth 2.0 or Open Policy Agent (OPA), you must also configure access to the authorization server or OPA server in a specific listener. You can create listeners based on your specific requirements and environment. For more information on listener configuration, see the GenericKafkaListener schema reference.
Note When using a route type listener for client access to a Kafka cluster on OpenShift, the TLS passthrough feature is enabled. An OpenShift route is designed to work with the HTTP protocol, but it can also be used to proxy network traffic for other protocols, including the Kafka protocol used by Apache Kafka. The client establishes a connection to the route, and the route forwards the traffic to the broker running in the OpenShift cluster using the TLS Server Name Indication (SNI) extension to get the target hostname. The SNI extension allows the route to correctly identify the target broker for each connection.
Using ACLs to fine-tune access
You can use Access Control Lists (ACLs) to fine-tune access to the Kafka cluster. To add Access Control Lists (ACLs), you configure the KafkaUser custom resource. When you create a KafkaUser, Streams for Apache Kafka automatically manages the creation and update of the ACLs. The ACLs apply access rules to client applications.
In the following example, the first ACL grants read and describe permissions for a specific topic named my-topic. The resource.patternType is set to literal, which means that the resource name must match exactly. The second ACL grants read permissions for a specific consumer group named my-group. The resource.patternType is set to prefix, which means that the resource name must match the prefix.
Example ACL configuration in the KafkaUser resource
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # ...
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operations:
          - Read
          - Describe
      - resource:
          type: group
          name: my-group
          patternType: prefix
        operations:
          - Read
Note If you specify tls-external as an authentication option when configuring the Kafka user, you can use your own client certificates rather than those generated by the User Operator.
5.2. Setting up clients for secure access
After you have set up listeners on your Kafka brokers to support secure connections, the next step is to configure your client applications to use these listeners to communicate with the Kafka cluster. This involves providing the appropriate security settings for each client to authenticate with the cluster based on the security mechanisms configured on the listener.
5.2.1. Configuring security protocols
Configure the security protocol used by your client application to match the protocol configured on a Kafka broker listener. For example, use SSL (Secure Sockets Layer) for TLS authentication or SASL_SSL for SASL (Simple Authentication and Security Layer over SSL) authentication with TLS encryption. Add a truststore and keystore to your client configuration that supports the authentication mechanism required to access the Kafka cluster.
Truststore The truststore contains the public certificates of the trusted certificate authority (CA) that are used to verify the authenticity of a Kafka broker. When the client connects to a secure Kafka broker, it might need to verify the identity of the broker.
Keystore The keystore contains the client's private key and its public certificate. When the client wants to authenticate itself to the broker, it presents its own certificate.
If you are using TLS authentication, your Kafka client configuration requires a truststore and keystore to connect to a Kafka cluster. If you are using SASL SCRAM-SHA-512, authentication is performed through the exchange of username and password credentials, rather than digital certificates, so a keystore is not required. SCRAM-SHA-512 is a more lightweight mechanism, but it is not as secure as using certificate-based authentication.
Note If you have your own certificate infrastructure in place and use certificates from a third-party CA, then the client's default truststore will likely already contain the public CA certificates and you do not need to add them to the client's truststore. The client automatically trusts the server's certificate if it is signed by one of the public CA certificates that is already included in the default truststore.
You can create a config.properties file to specify the authentication credentials used by the client application. In the following example, the security.protocol is set to SSL to enable TLS authentication and encryption between the client and broker. The ssl.truststore.location and ssl.truststore.password properties specify the location and password of the truststore. The ssl.keystore.location and ssl.keystore.password properties specify the location and password of the keystore. The PKCS #12 (Public-Key Cryptography Standards #12) file format is used. You can also use the base64-encoded PEM (Privacy Enhanced Mail) format.
Example client configuration properties for TLS authentication bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SSL ssl.truststore.location = /path/to/ca.p12 ssl.truststore.password = truststore-password ssl.keystore.location = /path/to/user.p12 ssl.keystore.password = keystore-password client.id = my-client In the following example, the security.protocol is set to SASL_SSL to enable SASL authentication with TLS encryption between the client and broker. If you only need authentication and not encryption, you can use the SASL_PLAINTEXT protocol. The specified SASL mechanism for authentication is SCRAM-SHA-512. Different authentication mechanisms can be used. The sasl.jaas.config property specifies the authentication credentials. Example client configuration properties for SCRAM-SHA-512 authentication bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SASL_SSL sasl.mechanism = SCRAM-SHA-512 sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \ username = "user" \ password = "secret"; ssl.truststore.location = path/to/truststore.p12 ssl.truststore.password = truststore_password ssl.truststore.type = PKCS12 client.id = my-client Note For applications that do not support PEM format, you can use a tool like OpenSSL to convert PEM files to PKCS #12 format. 5.2.2. Configuring permitted TLS versions and cipher suites You can incorporate SSL configuration and cipher suites to further secure TLS-based communication between your client application and a Kafka cluster. Specify the supported TLS versions and cipher suites in the configuration for the Kafka broker. You can also add the configuration to your clients if you wish to limit the TLS versions and cipher suites they use. The configuration on the client should only use protocols and cipher suites that are enabled on the brokers. In the following example, SSL is enabled using security.protocol for communication between Kafka brokers and client applications. The ssl.cipher.suites property is a comma-separated list of the cipher suites that are allowed to be used. Example SSL configuration properties for Kafka brokers security.protocol: "SSL" ssl.enabled.protocols: "TLSv1.3", "TLSv1.2" ssl.protocol: "TLSv1.3" ssl.cipher.suites: "TLS_AES_256_GCM_SHA384" The ssl.enabled.protocols property specifies the available TLS versions that can be used for secure communication between the cluster and its clients. In this case, both TLSv1.3 and TLSv1.2 are enabled. The ssl.protocol property sets the default TLS version for all connections, and it must be chosen from the enabled protocols. By default, clients communicate using TLSv1.3. If a client only supports TLSv1.2, it can still connect to the broker and communicate using that supported version. Similarly, if the configuration is on the client and the broker only supports TLSv1.2, the client uses the supported version. The cipher suites supported by Apache Kafka depend on the version of Kafka you are using and the underlying environment. Check for the latest supported cipher suites that provide the highest level of security. 5.2.3. Using Access Control Lists (ACLs) You do not have to configure anything explicitly for ACLs in your client application. The ACLs are enforced on the server side by the Kafka broker. When the client sends a request to the server to produce or consume data, the server checks the ACLs to determine if the client (user) is authorized to perform the requested operation.
If the client is authorized, the request is processed; otherwise, the request is denied and an error is returned. However, the client must still be authenticated and use the appropriate security protocol to enable a secure connection with the Kafka cluster. If you are using Access Control Lists (ACLs) on your Kafka brokers, make sure that ACLs are properly set up to restrict client access to the topics and operations that you want to control. If you are using Open Policy Agent (OPA) policies to manage access, authorization rules are configured in the policies, so you do not need to specify ACLs against the Kafka brokers. OAuth 2.0 gives some flexibility: you can use the OAuth 2.0 provider to manage ACLs, or use OAuth 2.0 and Kafka's simple authorization to manage the ACLs. Note ACLs apply to most types of requests and are not limited to produce and consume operations. For example, ACLs can be applied to read operations like describing topics or write operations like creating new topics. 5.2.4. Using OAuth 2.0 for token-based access Use the OAuth 2.0 open standard for authorization with Streams for Apache Kafka to enforce authorization controls through an OAuth 2.0 provider. OAuth 2.0 provides a secure way for applications to access user data stored in other systems. An authorization server can issue access tokens to client applications that grant access to a Kafka cluster. The following steps describe the general approach to set up and use OAuth 2.0 for token validation: Configure the authorization server with broker and client credentials, such as a client ID and secret. Obtain the OAuth 2.0 credentials from the authorization server. Configure listeners on the Kafka brokers with OAuth 2.0 credentials and to interact with the authorization server. Add the OAuth 2.0 dependency to the client library. Configure your Kafka client with OAuth 2.0 credentials and to interact with the authorization server. Obtain an access token at runtime, which authenticates the client with the OAuth 2.0 provider. If you have a listener configured for OAuth 2.0 on your Kafka broker, you can set up your client application to use OAuth 2.0. In addition to the standard Kafka client configurations to access the Kafka cluster, you must include specific configurations for OAuth 2.0 authentication. You must also make sure that the authorization server you are using is accessible by the Kafka cluster and client application. Specify a SASL (Simple Authentication and Security Layer) security protocol and mechanism. In a production environment, the following settings are recommended: The SASL_SSL protocol for TLS encrypted connections. The OAUTHBEARER mechanism for credentials exchange using a bearer token. A JAAS (Java Authentication and Authorization Service) module implements the SASL mechanism. The configuration for the mechanism depends on the authentication method you are using. For example, using credentials exchange you add an OAuth 2.0 access token endpoint, access token, client ID, and client secret. A client connects to the token endpoint (URL) of the authorization server to check if a token is still valid. You also need a truststore that contains the public key certificate of the authorization server for authenticated access. Example client configuration properties for OAuth 2.0 bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SASL_SSL sasl.mechanism = OAUTHBEARER # ...
sasl.jaas.config = org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri = "https://localhost:9443/oauth2/token" \ oauth.access.token = <access_token> \ oauth.client.id = "<client_id>" \ oauth.client.secret = "<client_secret>" \ oauth.ssl.truststore.location = "/<truststore_location>/oauth-truststore.p12" \ oauth.ssl.truststore.password = "<truststore_password>" \ oauth.ssl.truststore.type = "PKCS12" \ Additional resources For more information on setting up your brokers to use OAuth 2.0, see the following guides: Deploying and Upgrading Streams for Apache Kafka on OpenShift Using Streams for Apache Kafka on RHEL in KRaft mode Using Streams for Apache Kafka on RHEL with ZooKeeper 5.2.5. Using Open Policy Agent (OPA) access policies Use the Open Policy Agent (OPA) policy agent with Streams for Apache Kafka to evaluate requests to connect to your Kafka cluster against access policies. Open Policy Agent (OPA) is a policy engine that manages authorization policies. Policies centralize access control, and can be updated dynamically, without requiring changes to the client application. For example, you can create a policy that allows only certain users (clients) to produce and consume messages to a specific topic. Streams for Apache Kafka uses the Open Policy Agent plugin for Kafka authorization as the authorizer. The following steps describe the general approach to set up and use OPA: Set up an instance of the OPA server. Define policies that provide the authorization rules that govern access to the Kafka cluster. Create configuration for the Kafka brokers to accept OPA authorization and interact with the OPA server. Configure your Kafka client to provide the credentials for authorized access to the Kafka cluster. If you have a listener configured for OPA on your Kafka broker, you can set up your client application to use OPA. In the listener configuration, you specify a URL to connect to the OPA server and authorize your client application. In addition to the standard Kafka client configurations to access the Kafka cluster, you must add the credentials to authenticate with the Kafka broker. The broker checks if the client has the necessary authorization to perform a requested operation, by sending a request to the OPA server to evaluate the authorization policy. You don't need a truststore or keystore to secure communication as the policy engine enforces authorization policies. Example client configuration properties for OPA authorization bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SASL_SSL sasl.mechanism = SCRAM-SHA-512 sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \ username = "user" \ password = "secret"; # ... Note Red Hat does not support the OPA server. Additional resources For more information on setting up your brokers to use OPA, see the following guides: Deploying and Upgrading Streams for Apache Kafka on OpenShift Using Streams for Apache Kafka on RHEL in KRaft mode Using Streams for Apache Kafka on RHEL with ZooKeeper 5.2.6. Using transactions when streaming messages By configuring transaction properties in your brokers and producer client application, you can ensure that messages are processed in a single transaction. Transactions add reliability and consistency to the streaming of messages. Transactions are always enabled on brokers. 
You can change the default configuration using the following properties: Example Kafka broker configuration properties for transactions transaction.state.log.replication.factor = 3 transaction.state.log.min.isr = 2 transaction.abort.timed.out.transaction.cleanup.interval.ms = 3600000 This is a typical configuration for a production environment, which creates 3 replicas for the internal __transaction_state topic. The __transaction_state topic stores information about the transactions in progress. A minimum of 2 in-sync replicas are required for the transaction logs. The cleanup interval is the time between checks for timed-out transactions and the cleanup of the corresponding transaction logs. To add transaction properties to a client configuration, you set the following properties for producers and consumers. Example producer client configuration properties for transactions transactional.id = unique-transactional-id enable.idempotence = true max.in.flight.requests.per.connection = 5 acks = all retries=2147483647 transaction.timeout.ms = 30000 delivery.timeout = 25000 The transactional ID allows the Kafka broker to keep track of the transactions. It is a unique identifier for the producer and should be used with a specific set of partitions. If you need to perform transactions for multiple sets of partitions, you need to use a different transactional ID for each set. Idempotence is enabled to avoid the producer instance creating duplicate messages. With idempotence, messages are tracked using a producer ID and sequence number. When the broker receives the message, it checks the producer ID and sequence number. If a message with the same producer ID and sequence number has already been received, the broker discards the duplicate message. The maximum number of in-flight requests is set to 5 so that transactions are processed in the order they are sent. A partition can have up to 5 in-flight requests without compromising the ordering of messages. By setting acks to all, the producer waits for acknowledgments from all in-sync replicas of the topic partitions to which it is writing before considering the transaction as complete. This ensures that the messages are durably written (committed) to the Kafka cluster, and that they will not be lost even in the event of a broker failure. The transaction timeout specifies the maximum amount of time the client has to complete a transaction before it times out. The delivery timeout specifies the maximum amount of time the producer waits for a broker acknowledgement of message delivery before it times out. To ensure that messages are delivered within the transaction period, set the delivery timeout to be less than the transaction timeout. Consider network latency and message throughput, and allow for temporary failures, when specifying retries for the number of attempts to resend a failed message request. Example consumer client configuration properties for transactions group.id = my-group-id isolation.level = read_committed enable.auto.commit = false The read_committed isolation level specifies that the consumer only reads messages for a transaction that has completed successfully. The consumer does not process any messages that are part of an ongoing or failed transaction. This ensures that the consumer only reads messages that are part of a fully complete transaction. When using transactions to stream messages, it is important to set enable.auto.commit to false.
If set to true, the consumer periodically commits offsets without regard to transactions. This means that the consumer might commit offsets for messages before a transaction has fully completed. By setting enable.auto.commit to false, the consumer only reads and commits messages that have been fully written and committed to the topic as part of a transaction.
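The producer and consumer settings described above map directly onto the Kafka Java client API. The following sketch is a minimal illustration rather than a complete application: the bootstrap address, topic name, and transactional ID are taken from the examples above, the TLS and authentication settings from Section 5.2.1 are omitted for brevity, and error handling is reduced to a simple abort.
Example Java producer and consumer using transactions (illustrative sketch)
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalClients {

    public static void main(String[] args) {
        // Producer configured as in the example properties above (security settings omitted).
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9093");
        producerProps.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "unique-transactional-id");
        producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        producerProps.put(ProducerConfig.ACKS_CONFIG, "all");
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.initTransactions();              // registers the transactional ID with the broker
            try {
                producer.beginTransaction();
                producer.send(new ProducerRecord<>("my-topic", "key", "value"));
                producer.commitTransaction();         // messages become visible to read_committed consumers
            } catch (Exception e) {
                producer.abortTransaction();          // aborted messages are never read by read_committed consumers
                throw e;
            }
        }

        // Consumer configured with read_committed isolation and manual offset commits.
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9093");
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group-id");
        consumerProps.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("my-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("Received %s from partition %d%n", record.value(), record.partition());
            }
            consumer.commitSync();                    // commit offsets only after processing
        }
    }
}
For a consume-process-produce pipeline, the producer's sendOffsetsToTransaction method can be used to commit the consumer offsets as part of the same transaction, so that processing and offset commits succeed or fail together.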
"listeners = listener_1://0.0.0.0:9093, listener_2://0.0.0.0:9094 listener.security.protocol.map = listener_1:SSL, listener_2:PLAINTEXT ssl.keystore.type = PKCS12 ssl.keystore.location = /path/to/keystore.p12 ssl.keystore.password = <password> ssl.truststore.type = PKCS12 ssl.truststore.location = /path/to/truststore.p12 ssl.truststore.password = <password> ssl.client.auth = required authorizer.class.name = kafka.security.auth.SimpleAclAuthorizer. super.users = User:superuser",
"bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:my-user --operation Read --operation Describe --topic my-topic --resource-pattern-type literal --allow-principal User:my-user --operation Read --group my-group --resource-pattern-type prefixed",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: plaintext port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external port: 9094 type: route tls: true authentication: type: tls authorization: type: simple superUsers: - CN=superuser #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # authorization: type: simple acls: - resource: type: topic name: my-topic patternType: literal operations: - Read - Describe - resource: type: group name: my-group patternType: prefix operations: - Read",
"bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SSL ssl.truststore.location = /path/to/ca.p12 ssl.truststore.password = truststore-password ssl.keystore.location = /path/to/user.p12 ssl.keystore.password = keystore-password client.id = my-client",
"bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SASL_SSL sasl.mechanism = SCRAM-SHA-512 sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required username = \"user\" password = \"secret\"; ssl.truststore.location = path/to/truststore.p12 ssl.truststore.password = truststore_password ssl.truststore.type = PKCS12 client.id = my-client",
"security.protocol: \"SSL\" ssl.enabled.protocols: \"TLSv1.3\", \"TLSv1.2\" ssl.protocol: \"TLSv1.3\" ssl.cipher.suites: \"TLS_AES_256_GCM_SHA384\"",
"bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SASL_SSL sasl.mechanism = OAUTHBEARER sasl.jaas.config = org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri = \"https://localhost:9443/oauth2/token\" oauth.access.token = <access_token> oauth.client.id = \"<client_id>\" oauth.client.secret = \"<client_secret>\" oauth.ssl.truststore.location = \"/<truststore_location>/oauth-truststore.p12\" oauth.ssl.truststore.password = \"<truststore_password>\" oauth.ssl.truststore.type = \"PKCS12\" \\",
"bootstrap.servers = my-cluster-kafka-bootstrap:9093 security.protocol = SASL_SSL sasl.mechanism = SCRAM-SHA-512 sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required username = \"user\" password = \"secret\";",
"transaction.state.log.replication.factor = 3 transaction.state.log.min.isr = 2 transaction.abort.timed.out.transaction.cleanup.interval.ms = 3600000",
"transactional.id = unique-transactional-id enable.idempotence = true max.in.flight.requests.per.connection = 5 acks = all retries=2147483647 transaction.timeout.ms = 30000 delivery.timeout = 25000",
"group.id = my-group-id isolation.level = read_committed enable.auto.commit = false"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/developing_kafka_client_applications/assembly-kafka-secure-config-str |
probe::nfsd.proc.read | probe::nfsd.proc.read Name probe::nfsd.proc.read - NFS server reading file for client Synopsis nfsd.proc.read Values size read bytes vec struct kvec, includes buf address in kernel address and length of each buffer version nfs version uid requester's user id count read bytes client_ip the ip address of client proto transfer protocol offset the offset of file gid requester's group id vlen read blocks fh file handle (the first part is the length of the file handle) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfsd-proc-read |
Chapter 56. orchestration | Chapter 56. orchestration This chapter describes the commands under the orchestration command. 56.1. orchestration build info Retrieve build information. Usage: Table 56.1. Command arguments Value Summary -h, --help Show this help message and exit Table 56.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 56.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 56.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 56.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 56.2. orchestration resource type list List resource types. Usage: Table 56.6. Command arguments Value Summary -h, --help Show this help message and exit --filter <key=value> Filter parameters to apply on returned resource types. This can be specified multiple times. It can be any of name, version or support_status --long Show resource types with corresponding description of each resource type. Table 56.7. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 56.8. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 56.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 56.10. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 56.3. orchestration resource type show Show details and optionally generate a template for a resource type. Usage: Table 56.11. Positional arguments Value Summary <resource-type> Resource type to show details for Table 56.12. Command arguments Value Summary -h, --help Show this help message and exit --template-type <template-type> Optional template type to generate, hot or cfn --long Show resource type with corresponding description. Table 56.13. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 56.14. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 56.15. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 56.16. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 56.4. orchestration service list List the Heat engines. Usage: Table 56.17. Command arguments Value Summary -h, --help Show this help message and exit Table 56.18. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 56.19. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 56.20. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 56.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 56.5. orchestration template function list List the available functions. Usage: Table 56.22. Positional arguments Value Summary <template-version> Template version to get the functions for Table 56.23. Command arguments Value Summary -h, --help Show this help message and exit --with_conditions Show condition functions for template. Table 56.24. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 56.25. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 56.26. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 56.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 56.6. orchestration template validate Validate a template Usage: Table 56.28. 
Command arguments Value Summary -h, --help Show this help message and exit -e <environment>, --environment <environment> Path to the environment. can be specified multiple times --show-nested Resolve parameters from nested templates as well --parameter <key=value> Parameter values used to create the stack. this can be specified multiple times -s <files-container>, --files-container <files-container> Swift files container name. local files other than root template would be ignored. If other files are not found in swift, heat engine would raise an error. --ignore-errors <error1,error2,... > List of heat errors to ignore -t <template>, --template <template> Path to the template Table 56.29. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 56.30. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 56.31. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 56.32. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 56.7. orchestration template version list List the available template versions. Usage: Table 56.33. Command arguments Value Summary -h, --help Show this help message and exit Table 56.34. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 56.35. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 56.36. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 56.37. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack orchestration build info [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]",
"openstack orchestration resource type list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--filter <key=value>] [--long]",
"openstack orchestration resource type show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--template-type <template-type>] [--long] <resource-type>",
"openstack orchestration service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]",
"openstack orchestration template function list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--with_conditions] <template-version>",
"openstack orchestration template validate [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-e <environment>] [--show-nested] [--parameter <key=value>] [-s <files-container>] [--ignore-errors <error1,error2,...>] -t <template>",
"openstack orchestration template version list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/orchestration |
probe::netdev.transmit | probe::netdev.transmit Name probe::netdev.transmit - Network device transmitting buffer Synopsis netdev.transmit Values protocol The protocol of this packet(defined in include/linux/if_ether.h). length The length of the transmit buffer. truesize The size of the data to be transmitted. dev_name The name of the device. e.g: eth0, ath1. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-netdev-transmit |
Chapter 96. ElSQL Component | Chapter 96. ElSQL Component Available as of Camel version 2.16 The elsql: component is an extension to the existing SQL Component that uses ElSql to define the SQL queries. This component uses spring-jdbc behind the scenes for the actual SQL handling. This component can be used as a Transactional Client . Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-elsql</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> The SQL component uses the following endpoint URI notation: sql:elSqlName:resourceUri[?options] You can append query options to the URI in the following format, ?option=value&option=value&... The parameters to the SQL queries are named parameters in the elsql mapping files, and maps to corresponding keys from the Camel message, in the given precedence: Camel 2.16.1: from message body if Simple expression. from message body if its a `java.util.Map`3. from message headers If a named parameter cannot be resolved, then an exception is thrown. 96.1. Options The ElSQL component supports 5 options, which are listed below. Name Description Default Type databaseVendor (common) To use a vendor specific com.opengamma.elsql.ElSqlConfig ElSqlDatabaseVendor dataSource (common) Sets the DataSource to use to communicate with the database. DataSource elSqlConfig (advanced) To use a specific configured ElSqlConfig. It may be better to use the databaseVendor option instead. ElSqlConfig resourceUri (common) The resource file which contains the elsql SQL statements to use. You can specify multiple resources separated by comma. The resources are loaded on the classpath by default, you can prefix with file: to load from file system. Notice you can set this option on the component and then you do not have to configure this on the endpoint. String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The ElSQL endpoint is configured using URI syntax: with the following path and query parameters: 96.1.1. Path Parameters (2 parameters): Name Description Default Type elsqlName Required The name of the elsql to use (is NAMED in the elsql file) String resourceUri The resource file which contains the elsql SQL statements to use. You can specify multiple resources separated by comma. The resources are loaded on the classpath by default, you can prefix with file: to load from file system. Notice you can set this option on the component and then you do not have to configure this on the endpoint. String 96.1.2. Query Parameters (47 parameters): Name Description Default Type allowNamedParameters (common) Whether to allow using named parameters in the queries. true boolean databaseVendor (common) To use a vendor specific com.opengamma.elsql.ElSqlConfig ElSqlDatabaseVendor dataSource (common) Sets the DataSource to use to communicate with the database. DataSource dataSourceRef (common) Deprecated Sets the reference to a DataSource to lookup from the registry, to use for communicating with the database. String outputClass (common) Specify the full package and class name to use as conversion when outputType=SelectOne. String outputHeader (common) Store the query result in a header instead of the message body. 
By default, outputHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If outputHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. String outputType (common) Make the output of consumer or producer to SelectList as List of Map, or SelectOne as single Java object in the following way:a) If the query has only single column, then that JDBC Column object is returned. (such as SELECT COUNT( ) FROM PROJECT will return a Long object.b) If the query has more than one column, then it will return a Map of that result.c) If the outputClass is set, then it will convert the query result into an Java bean object by calling all the setters that match the column names.It will assume your class has a default constructor to create instance with.d) If the query resulted in more than one rows, it throws an non-unique result exception.StreamList streams the result of the query using an Iterator. This can be used with the Splitter EIP in streaming mode to process the ResultSet in streaming fashion. SelectList SqlOutputType separator (common) The separator to use when parameter values is taken from message body (if the body is a String type), to be inserted at # placeholders.Notice if you use named parameters, then a Map type is used instead. The default value is comma , char breakBatchOnConsumeFail (consumer) Sets whether to break batch if onConsume failed. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean expectedUpdateCount (consumer) Sets an expected update count to validate when using onConsume. -1 int maxMessagesPerPoll (consumer) Sets the maximum number of messages to poll int onConsume (consumer) After processing each row then this query can be executed, if the Exchange was processed successfully, for example to mark the row as processed. The query can have parameter. String onConsumeBatchComplete (consumer) After processing the entire batch, this query can be executed to bulk update rows etc. The query cannot have parameters. String onConsumeFailed (consumer) After processing each row then this query can be executed, if the Exchange failed, for example to mark the row as failed. The query can have parameter. String routeEmptyResultSet (consumer) Sets whether empty resultset should be allowed to be sent to the hop. Defaults to false. So the empty resultset will be filtered out. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean transacted (consumer) Enables or disables transaction. If enabled then if processing an exchange failed then the consumerbreak out processing any further exchanges to cause a rollback eager. false boolean useIterator (consumer) Sets how resultset should be delivered to route. Indicates delivery as either a list or individual object. defaults to true. true boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. 
Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy processingStrategy (consumer) Allows to plugin to use a custom org.apache.camel.component.sql.SqlProcessingStrategy to execute queries when the consumer has processed the rows/batch. SqlProcessingStrategy batch (producer) Enables or disables batch mode false boolean noop (producer) If set, will ignore the results of the SQL query and use the existing IN message as the OUT message for the continuation of processing false boolean useMessageBodyForSql (producer) Whether to use the message body as the SQL and then headers for parameters. If this option is enabled then the SQL in the uri is not used. false boolean alwaysPopulateStatement (advanced) If enabled then the populateStatement method from org.apache.camel.component.sql.SqlPrepareStatementStrategy is always invoked, also if there is no expected parameters to be prepared. When this is false then the populateStatement is only invoked if there is 1 or more expected parameters to be set; for example this avoids reading the message body/headers for SQL queries with no parameters. false boolean elSqlConfig (advanced) To use a specific configured ElSqlConfig. It may be better to use the databaseVendor option instead. ElSqlConfig parametersCount (advanced) If set greater than zero, then Camel will use this count value of parameters to replace instead of querying via JDBC metadata API. This is useful if the JDBC vendor could not return correct parameters count, then user may override instead. int placeholder (advanced) Specifies a character that will be replaced to in SQL query. Notice, that it is simple String.replaceAll() operation and no SQL parsing is involved (quoted strings will also change). # String prepareStatementStrategy (advanced) Allows to plugin to use a custom org.apache.camel.component.sql.SqlPrepareStatementStrategy to control preparation of the query and prepared statement. SqlPrepareStatement Strategy synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean templateOptions (advanced) Configures the Spring JdbcTemplate with the key/values from the Map Map usePlaceholder (advanced) Sets whether to use placeholder and replace all placeholder characters with sign in the SQL queries. true boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 96.2. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.elsql.data-source Sets the DataSource to use to communicate with the database. The option is a javax.sql.DataSource type. String camel.component.elsql.database-vendor To use a vendor specific com.opengamma.elsql.ElSqlConfig ElSqlDatabaseVendor camel.component.elsql.el-sql-config To use a specific configured ElSqlConfig. It may be better to use the databaseVendor option instead. The option is a com.opengamma.elsql.ElSqlConfig type. String camel.component.elsql.enabled Enable elsql component true Boolean camel.component.elsql.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.elsql.resource-uri The resource file which contains the elsql SQL statements to use. You can specify multiple resources separated by comma. The resources are loaded on the classpath by default, you can prefix with file: to load from file system. Notice you can set this option on the component and then you do not have to configure this on the endpoint. String 96.3. Result of the query For select operations, the result is an instance of List<Map<String, Object>> type, as returned by the JdbcTemplate.queryForList() method. For update operations, the result is the number of updated rows, returned as an Integer . By default, the result is placed in the message body. If the outputHeader parameter is set, the result is placed in the header. This is an alternative to using a full message enrichment pattern to add headers, it provides a concise syntax for querying a sequence or some other small value into a header. It is convenient to use outputHeader and outputType together: 96.4. 
Header values When performing update operations, the SQL Component stores the update count in the following message headers: Header Description CamelSqlUpdateCount The number of rows updated for update operations, returned as an Integer object. CamelSqlRowCount The number of rows returned for select operations, returned as an Integer object. 96.4.1. Sample In the route below, we want to get all the projects from the projects table. Notice the SQL query has 2 named parameters, :lic and :min. Camel will then look up these parameters in the message body or message headers. Notice in the example we set two headers with constant values for the named parameters: from("direct:projects") .setHeader("lic", constant("ASF")) .setHeader("min", constant(123)) .to("elsql:projects:com/foo/orders.elsql") And the elsql mapping file @NAME(projects) SELECT * FROM projects WHERE license = :lic AND id > :min ORDER BY id Though if the message body is a java.util.Map, then the named parameters will be taken from the body. from("direct:projects") .to("elsql:projects:com/foo/orders.elsql") 96.5. Using expression parameters in producers From Camel 2.16.1 onwards you can use Simple expressions as well, which allows you to use an OGNL-like notation on the message body, which is assumed to have getLicense and getMinimum methods: @NAME(projects) SELECT * FROM projects WHERE license = :${body.license} AND id > :${body.minimum} ORDER BY id 96.5.1. Using expression parameters in consumers Available as of Camel 2.23 When using the ElSql component as consumer, you can now also use expression parameters (simple language) to build dynamic query parameters, such as calling a method on a bean to retrieve an id, a date, or some other value. For example in the sample below we call the nextId method on the bean myIdGenerator: @NAME(projectsByIdBean) SELECT * FROM projects WHERE id = :${bean#myIdGenerator.nextId} Important Notice in the bean syntax above, we must use # instead of : in the simple expression. This is because the Spring query parameter parser is in use, which separates parameters on the colon. Also note that the Spring query parser will invoke the bean twice for each query. And the bean has the following method: public static class MyIdGenerator { private int id = 1; public int nextId() { // spring will call this twice, one for initializing query and 2nd for actual value id++; return id / 2; } }
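As a rough illustration of combining the outputType and outputHeader options described earlier, the following Java DSL route stores a single query result in a message header instead of the message body. This is a sketch only: the timer endpoint, the projectCount block name, the resource path, and the ProjectCount header name are assumptions, and a DataSource must be configured on the component or available in the registry. The projectCount block itself would simply be a query such as SELECT COUNT(*) FROM projects defined under @NAME(projectCount) in the elsql file.
Example route using outputType and outputHeader (illustrative sketch)
import org.apache.camel.builder.RouteBuilder;

public class ProjectCountRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Illustrative names: the timer, the projectCount elsql block, the resource path,
        // and the ProjectCount header are assumptions, not part of the original samples.
        from("timer:projectCount?period=30000")
            .to("elsql:projectCount:com/foo/orders.elsql?outputType=SelectOne&outputHeader=ProjectCount")
            .log("Number of projects: ${header.ProjectCount}");
    }
}
Because outputHeader is set, the original message body is preserved and only the header carries the query result, which keeps the route body free for further enrichment.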
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-elsql</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"sql:elSqlName:resourceUri[?options]",
"elsql:elsqlName:resourceUri",
"from(\"direct:projects\") .setHeader(\"lic\", constant(\"ASF\")) .setHeader(\"min\", constant(123)) .to(\"elsql:projects:com/foo/orders.elsql\")",
"@NAME(projects) SELECT * FROM projects WHERE license = :lic AND id > :min ORDER BY id",
"from(\"direct:projects\") .to(\"elsql:projects:com/foo/orders.elsql\")",
"@NAME(projects) SELECT * FROM projects WHERE license = :USD{body.license} AND id > :USD{body.minimum} ORDER BY id",
"@NAME(projectsByIdBean) SELECT * FROM projects WHERE id = :USD{bean#myIdGenerator.nextId}",
"public static class MyIdGenerator { private int id = 1; public int nextId() { // spring will call this twice, one for initializing query and 2nd for actual value id++; return id / 2; }"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/elsql-component |
Chapter 5. Sending and receiving messages from a topic | Chapter 5. Sending and receiving messages from a topic Send messages to and receive messages from a Kafka cluster installed on OpenShift. This procedure describes how to use Kafka clients to produce and consume messages. You can deploy clients to OpenShift or connect local Kafka clients to the OpenShift cluster. You can use either or both options to test your Kafka cluster installation. For the local clients, you access the Kafka cluster using an OpenShift route connection. You will use the oc command-line tool to deploy and run the Kafka clients. Prerequisites You have created a Kafka cluster on OpenShift . For a local producer and consumer: You have created a route for external access to the Kafka cluster running in OpenShift . You can access the latest Kafka client binaries from the AMQ Streams software downloads page . Sending and receiving messages from Kafka clients deployed to the OpenShift cluster Deploy producer and consumer clients to the OpenShift cluster. You can then use the clients to send and receive messages from the Kafka cluster in the same namespace. The deployment uses the AMQ Streams container image for running Kafka. Use the oc command-line interface to deploy a Kafka producer. This example deploys a Kafka producer that connects to the Kafka cluster my-cluster . A topic named my-topic is created. Deploying a Kafka producer to OpenShift oc run kafka-producer -ti \ --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 \ --rm=true \ --restart=Never \ -- bin/kafka-console-producer.sh \ --bootstrap-server my-cluster-kafka-bootstrap:9092 \ --topic my-topic Note If the connection fails, check that the Kafka cluster is running and the correct cluster name is specified as the bootstrap-server . From the command prompt, enter a number of messages. Navigate in the OpenShift web console to the Home > Projects page and select the amq-streams-kafka project you created. From the list of pods, click kafka-producer to view the producer pod details. Select the Logs page to check that the messages you entered are present. Use the oc command-line interface to deploy a Kafka consumer. Deploying a Kafka consumer to OpenShift oc run kafka-consumer -ti \ --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 \ --rm=true \ --restart=Never \ -- bin/kafka-console-consumer.sh \ --bootstrap-server my-cluster-kafka-bootstrap:9092 \ --topic my-topic \ --from-beginning The consumer consumed messages produced to my-topic . From the command prompt, confirm that you see the incoming messages in the consumer console. Navigate in the OpenShift web console to the Home > Projects page and select the amq-streams-kafka project you created. From the list of pods, click kafka-consumer to view the consumer pod details. Select the Logs page to check that the messages you consumed are present. Sending and receiving messages from Kafka clients running locally Use a command-line interface to run a Kafka producer and consumer on a local machine. Download and extract the AMQ Streams <version> binaries from the AMQ Streams software downloads page . Unzip the amq-streams- <version> -bin.zip file to any destination. Open a command-line interface, and start the Kafka console producer with the topic my-topic and the authentication properties for TLS. Add the properties that are required for accessing the Kafka broker with an OpenShift route . Use the hostname and port 443 for the OpenShift route you are using.
Use the password and a reference to the truststore you created for the broker certificate. Starting a local Kafka producer kafka-console-producer.sh \ --bootstrap-server my-cluster-kafka-listener1-bootstrap-amq-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 \ --producer-property security.protocol=SSL \ --producer-property ssl.truststore.password=password \ --producer-property ssl.truststore.location=client.truststore.jks \ --topic my-topic Type your message into the command-line interface where the producer is running. Press Enter to send the message. Open a new command-line interface tab or window, and start the Kafka console consumer to receive the messages. Use the same connection details as the producer. Starting a local Kafka consumer kafka-console-consumer.sh \ --bootstrap-server my-cluster-kafka-listener1-bootstrap-amq-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 \ --consumer-property security.protocol=SSL \ --consumer-property ssl.truststore.password=password \ --consumer-property ssl.truststore.location=client.truststore.jks \ --topic my-topic --from-beginning Confirm that you see the incoming messages in the consumer console. Press Ctrl+C to exit the Kafka console producer and consumer.
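You can also test the route connection from a Java client instead of the console tools. The following sketch configures a producer with the same security settings; the route hostname, truststore file, and password are placeholders that must match your own environment.
Example Java producer connecting through the OpenShift route (illustrative sketch)
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class RouteProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Use the route hostname and port 443, as in the console producer example (placeholder value).
        props.put("bootstrap.servers", "<route-hostname>:443");
        props.put("security.protocol", "SSL");
        // Truststore created from the cluster CA certificate for the broker.
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "client.truststore.jks");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "password");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // get() blocks until the broker acknowledges the message.
            producer.send(new ProducerRecord<>("my-topic", "hello from a Java client")).get();
        }
    }
}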
"run kafka-producer -ti --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic",
"run kafka-consumer -ti --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning",
"kafka-console-producer.sh --bootstrap-server my-cluster-kafka-listener1-bootstrap-amq-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 --producer-property security.protocol=SSL --producer-property ssl.truststore.password=password --producer-property ssl.truststore.location=client.truststore.jks --topic my-topic",
"kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-listener1-bootstrap-amq-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 --consumer-property security.protocol=SSL --consumer-property ssl.truststore.password=password --consumer-property ssl.truststore.location=client.truststore.jks --topic my-topic --from-beginning"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/getting_started_with_amq_streams_on_openshift/proc-using-amq-streams-str |
Chapter 72. CouchDB Component | Chapter 72. CouchDB Component Available as of Camel version 2.11 The couchdb: component allows you to treat CouchDB instances as a producer or consumer of messages. Using the lightweight LightCouch API, this camel component has the following features: As a consumer, monitors couch changesets for inserts, updates and deletes and publishes these as messages into camel routes. As a producer, can save, update, from Camel 2.18 delete (by using CouchDbMethod with DELETE value) documents and from Camel 2.22 get document by id (by using CouchDbMethod with GET value) into couch. Can support as many endpoints as required, eg for multiple databases across multiple instances. Ability to have events trigger for only deletes, only inserts/updates or all (default). Headers set for sequenceId, document revision, document id, and HTTP method type. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-couchdb</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 72.1. URI format couchdb:http://hostname[:port]/database?[options] Where hostname is the hostname of the running couchdb instance. Port is optional and if not specified then defaults to 5984. 72.2. Options The CouchDB component has no options. The CouchDB endpoint is configured using URI syntax: with the following path and query parameters: 72.2.1. Path Parameters (4 parameters): Name Description Default Type protocol Required The protocol to use for communicating with the database. String hostname Required Hostname of the running couchdb instance String port Port number for the running couchdb instance 5984 int database Required Name of the database to use String 72.2.2. Query Parameters (12 parameters): Name Description Default Type createDatabase (common) Creates the database if it does not already exist false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean deletes (consumer) Document deletes are published as events true boolean heartbeat (consumer) How often to send an empty message to keep socket alive in millis 30000 long since (consumer) Start tracking changes immediately after the given update sequence. The default, null, will start monitoring from the latest sequence. String style (consumer) Specifies how many revisions are returned in the changes array. The default, main_only, will only return the current winning revision; all_docs will return all leaf revisions (including conflicts and deleted former conflicts.) main_only String updates (consumer) Document inserts/updates are published as events true boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. 
ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean password (security) Password for authenticated databases String username (security) Username in case of authenticated databases String 72.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.couchdb.enabled Enable couchdb component true Boolean camel.component.couchdb.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 72.4. Headers The following headers are set on exchanges during message transport. Property Value CouchDbDatabase the database the message came from CouchDbSeq the couchdb changeset sequence number of the update / delete message CouchDbId the couchdb document id CouchDbRev the couchdb document revision CouchDbMethod the method (delete / update) Headers are set by the consumer once the message is received. The producer will also set the headers for downstream processors once the insert/update has taken place. Any headers set prior to the producer are ignored. That means for example, if you set CouchDbId as a header, it will not be used as the id for insertion, the id of the document will still be used. 72.5. Message Body The component will use the message body as the document to be inserted. If the body is an instance of String, then it will be marshalled into a GSON object before insert. This means that the string must be valid JSON or the insert / update will fail. If the body is an instance of a com.google.gson.JsonElement then it will be inserted as is. Otherwise the producer will throw an exception of unsupported body type. 72.6. Samples For example if you wish to consume all inserts, updates and deletes from a CouchDB instance running locally, on port 9999 then you could use the following: from("couchdb:http://localhost:9999").process(someProcessor); If you were only interested in deletes, then you could use the following from("couchdb:http://localhost:9999?updates=false").process(someProcessor); If you wanted to insert a message as a document, then the body of the exchange is used from("someProducingEndpoint").process(someProcessor).to("couchdb:http://localhost:9999") | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-couchdb</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"couchdb:http://hostname[:port]/database?[options]",
"couchdb:protocol:hostname:port/database",
"from(\"couchdb:http://localhost:9999\").process(someProcessor);",
"from(\"couchdb:http://localhost:9999?updates=false\").process(someProcessor);",
"from(\"someProducingEndpoint\").process(someProcessor).to(\"couchdb:http://localhost:9999\")"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/couchdb-component |
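As a quick way to see the events that a couchdb consumer endpoint will turn into exchanges, the CouchDB changes feed the component monitors can be inspected directly before wiring up a route. The sketch below is illustrative only: the host, port, database name and admin credentials are placeholders, and the heartbeat value simply mirrors the component's 30000 ms default.
# follow the continuous changes feed that the consumer relies on (all names and credentials are placeholders)
curl "http://admin:password@localhost:5984/mydb/_changes?feed=continuous&heartbeat=30000&include_docs=true"
Each line of output corresponds to one insert, update or delete that a consumer endpoint such as couchdb:http://localhost:5984/mydb would publish into the Camel route.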
4.6. VIRTUAL SERVERS | 4.6. VIRTUAL SERVERS The VIRTUAL SERVERS panel displays information for each currently defined virtual server. Each table entry shows the status of the virtual server, the server name, the virtual IP assigned to the server, the netmask of the virtual IP, the port number to which the service communicates, the protocol used, and the virtual device interface. Figure 4.5. The VIRTUAL SERVERS Panel Each server displayed in the VIRTUAL SERVERS panel can be configured on subsequent screens or subsections . To add a service, click the ADD button. To remove a service, select it by clicking the radio button to the virtual server and click the DELETE button. To enable or disable a virtual server in the table click its radio button and click the (DE)ACTIVATE button. After adding a virtual server, you can configure it by clicking the radio button to its left and clicking the EDIT button to display the VIRTUAL SERVER subsection. 4.6.1. The VIRTUAL SERVER Subsection The VIRTUAL SERVER subsection panel shown in Figure 4.6, "The VIRTUAL SERVERS Subsection" allows you to configure an individual virtual server. Links to subsections related specifically to this virtual server are located along the top of the page. But before configuring any of the subsections related to this virtual server, complete this page and click on the ACCEPT button. Figure 4.6. The VIRTUAL SERVERS Subsection Name Enter a descriptive name to identify the virtual server. This name is not the host name for the machine, so make it descriptive and easily identifiable. You can even reference the protocol used by the virtual server, such as HTTP. Application port Enter the port number through which the service application will listen. Since this example is for HTTP services, port 80 is used. Protocol Choose between UDP and TCP in the drop-down menu. Web servers typically communicate by means of the TCP protocol, so this is selected in the example above. Virtual IP Address Enter the virtual server's floating IP address in this text field. Virtual IP Network Mask Set the netmask for this virtual server with the drop-down menu. Firewall Mark Do not enter a firewall mark integer value in this field unless you are bundling multi-port protocols or creating a multi-port virtual server for separate, but related protocols. In this example, the above virtual server has a Firewall Mark of 80 because we are bundling connections to HTTP on port 80 and to HTTPS on port 443 using the firewall mark value of 80. When combined with persistence, this technique will ensure users accessing both insecure and secure webpages are routed to the same real server, preserving state. Warning Entering a firewall mark in this field allows IPVS to recognize that packets bearing this firewall mark are treated the same, but you must perform further configuration outside of the Piranha Configuration Tool to actually assign the firewall marks. See Section 3.4, "Multi-port Services and Load Balancer Add-On" for instructions on creating multi-port services and Section 3.5, "Configuring FTP" for creating a highly available FTP virtual server. Device Enter the name of the network device to which you want the floating IP address defined the Virtual IP Address field to bind. You should alias the public floating IP address to the Ethernet interface connected to the public network. In this example, the public network is on the eth0 interface, so eth0:1 should be entered as the device name. 
Re-entry Time Enter an integer value which defines the length of time, in seconds, before the active LVS router attempts to bring a real server back into the pool after a failure. Service Timeout Enter an integer value which defines the length of time, in seconds, before a real server is considered dead and removed from the pool. Quiesce server When the Quiesce server radio button is selected, a real server weight will be set to 0 when it is unavailable. This effectively disables the real server. If the real server later becomes available, the real servers will be re-enabled by restoring its original weight. If Quiesce server is disabled, the failed real server will be removed from the server table. If and when the unavailable real server becomes available, it will be added back to the virtual server table. Load monitoring tool The LVS router can monitor the load on the various real servers by using either rup or ruptime . If you select rup from the drop-down menu, each real server must run the rstatd service. If you select ruptime , each real server must run the rwhod service. Warning Load monitoring is not the same as load balancing and can result in hard to predict scheduling behavior when combined with weighted scheduling algorithms. Also, if you use load monitoring, the real servers must be Linux machines. Scheduling Select your preferred scheduling algorithm from the drop-down menu. The default is Weighted least-connection . For more information on scheduling algorithms, see Section 1.3.1, "Scheduling Algorithms" . Persistence If an administrator needs persistent connections to the virtual server during client transactions, enter the number of seconds of inactivity allowed to lapse before a connection times out in this text field. Important If you entered a value in the Firewall Mark field above, you should enter a value for persistence as well. Also, be sure that if you use firewall marks and persistence together, that the amount of persistence is the same for each virtual server with the firewall mark. For more on persistence and firewall marks, see Section 1.5, "Persistence and Firewall Marks" . Persistence Network Mask To limit persistence to particular subnet, select the appropriate network mask from the drop-down menu. Note Before the advent of firewall marks, persistence limited by subnet was a crude way of bundling connections. Now, it is best to use persistence in relation to firewall marks to achieve the same result. Warning Remember to click the ACCEPT button after making any changes in this panel. To make sure you do not lose changes when selecting a new panel. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s1-piranha-virtservs-vsa |
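The Firewall Mark field only records the mark value; the mark itself must be assigned on the LVS routers with iptables, as described in Section 3.4. A minimal sketch for the HTTP/HTTPS bundling example above, assuming a hypothetical floating IP address of 10.0.0.10, would be:
# tag both web ports with the same mark so IPVS schedules them as one service
/sbin/iptables -t mangle -A PREROUTING -p tcp -d 10.0.0.10/32 --dport 80 -j MARK --set-mark 80
/sbin/iptables -t mangle -A PREROUTING -p tcp -d 10.0.0.10/32 --dport 443 -j MARK --set-mark 80
Combined with persistence, this keeps a client's port 80 and port 443 connections on the same real server, as the panel description above intends.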
Chapter 2. Node [v1] | Chapter 2. Node [v1] Description Node is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NodeSpec describes the attributes that a node is created with. status object NodeStatus is information about the current status of a node. 2.1.1. .spec Description NodeSpec describes the attributes that a node is created with. Type object Property Type Description configSource object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 externalID string Deprecated. Not all kubelets will set this field. Remove field after 1.13. see: https://issues.k8s.io/61966 podCIDR string PodCIDR represents the pod IP range assigned to the node. podCIDRs array (string) podCIDRs represents the IP ranges assigned to the node for usage by Pods on that node. If this field is specified, the 0th entry must match the podCIDR field. It may contain at most 1 value for each of IPv4 and IPv6. providerID string ID of the node assigned by the cloud provider in the format: <ProviderName>://<ProviderSpecificNodeID> taints array If specified, the node's taints. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. unschedulable boolean Unschedulable controls node schedulability of new pods. By default, node is schedulable. More info: https://kubernetes.io/docs/concepts/nodes/node/#manual-node-administration 2.1.2. .spec.configSource Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.3. .spec.configSource.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. 
This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.4. .spec.taints Description If specified, the node's taints. Type array 2.1.5. .spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required key effect Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Required. The taint key to be applied to a node. timeAdded Time TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 2.1.6. .status Description NodeStatus is information about the current status of a node. Type object Property Type Description addresses array List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https://pr.k8s.io/79391 for an example. Consumers should assume that addresses can change during the lifetime of a Node. However, there are some exceptions where this may not be possible, such as Pods that inherit a Node's address in its own status or consumers of the downward API (status.hostIP). addresses[] object NodeAddress contains information for the node's address. allocatable object (Quantity) Allocatable represents the resources of a node that are available for scheduling. Defaults to Capacity. capacity object (Quantity) Capacity represents the total resources of a node. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity conditions array Conditions is an array of current observed node conditions. More info: https://kubernetes.io/docs/concepts/nodes/node/#condition conditions[] object NodeCondition contains condition information for a node. config object NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource. daemonEndpoints object NodeDaemonEndpoints lists ports opened by daemons running on the Node. images array List of container images on this node images[] object Describe a container image nodeInfo object NodeSystemInfo is a set of ids/uuids to uniquely identify the node. phase string NodePhase is the recently observed lifecycle phase of the node. 
More info: https://kubernetes.io/docs/concepts/nodes/node/#phase The field is never populated, and now is deprecated. Possible enum values: - "Pending" means the node has been created/added by the system, but not configured. - "Running" means the node has been configured and has Kubernetes components running. - "Terminated" means the node has been removed from the cluster. volumesAttached array List of volumes that are attached to the node. volumesAttached[] object AttachedVolume describes a volume attached to a node volumesInUse array (string) List of attachable volumes in use (mounted) by the node. 2.1.7. .status.addresses Description List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https://pr.k8s.io/79391 for an example. Consumers should assume that addresses can change during the lifetime of a Node. However, there are some exceptions where this may not be possible, such as Pods that inherit a Node's address in its own status or consumers of the downward API (status.hostIP). Type array 2.1.8. .status.addresses[] Description NodeAddress contains information for the node's address. Type object Required type address Property Type Description address string The node address. type string Node address type, one of Hostname, ExternalIP or InternalIP. 2.1.9. .status.conditions Description Conditions is an array of current observed node conditions. More info: https://kubernetes.io/docs/concepts/nodes/node/#condition Type array 2.1.10. .status.conditions[] Description NodeCondition contains condition information for a node. Type object Required type status Property Type Description lastHeartbeatTime Time Last time we got an update on a given condition. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of node condition. 2.1.11. .status.config Description NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource. Type object Property Type Description active object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 assigned object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 error string Error describes any problems reconciling the Spec.ConfigSource to the Active config. Errors may occur, for example, attempting to checkpoint Spec.ConfigSource to the local Assigned record, attempting to checkpoint the payload associated with Spec.ConfigSource, attempting to load or validate the Assigned config, etc. Errors may occur at different points while syncing config. Earlier errors (e.g. download or checkpointing errors) will not result in a rollback to LastKnownGood, and may resolve across Kubelet retries. Later errors (e.g. loading or validating a checkpointed config) will result in a rollback to LastKnownGood. 
In the latter case, it is usually possible to resolve the error by fixing the config assigned in Spec.ConfigSource. You can find additional information for debugging by searching the error message in the Kubelet log. Error is a human-readable description of the error state; machines can check whether or not Error is empty, but should not rely on the stability of the Error text across Kubelet versions. lastKnownGood object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 2.1.12. .status.config.active Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.13. .status.config.active.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.14. .status.config.assigned Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.15. .status.config.assigned.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 
uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.16. .status.config.lastKnownGood Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.17. .status.config.lastKnownGood.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.18. .status.daemonEndpoints Description NodeDaemonEndpoints lists ports opened by daemons running on the Node. Type object Property Type Description kubeletEndpoint object DaemonEndpoint contains information about a single Daemon endpoint. 2.1.19. .status.daemonEndpoints.kubeletEndpoint Description DaemonEndpoint contains information about a single Daemon endpoint. Type object Required Port Property Type Description Port integer Port number of the given endpoint. 2.1.20. .status.images Description List of container images on this node Type array 2.1.21. .status.images[] Description Describe a container image Type object Property Type Description names array (string) Names by which this image is known. e.g. ["kubernetes.example/hyperkube:v1.0.7", "cloud-vendor.registry.example/cloud-vendor/hyperkube:v1.0.7"] sizeBytes integer The size of the image in bytes. 2.1.22. .status.nodeInfo Description NodeSystemInfo is a set of ids/uuids to uniquely identify the node. Type object Required machineID systemUUID bootID kernelVersion osImage containerRuntimeVersion kubeletVersion kubeProxyVersion operatingSystem architecture Property Type Description architecture string The Architecture reported by the node bootID string Boot ID reported by the node. containerRuntimeVersion string ContainerRuntime Version reported by the node through runtime remote API (e.g. containerd://1.4.2). kernelVersion string Kernel Version reported by the node from 'uname -r' (e.g. 3.16.0-0.bpo.4-amd64). kubeProxyVersion string KubeProxy Version reported by the node. kubeletVersion string Kubelet Version reported by the node. machineID string MachineID reported by the node. For unique machine identification in the cluster this field is preferred. 
Learn more from man(5) machine-id: http://man7.org/linux/man-pages/man5/machine-id.5.html operatingSystem string The Operating System reported by the node osImage string OS Image reported by the node from /etc/os-release (e.g. Debian GNU/Linux 7 (wheezy)). systemUUID string SystemUUID reported by the node. For unique machine identification MachineID is preferred. This field is specific to Red Hat hosts https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/rhsm/uuid 2.1.23. .status.volumesAttached Description List of volumes that are attached to the node. Type array 2.1.24. .status.volumesAttached[] Description AttachedVolume describes a volume attached to a node Type object Required name devicePath Property Type Description devicePath string DevicePath represents the device path where the volume should be available name string Name of the attached volume 2.2. API endpoints The following API endpoints are available: /api/v1/nodes DELETE : delete collection of Node GET : list or watch objects of kind Node POST : create a Node /api/v1/watch/nodes GET : watch individual changes to a list of Node. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/nodes/{name} DELETE : delete a Node GET : read the specified Node PATCH : partially update the specified Node PUT : replace the specified Node /api/v1/watch/nodes/{name} GET : watch changes to an object of kind Node. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/nodes/{name}/status GET : read status of the specified Node PATCH : partially update status of the specified Node PUT : replace status of the specified Node 2.2.1. /api/v1/nodes HTTP method DELETE Description delete collection of Node Table 2.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Node Table 2.3. HTTP responses HTTP code Reponse body 200 - OK NodeList schema 401 - Unauthorized Empty HTTP method POST Description create a Node Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body Node schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 202 - Accepted Node schema 401 - Unauthorized Empty 2.2.2. /api/v1/watch/nodes HTTP method GET Description watch individual changes to a list of Node. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /api/v1/nodes/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the Node HTTP method DELETE Description delete a Node Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Node Table 2.11. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Node Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Node Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body Node schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty 2.2.4. /api/v1/watch/nodes/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the Node HTTP method GET Description watch changes to an object of kind Node. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.5. /api/v1/nodes/{name}/status Table 2.19. Global path parameters Parameter Type Description name string name of the Node HTTP method GET Description read status of the specified Node Table 2.20. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Node Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Node Table 2.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.24. Body parameters Parameter Type Description body Node schema Table 2.25. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/node_apis/node-v1 |
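The endpoints listed above can be exercised directly with the oc client. As an illustrative sketch (the node name worker-0 is a placeholder), reading the status subresource and cordoning a node by patching spec.unschedulable look like this:
# read /api/v1/nodes/{name}/status through the API server
oc get --raw /api/v1/nodes/worker-0/status
# set spec.unschedulable=true so no new pods are scheduled onto the node
oc patch node worker-0 -p '{"spec":{"unschedulable":true}}'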
probe::nfs.proc.lookup | probe::nfs.proc.lookup Name probe::nfs.proc.lookup - NFS client opens or searches for a file on the server Synopsis nfs.proc.lookup Values bitmask1 V4 bitmask representing the set of attributes supported on this filesystem bitmask0 V4 bitmask representing the set of attributes supported on this filesystem filename the name of the file that the client opens or searches for on the server server_ip the IP address of the server prot the transfer protocol name_len the length of the file name version the NFS version | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-proc-lookup
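A short SystemTap session can show the probe in action. The one-liner below is a sketch that prints a few of the values listed above each time an NFS client performs a lookup; it assumes the systemtap runtime and matching kernel debuginfo packages are installed and that it is run as root:
stap -e 'probe nfs.proc.lookup { printf("%s: NFSv%d lookup of %s (name_len=%d)\n", execname(), version, filename, name_len) }'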
Chapter 12. Management of iSCSI gateway using the Ceph Orchestrator (Limited Availability) | Chapter 12. Management of iSCSI gateway using the Ceph Orchestrator (Limited Availability) As a storage administrator, you can use Ceph Orchestrator to deploy the iSCSI gateway. The iSCSI Gateway presents a Highly Available (HA) iSCSI target that exports RADOS Block Device (RBD) images as SCSI disks. You can deploy an iSCSI gateway by either using the placement specification or the service specification, like an YAML file. Note This technology is Limited Availability. See the Deprecated functionality chapter for additional information. This section covers the following administrative tasks: Deploying the iSCSI gateway using the placement specification . Deploying the iSCSI gateway using the service specification . Removing the iSCSI gateway using the Ceph Orchestrator . 12.1. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. All manager, monitor and OSD daemons are deployed. 12.2. Deploying the iSCSI gateway using the command line interface Using the Ceph Orchestrator, you can deploy the iSCSI gateway using the ceph orch command in the command line interface. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager, monitor and OSD daemons are deployed. Procedure Log into the Cephadm shell: Example Create the pool: Syntax Example Deploy iSCSI gateway using command line interface: Syntax Example Verification List the service: Example List the hosts and process: Syntax Example 12.3. Deploying the iSCSI gateway using the service specification Using the Ceph Orchestrator, you can deploy the iSCSI gateway using the service specification. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager, monitor and OSD daemons are deployed. Procedure Create the iscsi.yml file: Example Edit the iscsi.yml file to include the following details: Syntax Example Mount the YAML file under a directory in the container: Example Navigate to the following directory: Syntax Example Deploy iSCSI gateway using service specification: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 12.4. Removing the iSCSI gateway using the Ceph Orchestrator You can remove the iSCSI gateway daemons using the ceph orch rm command. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. At least one iSCSI gateway daemon deployed on the hosts. Procedure Log into the Cephadm shell: Example List the service: Example Remove the service Syntax Example Verification List the hosts, daemons, and processes: Syntax Example Additional Resources See Deploying the iSCSI gateway using the command line interface section in the Red Hat Ceph Storage Operations Guide for more information. See Deploying the iSCSI gateway using the service specification section in the Red Hat Ceph Storage Operations Guide for more information. | [
"cephadm shell",
"ceph osd pool create POOL_NAME",
"ceph osd pool create mypool",
"ceph orch apply iscsi POOLNAME admin admin --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply iscsi mypool admin admin --placement=\"1 host01\"",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=iscsi",
"touch iscsi.yml",
"service_type: iscsi service_id: iscsi placement: hosts: - HOST_NAME_1 - HOST_NAME_2 spec: pool: POOL_NAME # RADOS pool where ceph-iscsi config data is stored. trusted_ip_list: \" IP_ADDRESS_1 , IP_ADDRESS_2 \" # optional api_port: ... # optional api_user: API_USERNAME # optional api_password: API_PASSWORD # optional api_secure: true/false # optional ssl_cert: | # optional ssl_key: | # optional",
"service_type: iscsi service_id: iscsi placement: hosts: - host01 spec: pool: mypool",
"cephadm shell --mount iscsi.yaml:/var/lib/ceph/iscsi.yaml",
"cd /var/lib/ceph/ DAEMON_PATH /",
"cd /var/lib/ceph/",
"ceph orch apply -i FILE_NAME .yml",
"ceph orch apply -i iscsi.yml",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=iscsi",
"cephadm shell",
"ceph orch ls",
"ceph orch rm SERVICE_NAME",
"ceph orch rm iscsi.iscsi",
"ceph orch ps",
"ceph orch ps"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/operations_guide/management-of-iscsi-gateway-using-the-ceph-orchestrator |
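The pool created here stores the ceph-iscsi configuration data; the RBD images exported as LUNs can live in the same pool or in a separate one. Either way, tagging a newly created pool with an application avoids a POOL_APP_NOT_ENABLED health warning. The following is a sketch using the mypool example from above, not a step required by this procedure:
# associate the pool with the rbd application (run inside the cephadm shell)
ceph osd pool application enable mypool rbd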
Chapter 2. Getting started | Chapter 2. Getting started 2.1. AMQ Streams distribution AMQ Streams is distributed as single ZIP file. This ZIP file contains AMQ Streams components: Apache ZooKeeper Apache Kafka Apache Kafka Connect Apache Kafka MirrorMaker Kafka Bridge Kafka Exporter 2.2. Downloading an AMQ Streams Archive An archived distribution of AMQ Streams is available for download from the Red Hat website. You can download a copy of the distribution by following the steps below. Procedure Download the latest version of the Red Hat AMQ Streams archive from the Customer Portal . 2.3. Installing AMQ Streams Follow this procedure to install the latest version of AMQ Streams on Red Hat Enterprise Linux. For instructions on upgrading an existing cluster to AMQ Streams 1.8, see AMQ Streams and Kafka upgrades . Prerequisites Download the installation archive . Review the Section 1.3, "Supported Configurations" Procedure Add new kafka user and group. sudo groupadd kafka sudo useradd -g kafka kafka sudo passwd kafka Create directory /opt/kafka . sudo mkdir /opt/kafka Create a temporary directory and extract the contents of the AMQ Streams ZIP file. mkdir /tmp/kafka unzip amq-streams_y.y-x.x.x.zip -d /tmp/kafka Move the extracted contents into /opt/kafka directory and delete the temporary directory. sudo mv /tmp/kafka/ kafka_y.y-x.x.x /* /opt/kafka/ rm -r /tmp/kafka Change the ownership of the /opt/kafka directory to the kafka user. sudo chown -R kafka:kafka /opt/kafka Create directory /var/lib/zookeeper for storing ZooKeeper data and set its ownership to the kafka user. sudo mkdir /var/lib/zookeeper sudo chown -R kafka:kafka /var/lib/zookeeper Create directory /var/lib/kafka for storing Kafka data and set its ownership to the kafka user. sudo mkdir /var/lib/kafka sudo chown -R kafka:kafka /var/lib/kafka 2.4. Data storage considerations An efficient data storage infrastructure is essential to the optimal performance of AMQ Streams. AMQ Streams requires block storage and works well with cloud-based block storage solutions, such as Amazon Elastic Block Store (EBS). The use of file storage is not recommended. Choose local storage when possible. If local storage is not available, you can use a Storage Area Network (SAN) accessed by a protocol such as Fibre Channel or iSCSI. 2.4.1. Apache Kafka and ZooKeeper storage support Use separate disks for Apache Kafka and ZooKeeper. Kafka supports JBOD (Just a Bunch of Disks) storage, a data storage configuration of multiple disks or volumes. JBOD provides increased data storage for Kafka brokers. It can also improve performance. Solid-state drives (SSDs), though not essential, can improve the performance of Kafka in large clusters where data is sent to and received from multiple topics asynchronously. SSDs are particularly effective with ZooKeeper, which requires fast, low latency data access. Note You do not need to provision replicated storage because Kafka and ZooKeeper both have built-in data replication. 2.4.2. File systems It is recommended that you configure your storage system to use the XFS file system. AMQ Streams is also compatible with the ext4 file system, but this might require additional configuration for best results. Additional resources For more information about XFS, see The XFS File System . 2.5. Running a single node AMQ Streams cluster This procedure shows how to run a basic AMQ Streams cluster consisting of a single Apache ZooKeeper node and a single Apache Kafka node, both running on the same host. 
The default configuration files are used for ZooKeeper and Kafka. Warning A single node AMQ Streams cluster does not provide reliability and high availability and is suitable only for development purposes. Prerequisites AMQ Streams is installed on the host Running the cluster Edit the ZooKeeper configuration file /opt/kafka/config/zookeeper.properties . Set the dataDir option to /var/lib/zookeeper/ : dataDir=/var/lib/zookeeper/ Edit the Kafka configuration file /opt/kafka/config/server.properties . Set the log.dirs option to /var/lib/kafka/ : log.dirs=/var/lib/kafka/ Switch to the kafka user: su - kafka Start ZooKeeper: /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties Check that ZooKeeper is running: jcmd | grep zookeeper Returns: number org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/kafka/config/zookeeper.properties Start Kafka: /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties Check that Kafka is running: jcmd | grep kafka Returns: number kafka.Kafka /opt/kafka/config/server.properties Additional resources For more information about installing AMQ Streams, see Section 2.3, "Installing AMQ Streams" . For more information about configuring AMQ Streams, see Section 2.8, "Configuring AMQ Streams" . 2.6. Using the cluster This procedure describes how to start the Kafka console producer and consumer clients and use them to send and receive several messages. A new topic is automatically created in step one. Topic auto-creation is controlled using the auto.create.topics.enable configuration property (set to true by default). Alternatively, you can configure and create topics before using the cluster. For more information, see Topics . Prerequisites AMQ Streams is installed on the host ZooKeeper and Kafka are running Procedure Start the Kafka console producer and configure it to send messages to a new topic: /opt/kafka/bin/kafka-console-producer.sh --broker-list <bootstrap-address> --topic <topic-name> For example: /opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic Enter several messages into the console. Press Enter to send each individual message to your new topic: >message 1 >message 2 >message 3 >message 4 When Kafka creates a new topic automatically, you might receive a warning that the topic does not exist: WARN Error while fetching metadata with correlation id 39 : {4-3-16-topic1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient) The warning should not reappear after you send further messages. In a new terminal window, start the Kafka console consumer and configure it to read messages from the beginning of your new topic. /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server <bootstrap-address> --topic <topic-name> --from-beginning For example: /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning The incoming messages display in the consumer console. Switch to the producer console and send additional messages. Check that they display in the consumer console. Stop the Kafka console producer and then the consumer by pressing Ctrl+C . 2.7. Stopping the AMQ Streams services You can stop the Kafka and ZooKeeper services by running a script. All connections to the Kafka and ZooKeeper services will be terminated. Prerequisites AMQ Streams is installed on the host ZooKeeper and Kafka are up and running Procedure Stop the Kafka broker. 
su - kafka /opt/kafka/bin/kafka-server-stop.sh Confirm that the Kafka broker is stopped. jcmd | grep kafka Stop ZooKeeper. su - kafka /opt/kafka/bin/zookeeper-server-stop.sh 2.8. Configuring AMQ Streams Prerequisites AMQ Streams is downloaded and installed on the host Procedure Open ZooKeeper and Kafka broker configuration files in a text editor. The configuration files are located at : ZooKeeper /opt/kafka/config/zookeeper.properties Kafka /opt/kafka/config/server.properties Edit the configuration options. The configuration files are in the Java properties format. Every configuration option should be on separate line in the following format: Lines starting with # or ! will be treated as comments and will be ignored by AMQ Streams components. Values can be split into multiple lines by using \ directly before the newline / carriage return. Save the changes Restart the ZooKeeper or Kafka broker Repeat this procedure on all the nodes of the cluster. | [
"sudo groupadd kafka sudo useradd -g kafka kafka sudo passwd kafka",
"sudo mkdir /opt/kafka",
"mkdir /tmp/kafka unzip amq-streams_y.y-x.x.x.zip -d /tmp/kafka",
"sudo mv /tmp/kafka/ kafka_y.y-x.x.x /* /opt/kafka/ rm -r /tmp/kafka",
"sudo chown -R kafka:kafka /opt/kafka",
"sudo mkdir /var/lib/zookeeper sudo chown -R kafka:kafka /var/lib/zookeeper",
"sudo mkdir /var/lib/kafka sudo chown -R kafka:kafka /var/lib/kafka",
"dataDir=/var/lib/zookeeper/",
"log.dirs=/var/lib/kafka/",
"su - kafka",
"/opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties",
"jcmd | grep zookeeper",
"number org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/kafka/config/zookeeper.properties",
"/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties",
"jcmd | grep kafka",
"number kafka.Kafka /opt/kafka/config/server.properties",
"/opt/kafka/bin/kafka-console-producer.sh --broker-list <bootstrap-address> --topic <topic-name>",
"/opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic",
">message 1 >message 2 >message 3 >message 4",
"WARN Error while fetching metadata with correlation id 39 : {4-3-16-topic1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)",
"/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server <bootstrap-address> --topic <topic-name> --from-beginning",
"/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning",
"su - kafka /opt/kafka/bin/kafka-server-stop.sh",
"jcmd | grep kafka",
"su - kafka /opt/kafka/bin/zookeeper-server-stop.sh",
"<option> = <value>",
"This is a comment",
"sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"bob\" password=\"bobs-password\";"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_rhel/assembly-getting-started-str |
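The walkthrough above relies on automatic topic creation (auto.create.topics.enable set to true). To create the topic explicitly before producing, as the text notes you can, a command along the following lines works against the single-node cluster; the topic name, partition count and replication factor are illustrative:
# replication factor cannot exceed the number of brokers, so use 1 on a single-node cluster
/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 --replication-factor 1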
4.2.5. Removing Physical Volumes | 4.2.5. Removing Physical Volumes If a device is no longer required for use by LVM, you can remove the LVM label with the pvremove command. Executing the pvremove command zeroes the LVM metadata on an empty physical volume. If the physical volume you want to remove is currently part of a volume group, you must remove it from the volume group with the vgreduce command, as described in Section 4.3.5, "Removing Physical Volumes from a Volume Group" . | [
"pvremove /dev/ram15 Labels on physical volume \"/dev/ram15\" successfully wiped"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/pv_remove |
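Putting the two commands together, a typical removal sequence for a device that is still part of a volume group looks like the sketch below; the volume group and device names are hypothetical:
pvmove /dev/sdb1 # migrate any allocated extents off the device, if it is not already empty
vgreduce myvg /dev/sdb1 # remove the now-unused physical volume from the volume group
pvremove /dev/sdb1 # wipe the LVM label from the device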
4.5. Logging | 4.5. Logging All message output passes through a logging module with independent choices of logging levels for each of the following targets: standard output/error, syslog, a log file, and an external log function. The logging levels are set in the /etc/lvm/lvm.conf file, which is described in Appendix B, The LVM Configuration Files . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/logging
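The relevant settings live in the log { } section of /etc/lvm/lvm.conf, for example the level, file, syslog, and verbose keys. On LVM2 releases that provide the dumpconfig built-in, the values currently in effect can be printed without opening the file, as sketched below:
# show the log section of the active LVM configuration
lvm dumpconfig log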
Chapter 5. Remote health monitoring | Chapter 5. Remote health monitoring OpenShift Data Foundation collects anonymized aggregated information about the health, usage, and size of clusters and reports it to Red Hat via an integrated component called Telemetry. This information allows Red Hat to improve OpenShift Data Foundation and to react to issues that impact customers more quickly. A cluster that reports data to Red Hat via Telemetry is considered a connected cluster . 5.1. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. These metrics are sent continuously and describe: The size of an OpenShift Data Foundation cluster The health and status of OpenShift Data Foundation components The health and status of any upgrade being performed Limited usage information about OpenShift Data Foundation components and features Summary info about alerts reported by the cluster monitoring component This continuous stream of data is used by Red Hat to monitor the health of clusters in real time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Data Foundation upgrades to customers so as to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and engineering teams with the same restrictions as accessing data reported via support cases. All connected cluster information is used by Red Hat to help make OpenShift Data Foundation better and more intuitive to use. None of the information is shared with third parties. 5.2. Information collected by Telemetry Primary information collected by Telemetry includes: The size of the Ceph cluster in bytes : "ceph_cluster_total_bytes" , The amount of the Ceph cluster storage used in bytes : "ceph_cluster_total_used_raw_bytes" , Ceph cluster health status : "ceph_health_status" , The total count of object storage devices (OSDs) : "job:ceph_osd_metadata:count" , The total number of OpenShift Data Foundation Persistent Volumes (PVs) present in the Red Hat OpenShift Container Platform cluster : "job:kube_pv:count" , The total input/output operations per second (IOPS) (reads+writes) value for all the pools in the Ceph cluster : "job:ceph_pools_iops:total" , The total IOPS (reads+writes) value in bytes for all the pools in the Ceph cluster : "job:ceph_pools_iops_bytes:total" , The total count of the Ceph cluster versions running : "job:ceph_versions_running:count" The total number of unhealthy NooBaa buckets : "job:noobaa_total_unhealthy_buckets:sum" , The total number of NooBaa buckets : "job:noobaa_bucket_count:sum" , The total number of NooBaa objects : "job:noobaa_total_object_count:sum" , The count of NooBaa accounts : "noobaa_accounts_num" , The total usage of storage by NooBaa in bytes : "noobaa_total_usage" , The total amount of storage requested by the persistent volume claims (PVCs) from a particular storage provisioner in bytes: "cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" , The total amount of storage used by the PVCs from a particular storage provisioner in bytes: "cluster:kubelet_volume_stats_used_bytes:provisioner:sum" . Telemetry does not collect identifying information such as user names, passwords, or the names or addresses of user resources. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/monitoring_openshift_data_foundation/remote_health_monitoring |
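The metrics that Telemetry forwards are the same ones exposed by the cluster monitoring stack, so an administrator can inspect them locally. The sketch below queries the raw utilization ratio through the thanos-querier route; the route lookup, token handling and curl invocation follow the usual OpenShift monitoring conventions and are assumptions here rather than part of this chapter:
TOKEN=$(oc whoami -t)
HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')
# ratio of used to total raw capacity, built from two of the metrics listed above
curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query" --data-urlencode 'query=ceph_cluster_total_used_raw_bytes / ceph_cluster_total_bytes'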
Chapter 4. Configuring user workload monitoring | Chapter 4. Configuring user workload monitoring 4.1. Preparing to configure the user workload monitoring stack This section explains which user-defined monitoring components can be configured, how to enable user workload monitoring, and how to prepare for configuring the user workload monitoring stack. Important Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in the Config map reference for the Cluster Monitoring Operator are supported for configuration. The monitoring stack imposes additional resource requirements. Consult the computing resources recommendations in Scaling the Cluster Monitoring Operator and verify that you have sufficient resources. 4.1.1. Configurable monitoring components This table shows the monitoring components you can configure and the keys used to specify the components in the user-workload-monitoring-config config map. Table 4.1. Configurable monitoring components for user-defined projects Component user-workload-monitoring-config config map key Prometheus Operator prometheusOperator Prometheus prometheus Alertmanager alertmanager Thanos Ruler thanosRuler Warning Different configuration changes to the ConfigMap object result in different outcomes: The pods are not redeployed. Therefore, there is no service outage. The affected pods are redeployed: For single-node clusters, this results in temporary service outage. For multi-node clusters, because of high-availability, the affected pods are gradually rolled out and the monitoring stack remains available. Configuring and resizing a persistent volume always results in a service outage, regardless of high availability. Each procedure that requires a change in the config map includes its expected outcome. 4.1.2. Enabling monitoring for user-defined projects In OpenShift Container Platform, you can enable monitoring for user-defined projects in addition to the default platform monitoring. You can monitor your own projects in OpenShift Container Platform without the need for an additional monitoring solution. Using this feature centralizes monitoring for core platform components and user-defined projects. Note Versions of Prometheus Operator installed using Operator Lifecycle Manager (OLM) are not compatible with user-defined monitoring. Therefore, custom Prometheus instances installed as a Prometheus custom resource (CR) managed by the OLM Prometheus Operator are not supported in OpenShift Container Platform. 4.1.2.1. Enabling monitoring for user-defined projects Cluster administrators can enable monitoring for user-defined projects by setting the enableUserWorkload: true field in the cluster monitoring ConfigMap object. Important You must remove any custom Prometheus instances before enabling monitoring for user-defined projects. Note You must have access to the cluster as a user with the cluster-admin cluster role to enable monitoring for user-defined projects in OpenShift Container Platform. Cluster administrators can then optionally grant users permission to configure the components that are responsible for monitoring user-defined projects. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). You have created the cluster-monitoring-config ConfigMap object. You have optionally created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. 
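For orientation, a bare skeleton of that config map, using only the component keys from Table 4.1, could look like the following sketch. The empty bodies are placeholders for the settings described in the rest of this chapter, not required values.

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    # Per-component settings (resources, nodeSelector, tolerations, retention, ...)
    # are added under these keys as shown in the procedures that follow.
    prometheusOperator: {}
    prometheus: {}
    alertmanager: {}
    thanosRuler: {}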
You can add configuration options to this ConfigMap object for the components that monitor user-defined projects. Note Every time you save configuration changes to the user-workload-monitoring-config ConfigMap object, the pods in the openshift-user-workload-monitoring project are redeployed. It might sometimes take a while for these components to redeploy. Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add enableUserWorkload: true under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1 1 When set to true , the enableUserWorkload parameter enables monitoring for user-defined projects in a cluster. Save the file to apply the changes. Monitoring for user-defined projects is then enabled automatically. Note If you enable monitoring for user-defined projects, the user-workload-monitoring-config ConfigMap object is created by default. Verify that the prometheus-operator , prometheus-user-workload , and thanos-ruler-user-workload pods are running in the openshift-user-workload-monitoring project. It might take a short while for the pods to start: USD oc -n openshift-user-workload-monitoring get pod Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h Additional resources User workload monitoring first steps 4.1.2.2. Granting users permission to configure monitoring for user-defined projects As a cluster administrator, you can assign the user-workload-monitoring-config-edit role to a user. This grants permission to configure and manage monitoring for user-defined projects without giving them permission to configure and manage core OpenShift Container Platform monitoring components. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure Assign the user-workload-monitoring-config-edit role to a user in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring adm policy add-role-to-user \ user-workload-monitoring-config-edit <user> \ --role-namespace openshift-user-workload-monitoring Verify that the user is correctly assigned to the user-workload-monitoring-config-edit role by displaying the related role binding: USD oc describe rolebinding <role_binding_name> -n openshift-user-workload-monitoring Example command USD oc describe rolebinding user-workload-monitoring-config-edit -n openshift-user-workload-monitoring Example output Name: user-workload-monitoring-config-edit Labels: <none> Annotations: <none> Role: Kind: Role Name: user-workload-monitoring-config-edit Subjects: Kind Name Namespace ---- ---- --------- User user1 1 1 In this example, user1 is assigned to the user-workload-monitoring-config-edit role. 4.1.3. Enabling alert routing for user-defined projects In OpenShift Container Platform, an administrator can enable alert routing for user-defined projects. This process consists of the following steps: Enable alert routing for user-defined projects: Use the default platform Alertmanager instance. Use a separate Alertmanager instance only for user-defined projects. 
Grant users permission to configure alert routing for user-defined projects. After you complete these steps, developers and other users can configure custom alerts and alert routing for their user-defined projects. Additional resources Understanding alert routing for user-defined projects 4.1.3.1. Enabling the platform Alertmanager instance for user-defined alert routing You can allow users to create user-defined alert routing configurations that use the main platform instance of Alertmanager. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add enableUserAlertmanagerConfig: true in the alertmanagerMain section under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # ... alertmanagerMain: enableUserAlertmanagerConfig: true 1 # ... 1 Set the enableUserAlertmanagerConfig value to true to allow users to create user-defined alert routing configurations that use the main platform instance of Alertmanager. Save the file to apply the changes. The new configuration is applied automatically. 4.1.3.2. Enabling a separate Alertmanager instance for user-defined alert routing In some clusters, you might want to deploy a dedicated Alertmanager instance for user-defined projects, which can help reduce the load on the default platform Alertmanager instance and can better separate user-defined alerts from default platform alerts. In these cases, you can optionally enable a separate instance of Alertmanager to send alerts for user-defined projects only. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config ConfigMap object: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add enabled: true and enableAlertmanagerConfig: true in the alertmanager section under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true 1 enableAlertmanagerConfig: true 2 1 Set the enabled value to true to enable a dedicated instance of the Alertmanager for user-defined projects in a cluster. Set the value to false or omit the key entirely to disable the Alertmanager for user-defined projects. If you set this value to false or if the key is omitted, user-defined alerts are routed to the default platform Alertmanager instance. 2 Set the enableAlertmanagerConfig value to true to enable users to define their own alert routing configurations with AlertmanagerConfig objects. Save the file to apply the changes. The dedicated instance of Alertmanager for user-defined projects starts automatically. Verification Verify that the user-workload Alertmanager instance has started: # oc -n openshift-user-workload-monitoring get alertmanager Example output NAME VERSION REPLICAS AGE user-workload 0.24.0 2 100s 4.1.3.3. Granting users permission to configure alert routing for user-defined projects You can grant users permission to configure alert routing for user-defined projects. 
Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have enabled monitoring for user-defined projects. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure Assign the alert-routing-edit cluster role to a user in the user-defined project: USD oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user> 1 1 For <namespace> , substitute the namespace for the user-defined project, such as ns1 . For <user> , substitute the username for the account to which you want to assign the role. Additional resources Configuring alert notifications 4.1.4. Granting users permissions for monitoring for user-defined projects As a cluster administrator, you can monitor all core OpenShift Container Platform and user-defined projects. You can also grant developers and other users different permissions: Monitoring user-defined projects Configuring the components that monitor user-defined projects Configuring alert routing for user-defined projects Managing alerts and silences for user-defined projects You can grant the permissions by assigning one of the following monitoring roles or cluster roles: Table 4.2. Monitoring roles Role name Description Project user-workload-monitoring-config-edit Users with this role can edit the user-workload-monitoring-config ConfigMap object to configure Prometheus, Prometheus Operator, Alertmanager, and Thanos Ruler for user-defined workload monitoring. openshift-user-workload-monitoring monitoring-alertmanager-api-reader Users with this role have read access to the user-defined Alertmanager API for all projects, if the user-defined Alertmanager is enabled. openshift-user-workload-monitoring monitoring-alertmanager-api-writer Users with this role have read and write access to the user-defined Alertmanager API for all projects, if the user-defined Alertmanager is enabled. openshift-user-workload-monitoring Table 4.3. Monitoring cluster roles Cluster role name Description Project monitoring-rules-view Users with this cluster role have read access to PrometheusRule custom resources (CRs) for user-defined projects. They can also view the alerts and silences in the Developer perspective of the OpenShift Container Platform web console. Can be bound with RoleBinding to any user project. monitoring-rules-edit Users with this cluster role can create, modify, and delete PrometheusRule CRs for user-defined projects. They can also manage alerts and silences in the Developer perspective of the OpenShift Container Platform web console. Can be bound with RoleBinding to any user project. monitoring-edit Users with this cluster role have the same privileges as users with the monitoring-rules-edit cluster role. Additionally, users can create, read, modify, and delete ServiceMonitor and PodMonitor resources to scrape metrics from services and pods. Can be bound with RoleBinding to any user project. alert-routing-edit Users with this cluster role can create, update, and delete AlertmanagerConfig CRs for user-defined projects. Can be bound with RoleBinding to any user project. Additional resources CMO services resources Granting users permission to configure monitoring for user-defined projects Granting users permission to configure alert routing for user-defined projects 4.1.4.1. Granting user permissions by using the web console You can grant users permissions for the openshift-monitoring project or their own projects, by using the OpenShift Container Platform web console. 
Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. Procedure In the Administrator perspective of the OpenShift Container Platform web console, go to User Management RoleBindings Create binding . In the Binding Type section, select the Namespace Role Binding type. In the Name field, enter a name for the role binding. In the Namespace field, select the project where you want to grant the access. Important The monitoring role or cluster role permissions that you grant to a user by using this procedure apply only to the project that you select in the Namespace field. Select a monitoring role or cluster role from the Role Name list. In the Subject section, select User . In the Subject Name field, enter the name of the user. Select Create to apply the role binding. 4.1.4.2. Granting user permissions by using the CLI You can grant users permissions for the openshift-monitoring project or their own projects, by using the OpenShift CLI ( oc ). Important Whichever role or cluster role you choose, you must bind it against a specific project as a cluster administrator. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure To assign a monitoring role to a user for a project, enter the following command: USD oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1 1 Substitute <role> with the wanted monitoring role, <user> with the user to whom you want to assign the role, and <namespace> with the project where you want to grant the access. To assign a monitoring cluster role to a user for a project, enter the following command: USD oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1 1 Substitute <cluster-role> with the wanted monitoring cluster role, <user> with the user to whom you want to assign the cluster role, and <namespace> with the project where you want to grant the access. 4.1.5. Excluding a user-defined project from monitoring Individual user-defined projects can be excluded from user workload monitoring. To do so, add the openshift.io/user-monitoring label to the project's namespace with a value of false . Procedure Add the label to the project namespace: USD oc label namespace my-project 'openshift.io/user-monitoring=false' To re-enable monitoring, remove the label from the namespace: USD oc label namespace my-project 'openshift.io/user-monitoring-' Note If there were any active monitoring targets for the project, it may take a few minutes for Prometheus to stop scraping them after adding the label. 4.1.6. Disabling monitoring for user-defined projects After enabling monitoring for user-defined projects, you can disable it again by setting enableUserWorkload: false in the cluster monitoring ConfigMap object. Note Alternatively, you can remove enableUserWorkload: true to disable monitoring for user-defined projects. Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Set enableUserWorkload: to false under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false Save the file to apply the changes. Monitoring for user-defined projects is then disabled automatically. 
Check that the prometheus-operator , prometheus-user-workload and thanos-ruler-user-workload pods are terminated in the openshift-user-workload-monitoring project. This might take a short while: USD oc -n openshift-user-workload-monitoring get pod Example output No resources found in openshift-user-workload-monitoring project. Note The user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project is not automatically deleted when monitoring for user-defined projects is disabled. This is to preserve any custom configurations that you may have created in the ConfigMap object. 4.2. Configuring performance and scalability for user workload monitoring You can configure the monitoring stack to optimize the performance and scale of your clusters. The following documentation provides information about how to distribute the monitoring components and control the impact of the monitoring stack on CPU and memory resources. 4.2.1. Controlling the placement and distribution of monitoring components You can move the monitoring stack components to specific nodes: Use the nodeSelector constraint with labeled nodes to move any of the monitoring stack components to specific nodes. Assign tolerations to enable moving components to tainted nodes. By doing so, you control the placement and distribution of the monitoring components across a cluster. By controlling placement and distribution of monitoring components, you can optimize system resource use, improve performance, and separate workloads based on specific requirements or policies. Additional resources Using node selectors to move monitoring components 4.2.1.1. Moving monitoring components to different nodes You can move any of the components that monitor workloads for user-defined projects to specific worker nodes. Warning It is not permitted to move components to control plane or infrastructure nodes. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure If you have not done so yet, add a label to the nodes on which you want to run the monitoring components: USD oc label nodes <node_name> <node_label> 1 1 Replace <node_name> with the name of the node where you want to add the label. Replace <node_label> with the name of the wanted label. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Specify the node labels for the nodeSelector constraint for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | # ... <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 # ... 1 Substitute <component> with the appropriate monitoring stack component name. 2 Substitute <node_label_1> with the label you added to the node. 3 Optional: Specify additional labels. If you specify additional labels, the pods for the component are only scheduled on the nodes that contain all of the specified labels. Note If monitoring components remain in a Pending state after configuring the nodeSelector constraint, check the pod events for errors relating to taints and tolerations. 
Save the file to apply the changes. The components specified in the new configuration are automatically moved to the new nodes, and the pods affected by the new configuration are redeployed. Additional resources Enabling monitoring for user-defined projects Understanding how to update labels on nodes Placing pods on specific nodes using node selectors nodeSelector (Kubernetes documentation) 4.2.1.2. Assigning tolerations to monitoring components You can assign tolerations to the components that monitor user-defined projects, to enable moving them to tainted worker nodes. Scheduling is not permitted on control plane or infrastructure nodes. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Specify tolerations for the component: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification> Substitute <component> and <toleration_specification> accordingly. For example, oc adm taint nodes node1 key1=value1:NoSchedule adds a taint to node1 with the key key1 and the value value1 . This prevents monitoring components from deploying pods on node1 unless a toleration is configured for that taint. The following example configures the thanosRuler component to tolerate the example taint: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Enabling monitoring for user-defined projects Controlling pod placement using node taints Taints and Tolerations (Kubernetes documentation) 4.2.2. Managing CPU and memory resources for monitoring components You can ensure that the containers that run monitoring components have enough CPU and memory resources by specifying values for resource limits and requests for those components. You can configure these limits and requests for monitoring components that monitor user-defined projects in the openshift-user-workload-monitoring namespace. 4.2.2.1. Specifying limits and requests To configure CPU and memory resources, specify values for resource limits and requests in the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add values to define resource limits and requests for each component you want to configure. 
Important Ensure that the value set for a limit is always higher than the value set for a request. Otherwise, an error will occur, and the container will not run. Example of setting resource limits and requests apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheus: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosRuler: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources About specifying limits and requests for monitoring components Kubernetes requests and limits documentation (Kubernetes documentation) 4.2.3. Controlling the impact of unbound metrics attributes in user-defined projects Cluster administrators can use the following measures to control the impact of unbound metrics attributes in user-defined projects: Limit the number of samples that can be accepted per target scrape in user-defined projects Limit the number of scraped labels, the length of label names, and the length of label values Create alerts that fire when a scrape sample threshold is reached or when the target cannot be scraped Note Limiting scrape samples can help prevent the issues caused by adding many unbound attributes to labels. Developers can also prevent the underlying cause by limiting the number of unbound attributes that they define for metrics. Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. Additional resources Controlling the impact of unbound metrics attributes in user-defined projects Enabling monitoring for user-defined projects Determining why Prometheus is consuming a lot of disk space 4.2.3.1. Setting scrape sample and label limits for user-defined projects You can limit the number of samples that can be accepted per target scrape in user-defined projects. You can also limit the number of scraped labels, the length of label names, and the length of label values. Warning If you set sample or label limits, no further sample data is ingested for that target scrape after the limit is reached. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the enforcedSampleLimit configuration to data/config.yaml to limit the number of samples that can be accepted per target scrape in user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1 1 A value is required if this parameter is specified. This enforcedSampleLimit example limits the number of samples that can be accepted per target scrape in user-defined projects to 50,000. 
Add the enforcedLabelLimit , enforcedLabelNameLengthLimit , and enforcedLabelValueLengthLimit configurations to data/config.yaml to limit the number of scraped labels, the length of label names, and the length of label values in user-defined projects: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedLabelLimit: 500 1 enforcedLabelNameLengthLimit: 50 2 enforcedLabelValueLengthLimit: 600 3 1 Specifies the maximum number of labels per scrape. The default value is 0 , which specifies no limit. 2 Specifies the maximum length in characters of a label name. The default value is 0 , which specifies no limit. 3 Specifies the maximum length in characters of a label value. The default value is 0 , which specifies no limit. Save the file to apply the changes. The limits are applied automatically. 4.2.3.2. Creating scrape sample alerts You can create alerts that notify you when: The target cannot be scraped or is not available for the specified for duration A scrape sample threshold is reached or is exceeded for the specified for duration Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have limited the number of samples that can be accepted per target scrape in user-defined projects, by using enforcedSampleLimit . You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file with alerts that inform you when the targets are down and when the enforced sample limit is approaching. The file in this example is called monitoring-stack-alerts.yaml : apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf "%.4g" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 8 expr: (scrape_samples_post_metric_relabeling / (scrape_sample_limit > 0)) > 0.9 9 for: 10m 10 labels: severity: warning 11 1 Defines the name of the alerting rule. 2 Specifies the user-defined project where the alerting rule is deployed. 3 The TargetDown alert fires if the target cannot be scraped and is not available for the for duration. 4 The message that is displayed when the TargetDown alert fires. 5 The conditions for the TargetDown alert must be true for this duration before the alert is fired. 6 Defines the severity for the TargetDown alert. 7 The ApproachingEnforcedSamplesLimit alert fires when the defined scrape sample threshold is exceeded and lasts for the specified for duration. 8 The message that is displayed when the ApproachingEnforcedSamplesLimit alert fires. 9 The threshold for the ApproachingEnforcedSamplesLimit alert. 
In this example, the alert fires when the number of ingested samples exceeds 90% of the configured limit. 10 The conditions for the ApproachingEnforcedSamplesLimit alert must be true for this duration before the alert is fired. 11 Defines the severity for the ApproachingEnforcedSamplesLimit alert. Apply the configuration to the user-defined project: USD oc apply -f monitoring-stack-alerts.yaml Additionally, you can check if a target has hit the configured limit: In the Administrator perspective of the web console, go to Observe Targets and select an endpoint with a Down status that you want to check. The Scrape failed: sample limit exceeded message is displayed if the endpoint failed because of an exceeded sample limit. 4.2.4. Configuring pod topology spread constraints You can configure pod topology spread constraints for all the pods for user-defined monitoring to control how pod replicas are scheduled to nodes across zones. This ensures that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones. You can configure pod topology spread constraints for monitoring pods by using the user-workload-monitoring-config config map. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the following settings under the data/config.yaml field to configure pod topology spread constraints: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option> 1 Specify a name of the component for which you want to set up pod topology spread constraints. 2 Specify a numeric value for maxSkew , which defines the degree to which pods are allowed to be unevenly distributed. 3 Specify a key of node labels for topologyKey . Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler tries to put a balanced number of pods into each domain. 4 Specify a value for whenUnsatisfiable . Available options are DoNotSchedule and ScheduleAnyway . Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew. 5 Specify labelSelector to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. 
Example configuration for Thanos Ruler apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: ScheduleAnyway labelSelector: matchLabels: app.kubernetes.io/name: thanos-ruler Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources About pod topology spread constraints for monitoring Controlling pod placement by using pod topology spread constraints Pod Topology Spread Constraints (Kubernetes documentation) 4.3. Storing and recording data for user workload monitoring Store and record your metrics and alerting data, configure logs to specify which activities are recorded, control how long Prometheus retains stored data, and set the maximum amount of disk space for the data. These actions help you protect your data and use them for troubleshooting. 4.3.1. Configuring persistent storage Run cluster monitoring with persistent storage to gain the following benefits: Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated. Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted. For production environments, it is highly recommended to configure persistent storage. Important In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability. 4.3.1.1. Persistent storage prerequisites Dedicate sufficient persistent storage to ensure that the disk does not become full. Use Filesystem as the storage type value for the volumeMode parameter when you configure the persistent volume. Important Do not use a raw block volume, which is described with volumeMode: Block in the PersistentVolume resource. Prometheus cannot use raw block volumes. Prometheus does not support file systems that are not POSIX compliant. For example, some NFS file system implementations are not POSIX compliant. If you want to use an NFS file system for storage, verify with the vendor that their NFS implementation is fully POSIX compliant. 4.3.1.2. Configuring a persistent volume claim To use a persistent volume (PV) for monitoring components, you must configure a persistent volume claim (PVC). Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add your PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3 1 Specify the monitoring component for which you want to configure the PVC. 2 Specify an existing storage class. If a storage class is not specified, the default storage class is used. 
3 Specify the amount of required storage. The following example configures a PVC that claims persistent storage for Thanos Ruler: Example PVC configuration apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 10Gi Note Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed and the new storage configuration is applied. Warning When you update the config map with a PVC configuration, the affected StatefulSet object is recreated, resulting in a temporary service outage. Additional resources Understanding persistent storage PersistentVolumeClaims (Kubernetes documentation) 4.3.1.3. Resizing a persistent volume You can resize a persistent volume (PV) for the instances of Prometheus, Thanos Ruler, and Alertmanager. You need to manually expand a persistent volume claim (PVC), and then update the config map in which the component is configured. Important You can only expand the size of the PVC. Shrinking the storage size is not possible. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have configured at least one PVC for components that monitor user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Manually expand a PVC with the updated storage request. For more information, see "Expanding persistent volume claims (PVCs) with a file system" in Expanding persistent volumes . Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a new storage size for the PVC configuration for the component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2 1 The component for which you want to change the storage size. 2 Specify the new size for the storage volume. It must be greater than the value. The following example sets the new PVC request to 20 gigabytes for Thanos Ruler: Example storage configuration for thanosRuler apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: resources: requests: storage: 20Gi Note Storage requirements for the thanosRuler component depend on the number of rules that are evaluated and how many samples each rule generates. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Warning When you update the config map with a new storage size, the affected StatefulSet object is recreated, resulting in a temporary service outage. Additional resources Prometheus database storage requirements Expanding persistent volume claims (PVCs) with a file system 4.3.2. 
Modifying retention time and size for Prometheus metrics data By default, Prometheus retains metrics data for 24 hours for monitoring for user-defined projects. You can modify the retention time for the Prometheus instance to change when the data is deleted. You can also set the maximum amount of disk space the retained metrics data uses. Note Data compaction occurs every two hours. Therefore, a persistent volume (PV) might fill up before compaction, potentially exceeding the retentionSize limit. In such cases, the KubePersistentVolumeFillingUp alert fires until the space on a PV is lower than the retentionSize limit. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the retention time and size configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification> 1 retentionSize: <size_specification> 2 1 The retention time: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s . 2 The retention size: a number directly followed by B (bytes), KB (kilobytes), MB (megabytes), GB (gigabytes), TB (terabytes), PB (petabytes), and EB (exabytes). The following example sets the retention time to 24 hours and the retention size to 10 gigabytes for the Prometheus instance: Example of setting retention time for Prometheus apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h retentionSize: 10GB Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. 4.3.2.1. Modifying the retention time for Thanos Ruler metrics data By default, for user-defined projects, Thanos Ruler automatically retains metrics data for 24 hours. You can modify the retention time to change how long this data is retained by specifying a time value in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). 
Procedure Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the retention time configuration under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1 1 Specify the retention time in the following format: a number directly followed by ms (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), or y (years). You can also combine time values for specific times, such as 1h30m15s . The default is 24h . The following example sets the retention time to 10 days for Thanos Ruler data: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Retention time and size for Prometheus metrics Enabling monitoring for user-defined projects Prometheus database storage requirements Recommended configurable storage technology Understanding persistent storage Optimizing storage 4.3.3. Setting log levels for monitoring components You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, and Thanos Ruler. The following log levels can be applied to the relevant component in the user-workload-monitoring-config ConfigMap object: debug . Log debug, informational, warning, and error messages. info . Log informational, warning, and error messages. warn . Log warning and error messages only. error . Log error messages only. The default log level is info . Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: <log_level> for a component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2 1 The monitoring stack component for which you are setting a log level. Available component values are prometheus , alertmanager , prometheusOperator , and thanosRuler . 2 The log level to set for the component. The available values are error , warn , info , and debug . The default value is info . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Confirm that the log level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level for the prometheus-operator deployment: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Check that the pods for the component are running. 
The following example lists the status of pods: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully. 4.3.4. Enabling the query log file for Prometheus You can configure Prometheus to write all queries that have been run by the engine to a log file. Important Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add the queryLogFile parameter for Prometheus under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1 1 Add the full path to the file in which queries will be logged. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Verify that the pods for the component are running. The following sample command lists the status of pods: USD oc -n openshift-user-workload-monitoring get pods Example output ... prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m ... Read the query log: USD oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path> Important Revert the setting in the config map after you have examined the logged query information. Additional resources Enabling monitoring for user-defined projects 4.4. Configuring metrics for user workload monitoring Configure the collection of metrics to monitor how cluster components and your own workloads are performing. You can send ingested metrics to remote systems for long-term storage and add cluster ID labels to the metrics to identify the data coming from different clusters. Additional resources Understanding metrics 4.4.1. Configuring remote write storage You can configure remote write storage to enable Prometheus to send ingested metrics to remote systems for long-term storage. Doing so has no impact on how or for how long Prometheus stores metrics. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). You have set up a remote write compatible endpoint (such as Thanos) and know the endpoint URL. See the Prometheus remote endpoints and storage documentation for information about endpoints that are compatible with the remote write feature. 
Important Red Hat only provides information for configuring remote write senders and does not offer guidance on configuring receiver endpoints. Customers are responsible for setting up their own endpoints that are remote-write compatible. Issues with endpoint receiver configurations are not included in Red Hat production support. You have set up authentication credentials in a Secret object for the remote write endpoint. You must create the secret in the openshift-user-workload-monitoring namespace. Warning To reduce security risks, use HTTPS and authentication to send metrics to an endpoint. Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a remoteWrite: section under data/config.yaml/prometheus , as shown in the following example: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" 1 <endpoint_authentication_credentials> 2 1 The URL of the remote write endpoint. 2 The authentication method and credentials for the endpoint. Currently supported authentication methods are AWS Signature Version 4, authentication using HTTP in an Authorization request header, Basic authentication, OAuth 2.0, and TLS client. See Supported remote write authentication settings for sample configurations of supported authentication methods. Add write relabel configuration values after the authentication credentials: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1 1 Add configuration for metrics that you want to send to the remote endpoint. Example of forwarding a single metric called my_metric apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep Example of forwarding metrics called my_metric_1 and my_metric_2 in my_namespace namespace apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep Save the file to apply the changes. The new configuration is applied automatically. 4.4.1.1. Supported remote write authentication settings You can use different methods to authenticate with a remote write endpoint. Currently supported authentication methods are AWS Signature Version 4, basic authentication, authorization, OAuth 2.0, and TLS client. The following table provides details about supported authentication methods for use with remote write. Authentication method Config map field Description AWS Signature Version 4 sigv4 This method uses AWS Signature Version 4 authentication to sign requests. You cannot use this method simultaneously with authorization, OAuth 2.0, or Basic authentication. 
Basic authentication basicAuth Basic authentication sets the authorization header on every remote write request with the configured username and password. authorization authorization Authorization sets the Authorization header on every remote write request using the configured token. OAuth 2.0 oauth2 An OAuth 2.0 configuration uses the client credentials grant type. Prometheus fetches an access token from tokenUrl with the specified client ID and client secret to access the remote write endpoint. You cannot use this method simultaneously with authorization, AWS Signature Version 4, or Basic authentication. TLS client tlsConfig A TLS client configuration specifies the CA certificate, the client certificate, and the client key file information used to authenticate with the remote write endpoint server using TLS. The sample configuration assumes that you have already created a CA certificate file, a client certificate file, and a client key file. 4.4.1.2. Example remote write authentication settings The following samples show different authentication settings you can use to connect to a remote write endpoint. Each sample also shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. Each sample configures authentication for use with monitoring for user-defined projects in the openshift-user-workload-monitoring namespace. 4.4.1.2.1. Sample YAML for AWS Signature Version 4 authentication The following shows the settings for a sigv4 secret named sigv4-credentials in the openshift-user-workload-monitoring namespace. apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-user-workload-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque 1 The AWS API access key. 2 The AWS API secret key. The following shows sample AWS Signature Version 4 remote write authentication settings that use a Secret object named sigv4-credentials in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://authorization.example.com/api/write" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7 1 The AWS region. 2 4 The name of the Secret object containing the AWS API access credentials. 3 The key that contains the AWS API access key in the specified Secret object. 5 The key that contains the AWS API secret key in the specified Secret object. 6 The name of the AWS profile that is being used to authenticate. 7 The unique identifier for the Amazon Resource Name (ARN) assigned to your role. 4.4.1.2.2. Sample YAML for Basic authentication The following shows sample Basic authentication settings for a Secret object named rw-basic-auth in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-user-workload-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque 1 The username. 2 The password. The following sample shows a basicAuth remote write configuration that uses a Secret object named rw-basic-auth in the openshift-user-workload-monitoring namespace. It assumes that you have already set up authentication credentials for the endpoint. 
apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://basicauth.example.com/api/write" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4 1 3 The name of the Secret object that contains the authentication credentials. 2 The key that contains the username in the specified Secret object. 4 The key that contains the password in the specified Secret object. 4.4.1.2.3. Sample YAML for authentication with a bearer token using a Secret Object The following shows bearer token settings for a Secret object named rw-bearer-auth in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-user-workload-monitoring stringData: token: <authentication_token> 1 type: Opaque 1 The authentication token. The following shows sample bearer token config map settings that use a Secret object named rw-bearer-auth in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | enableUserWorkload: true prometheus: remoteWrite: - url: "https://authorization.example.com/api/write" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3 1 The authentication type of the request. The default value is Bearer . 2 The name of the Secret object that contains the authentication credentials. 3 The key that contains the authentication token in the specified Secret object. 4.4.1.2.4. Sample YAML for OAuth 2.0 authentication The following shows sample OAuth 2.0 settings for a Secret object named oauth2-credentials in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-user-workload-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque 1 The Oauth 2.0 ID. 2 The OAuth 2.0 secret. The following shows an oauth2 remote write authentication sample configuration that uses a Secret object named oauth2-credentials in the openshift-user-workload-monitoring namespace: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://test.example.com/api/write" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2> 1 3 The name of the corresponding Secret object. Note that ClientId can alternatively refer to a ConfigMap object, although clientSecret must refer to a Secret object. 2 4 The key that contains the OAuth 2.0 credentials in the specified Secret object. 5 The URL used to fetch a token with the specified clientId and clientSecret . 6 The OAuth 2.0 scopes for the authorization request. These scopes limit what data the tokens can access. 7 The OAuth 2.0 authorization request parameters required for the authorization server. 4.4.1.2.5. Sample YAML for TLS client authentication The following shows sample TLS client settings for a tls Secret object named mtls-bundle in the openshift-user-workload-monitoring namespace. 
apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-user-workload-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls 1 The CA certificate in the Prometheus container with which to validate the server certificate. 2 The client certificate for authentication with the server. 3 The client key. The following sample shows a tlsConfig remote write authentication configuration that uses a TLS Secret object named mtls-bundle . apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6 1 3 5 The name of the corresponding Secret object that contains the TLS authentication credentials. Note that ca and cert can alternatively refer to a ConfigMap object, though keySecret must refer to a Secret object. 2 The key in the specified Secret object that contains the CA certificate for the endpoint. 4 The key in the specified Secret object that contains the client certificate for the endpoint. 6 The key in the specified Secret object that contains the client key secret. 4.4.1.3. Example remote write queue configuration You can use the queueConfig object for remote write to tune the remote write queue parameters. The following example shows the queue parameters with their default values for monitoring for user-defined projects in the openshift-user-workload-monitoring namespace. Example configuration of remote write parameters with default values apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8 1 The number of samples to buffer per shard before they are dropped from the queue. 2 The minimum number of shards. 3 The maximum number of shards. 4 The maximum number of samples per send. 5 The maximum time for a sample to wait in buffer. 6 The initial time to wait before retrying a failed request. The time gets doubled for every retry up to the maxbackoff time. 7 The maximum time to wait before retrying a failed request. 8 Set this parameter to true to retry a request after receiving a 429 status code from the remote write storage. Additional resources Prometheus REST API reference for remote write Setting up remote write compatible endpoints (Prometheus documentation) Tuning remote write settings (Prometheus documentation) Understanding secrets 4.4.2. Creating cluster ID labels for metrics You can create cluster ID labels for metrics by adding the write_relabel settings for remote write storage in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. Note When Prometheus scrapes user workload targets that expose a namespace label, the system stores this label as exported_namespace . This behavior ensures that the final namespace label value is equal to the namespace of the target pod. 
You cannot override this default configuration by setting the value of the honorLabels field to true for PodMonitor or ServiceMonitor objects. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). You have configured remote write storage. Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config In the writeRelabelConfigs: section under data/config.yaml/prometheus/remoteWrite , add cluster ID relabel configuration values: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2 1 Add a list of write relabel configurations for metrics that you want to send to the remote endpoint. 2 Substitute the label configuration for the metrics sent to the remote write endpoint. The following sample shows how to forward a metric with the cluster ID label cluster_id : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: "https://remote-write-endpoint.example.com" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3 1 The system initially applies a temporary cluster ID source label named __tmp_openshift_cluster_id__ . This temporary label gets replaced by the cluster ID label name that you specify. 2 Specify the name of the cluster ID label for metrics sent to remote write storage. If you use a label name that already exists for a metric, that value is overwritten with the name of this cluster ID label. For the label name, do not use __tmp_openshift_cluster_id__ . The final relabeling step removes labels that use this name. 3 The replace write relabel action replaces the temporary label with the target label for outgoing metrics. This action is the default and is applied if no action is specified. Save the file to apply the changes. The new configuration is applied automatically. Additional resources Adding cluster ID labels to metrics Obtaining your cluster ID 4.4.3. Setting up metrics collection for user-defined projects You can create a ServiceMonitor resource to scrape metrics from a service endpoint in a user-defined project. This assumes that your application uses a Prometheus client library to expose metrics to the /metrics canonical name. This section describes how to deploy a sample service in a user-defined project and then create a ServiceMonitor resource that defines how that service should be monitored. 4.4.3.1. Deploying a sample service To test monitoring of a service in a user-defined project, you can deploy a sample service. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with administrative permissions for the namespace. Procedure Create a YAML file for the service configuration. In this example, it is called prometheus-example-app.yaml . 
Add the following deployment and service configuration details to the file: apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP This configuration deploys a service named prometheus-example-app in the user-defined ns1 project. This service exposes the custom version metric. Apply the configuration to the cluster: USD oc apply -f prometheus-example-app.yaml It takes some time to deploy the service. You can check that the pod is running: USD oc -n ns1 get pod Example output NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m 4.4.3.2. Specifying how a service is monitored To use the metrics exposed by your service, you must configure OpenShift Container Platform monitoring to scrape metrics from the /metrics endpoint. You can do this using a ServiceMonitor custom resource definition (CRD) that specifies how a service should be monitored, or a PodMonitor CRD that specifies how a pod should be monitored. The former requires a Service object, while the latter does not, allowing Prometheus to directly scrape metrics from the metrics endpoint exposed by a pod. This procedure shows you how to create a ServiceMonitor resource for a service in a user-defined project. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or the monitoring-edit cluster role. You have enabled monitoring for user-defined projects. For this example, you have deployed the prometheus-example-app sample service in the ns1 project. Note The prometheus-example-app sample service does not support TLS authentication. Procedure Create a new YAML configuration file named example-app-service-monitor.yaml . Add a ServiceMonitor resource to the YAML file. The following example creates a service monitor named prometheus-example-monitor to scrape metrics exposed by the prometheus-example-app service in the ns1 namespace: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app 1 Specify a user-defined namespace where your service runs. 2 Specify endpoint ports to be scraped by Prometheus. 3 Configure a selector to match your service based on its metadata labels. Note A ServiceMonitor resource in a user-defined namespace can only discover services in the same namespace. That is, the namespaceSelector field of the ServiceMonitor resource is always ignored. Apply the configuration to the cluster: USD oc apply -f example-app-service-monitor.yaml It takes some time to deploy the ServiceMonitor resource. Verify that the ServiceMonitor resource is running: USD oc -n <namespace> get servicemonitor Example output NAME AGE prometheus-example-monitor 81m 4.4.3.3. 
Example service endpoint authentication settings You can configure authentication for service endpoints for user-defined project monitoring by using ServiceMonitor and PodMonitor custom resource definitions (CRDs). The following samples show different authentication settings for a ServiceMonitor resource. Each sample shows how to configure a corresponding Secret object that contains authentication credentials and other relevant settings. 4.4.3.3.1. Sample YAML authentication with a bearer token The following sample shows bearer token settings for a Secret object named example-bearer-auth in the ns1 namespace: Example bearer token secret apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1 1 Specify an authentication token. The following sample shows bearer token authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-bearer-auth : Example bearer token authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the authentication token in the specified Secret object. 2 The name of the Secret object that contains the authentication credentials. Important Do not use bearerTokenFile to configure bearer token. If you use the bearerTokenFile configuration, the ServiceMonitor resource is rejected. 4.4.3.3.2. Sample YAML for Basic authentication The following sample shows Basic authentication settings for a Secret object named example-basic-auth in the ns1 namespace: Example Basic authentication secret apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2 1 Specify a username for authentication. 2 Specify a password for authentication. The following sample shows Basic authentication settings for a ServiceMonitor CRD. The example uses a Secret object named example-basic-auth : Example Basic authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the username in the specified Secret object. 2 4 The name of the Secret object that contains the Basic authentication. 3 The key that contains the password in the specified Secret object. 4.4.3.3.3. Sample YAML authentication with OAuth 2.0 The following sample shows OAuth 2.0 settings for a Secret object named example-oauth2 in the ns1 namespace: Example OAuth 2.0 secret apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 1 Specify an Oauth 2.0 ID. 2 Specify an Oauth 2.0 secret. The following sample shows OAuth 2.0 authentication settings for a ServiceMonitor CRD. 
The example uses a Secret object named example-oauth2 : Example OAuth 2.0 authentication settings apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app 1 The key that contains the OAuth 2.0 ID in the specified Secret object. 2 4 The name of the Secret object that contains the OAuth 2.0 credentials. 3 The key that contains the OAuth 2.0 secret in the specified Secret object. 5 The URL used to fetch a token with the specified clientId and clientSecret . Additional resources Enabling monitoring for user-defined projects Scrape Prometheus metrics using TLS in ServiceMonitor configuration (Red Hat Customer Portal article) PodMonitor API ServiceMonitor API 4.5. Configuring alerts and notifications for user workload monitoring You can configure a local or external Alertmanager instance to route alerts from Prometheus to endpoint receivers. You can also attach custom labels to all time series and alerts to add useful metadata information. 4.5.1. Configuring external Alertmanager instances The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. You can add external Alertmanager instances to route alerts for user-defined projects. If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add an additionalAlertmanagerConfigs section with configuration details under data/config.yaml/<component> : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 additionalAlertmanagerConfigs: - <alertmanager_specification> 2 2 Substitute <alertmanager_specification> with authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token ( bearerToken ) and client TLS ( tlsConfig ). 1 Substitute <component> for one of two supported external Alertmanager components: prometheus or thanosRuler . 
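For example, a minimal configuration that only points the prometheus component at an external Alertmanager, without any authentication, might look like the following sketch. The host name is a placeholder, and the scheme and timeout fields mirror the Thanos Ruler sample that follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      additionalAlertmanagerConfigs:
      - scheme: https
        timeout: "30s"
        staticConfigs:
        - external-alertmanager.example.com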
The following sample config map configures an additional Alertmanager for Thanos Ruler by using a bearer token with client TLS authentication: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: "30s" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. 4.5.2. Configuring secrets for Alertmanager The OpenShift Container Platform monitoring stack includes Alertmanager, which routes alerts from Prometheus to endpoint receivers. If you need to authenticate with a receiver so that Alertmanager can send alerts to it, you can configure Alertmanager to use a secret that contains authentication credentials for the receiver. For example, you can configure Alertmanager to use a secret to authenticate with an endpoint receiver that requires a certificate issued by a private Certificate Authority (CA). You can also configure Alertmanager to use a secret to authenticate with a receiver that requires a password file for Basic HTTP authentication. In either case, authentication details are contained in the Secret object rather than in the ConfigMap object. 4.5.2.1. Adding a secret to the Alertmanager configuration You can add secrets to the Alertmanager configuration by editing the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project. After you add a secret to the config map, the secret is mounted as a volume at /etc/alertmanager/secrets/<secret_name> within the alertmanager container for the Alertmanager pods. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have created the secret to be configured in Alertmanager in the openshift-user-workload-monitoring project. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a secrets: section under data/config.yaml/alertmanager with the following configuration: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: 1 - <secret_name_1> 2 - <secret_name_2> 1 This section contains the secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object. 2 The name of the Secret object that contains authentication credentials for the receiver. If you add multiple secrets, place each one on a new line. 
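If the secrets do not exist yet, you can create them in the openshift-user-workload-monitoring namespace before referencing them in the config map. The following commands are a sketch only; the key names, file path, and token value are placeholders, and what each secret must contain depends on the receiver you are authenticating with:

oc -n openshift-user-workload-monitoring create secret generic test-secret-basic-auth --from-file=password=<path_to_password_file>
oc -n openshift-user-workload-monitoring create secret generic test-secret-api-token --from-literal=token=<api_token>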
The following sample config map settings configure Alertmanager to use two Secret objects named test-secret-basic-auth and test-secret-api-token : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: - test-secret-basic-auth - test-secret-api-token Save the file to apply the changes. The new configuration is applied automatically. 4.5.3. Attaching additional labels to your time series and alerts You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Define labels you want to add for every metric under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1 1 Substitute <key>: <value> with key-value pairs where <key> is a unique name for the new label and <value> is its value. Warning Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten. Do not use cluster or managed_cluster as key names. Using them can cause issues where you are unable to see data in the developer dashboards. Note In the openshift-user-workload-monitoring project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting externalLabels for prometheus in the user-workload-monitoring-config ConfigMap object will only configure external labels for metrics and not for any rules. For example, to add metadata about the region and environment to all time series and alerts, use the following example: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources Enabling monitoring for user-defined projects 4.5.4. Configuring alert notifications In OpenShift Container Platform, an administrator can enable alert routing for user-defined projects with one of the following methods: Use the default platform Alertmanager instance. Use a separate Alertmanager instance only for user-defined projects. Developers and other users with the alert-routing-edit cluster role can configure custom alert notifications for their user-defined projects by configuring alert receivers. Note Review the following limitations of alert routing for user-defined projects: User-defined alert routing is scoped to the namespace in which the resource is defined. For example, a routing configuration in namespace ns1 only applies to PrometheusRules resources in the same namespace. When a namespace is excluded from user-defined monitoring, AlertmanagerConfig resources in the namespace cease to be part of the Alertmanager configuration. 
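For reference, a cluster administrator typically grants a user the ability to create and edit alert routing in a single project with a command of the following form, where the namespace and user name are placeholders:

oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user>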
Additional resources Understanding alert routing for user-defined projects Sending notifications to external systems PagerDuty (PagerDuty official site) Prometheus Integration Guide (PagerDuty official site) Support version matrix for monitoring components Enabling alert routing for user-defined projects 4.5.4.1. Configuring alert routing for user-defined projects If you are a non-administrator user who has been given the alert-routing-edit cluster role, you can create or edit alert routing for user-defined projects. Prerequisites A cluster administrator has enabled monitoring for user-defined projects. A cluster administrator has enabled alert routing for user-defined projects. You are logged in as a user that has the alert-routing-edit cluster role for the project for which you want to create alert routing. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alert routing. The example in this procedure uses a file called example-app-alert-routing.yaml . Add an AlertmanagerConfig YAML definition to the file. For example: apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post Save the file. Apply the resource to the cluster: USD oc apply -f example-app-alert-routing.yaml The configuration is automatically applied to the Alertmanager pods. 4.5.4.2. Configuring alert routing for user-defined projects with the Alertmanager secret If you have enabled a separate instance of Alertmanager that is dedicated to user-defined alert routing, you can customize where and how the instance sends notifications by editing the alertmanager-user-workload secret in the openshift-user-workload-monitoring namespace. Note All features of a supported version of upstream Alertmanager are also supported in an OpenShift Container Platform Alertmanager configuration. To check all the configuration options of a supported version of upstream Alertmanager, see Alertmanager configuration (Prometheus documentation). Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have enabled a separate instance of Alertmanager for user-defined alert routing. You have installed the OpenShift CLI ( oc ). Procedure Print the currently active Alertmanager configuration into the file alertmanager.yaml : USD oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml Edit the configuration in alertmanager.yaml : route: receiver: Default group_by: - name: Default routes: - matchers: - "service = prometheus-example-monitor" 1 receiver: <receiver> 2 receivers: - name: Default - name: <receiver> <receiver_configuration> 3 1 Specify labels to match your alerts. This example targets all alerts that have the service="prometheus-example-monitor" label. 2 Specify the name of the receiver to use for the alerts group. 3 Specify the receiver configuration. Apply the new configuration in the file: USD oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=- 4.5.4.3. 
Configuring different alert receivers for default platform alerts and user-defined alerts You can configure different alert receivers for default platform alerts and user-defined alerts to ensure the following results: All default platform alerts are sent to a receiver owned by the team in charge of these alerts. All user-defined alerts are sent to another receiver so that the team can focus only on platform alerts. You can achieve this by using the openshift_io_alert_source="platform" label that is added by the Cluster Monitoring Operator to all platform alerts: Use the openshift_io_alert_source="platform" matcher to match default platform alerts. Use the openshift_io_alert_source!="platform" or 'openshift_io_alert_source=""' matcher to match user-defined alerts. Note This configuration does not apply if you have enabled a separate instance of Alertmanager dedicated to user-defined alerts. | [
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1",
"oc -n openshift-user-workload-monitoring get pod",
"NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h",
"oc -n openshift-user-workload-monitoring adm policy add-role-to-user user-workload-monitoring-config-edit <user> --role-namespace openshift-user-workload-monitoring",
"oc describe rolebinding <role_binding_name> -n openshift-user-workload-monitoring",
"oc describe rolebinding user-workload-monitoring-config-edit -n openshift-user-workload-monitoring",
"Name: user-workload-monitoring-config-edit Labels: <none> Annotations: <none> Role: Kind: Role Name: user-workload-monitoring-config-edit Subjects: Kind Name Namespace ---- ---- --------- User user1 1",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | # alertmanagerMain: enableUserAlertmanagerConfig: true 1 #",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: enabled: true 1 enableAlertmanagerConfig: true 2",
"oc -n openshift-user-workload-monitoring get alertmanager",
"NAME VERSION REPLICAS AGE user-workload 0.24.0 2 100s",
"oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user> 1",
"oc adm policy add-role-to-user <role> <user> -n <namespace> --role-namespace <namespace> 1",
"oc adm policy add-cluster-role-to-user <cluster-role> <user> -n <namespace> 1",
"oc label namespace my-project 'openshift.io/user-monitoring=false'",
"oc label namespace my-project 'openshift.io/user-monitoring-'",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false",
"oc -n openshift-user-workload-monitoring get pod",
"No resources found in openshift-user-workload-monitoring project.",
"oc label nodes <node_name> <node_label> 1",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | # <component>: 1 nodeSelector: <node_label_1> 2 <node_label_2> 3 #",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi prometheus: resources: limits: cpu: 500m memory: 3Gi requests: cpu: 200m memory: 500Mi thanosRuler: resources: limits: cpu: 500m memory: 1Gi requests: cpu: 200m memory: 500Mi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedLabelLimit: 500 1 enforcedLabelNameLengthLimit: 50 2 enforcedLabelValueLengthLimit: 600 3",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf \"%.4g\" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 8 expr: (scrape_samples_post_metric_relabeling / (scrape_sample_limit > 0)) > 0.9 9 for: 10m 10 labels: severity: warning 11",
"oc apply -f monitoring-stack-alerts.yaml",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 topologySpreadConstraints: - maxSkew: <n> 2 topologyKey: <key> 3 whenUnsatisfiable: <value> 4 labelSelector: 5 <match_option>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: topologySpreadConstraints: - maxSkew: 1 topologyKey: monitoring whenUnsatisfiable: ScheduleAnyway labelSelector: matchLabels: app.kubernetes.io/name: thanos-ruler",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: storageClassName: my-storage-class resources: requests: storage: 10Gi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: resources: requests: storage: <amount_of_storage> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: resources: requests: storage: 20Gi",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification> 1 retentionSize: <size_specification> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h retentionSize: 10GB",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1",
"oc -n openshift-user-workload-monitoring get pods",
"prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" 1 <endpoint_authentication_credentials> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: - <your_write_relabel_configs> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: [__name__,namespace] regex: '(my_metric_1|my_metric_2);my_namespace' action: keep",
"apiVersion: v1 kind: Secret metadata: name: sigv4-credentials namespace: openshift-user-workload-monitoring stringData: accessKey: <AWS_access_key> 1 secretKey: <AWS_secret_key> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://authorization.example.com/api/write\" sigv4: region: <AWS_region> 1 accessKey: name: sigv4-credentials 2 key: accessKey 3 secretKey: name: sigv4-credentials 4 key: secretKey 5 profile: <AWS_profile_name> 6 roleArn: <AWS_role_arn> 7",
"apiVersion: v1 kind: Secret metadata: name: rw-basic-auth namespace: openshift-user-workload-monitoring stringData: user: <basic_username> 1 password: <basic_password> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://basicauth.example.com/api/write\" basicAuth: username: name: rw-basic-auth 1 key: user 2 password: name: rw-basic-auth 3 key: password 4",
"apiVersion: v1 kind: Secret metadata: name: rw-bearer-auth namespace: openshift-user-workload-monitoring stringData: token: <authentication_token> 1 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | enableUserWorkload: true prometheus: remoteWrite: - url: \"https://authorization.example.com/api/write\" authorization: type: Bearer 1 credentials: name: rw-bearer-auth 2 key: token 3",
"apiVersion: v1 kind: Secret metadata: name: oauth2-credentials namespace: openshift-user-workload-monitoring stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2 type: Opaque",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://test.example.com/api/write\" oauth2: clientId: secret: name: oauth2-credentials 1 key: id 2 clientSecret: name: oauth2-credentials 3 key: secret 4 tokenUrl: https://example.com/oauth2/token 5 scopes: 6 - <scope_1> - <scope_2> endpointParams: 7 param1: <parameter_1> param2: <parameter_2>",
"apiVersion: v1 kind: Secret metadata: name: mtls-bundle namespace: openshift-user-workload-monitoring data: ca.crt: <ca_cert> 1 client.crt: <client_cert> 2 client.key: <client_key> 3 type: tls",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" tlsConfig: ca: secret: name: mtls-bundle 1 key: ca.crt 2 cert: secret: name: mtls-bundle 3 key: client.crt 4 keySecret: name: mtls-bundle 5 key: client.key 6",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> queueConfig: capacity: 10000 1 minShards: 1 2 maxShards: 50 3 maxSamplesPerSend: 2000 4 batchSendDeadline: 5s 5 minBackoff: 30ms 6 maxBackoff: 5s 7 retryOnRateLimit: false 8",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" <endpoint_authentication_credentials> writeRelabelConfigs: 1 - <relabel_config> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write-endpoint.example.com\" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ 1 targetLabel: cluster_id 2 action: replace 3",
"apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.2 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP",
"oc apply -f prometheus-example-app.yaml",
"oc -n ns1 get pod",
"NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 1 spec: endpoints: - interval: 30s port: web 2 scheme: http selector: 3 matchLabels: app: prometheus-example-app",
"oc apply -f example-app-service-monitor.yaml",
"oc -n <namespace> get servicemonitor",
"NAME AGE prometheus-example-monitor 81m",
"apiVersion: v1 kind: Secret metadata: name: example-bearer-auth namespace: ns1 stringData: token: <authentication_token> 1",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - authorization: credentials: key: token 1 name: example-bearer-auth 2 port: web selector: matchLabels: app: prometheus-example-app",
"apiVersion: v1 kind: Secret metadata: name: example-basic-auth namespace: ns1 stringData: user: <basic_username> 1 password: <basic_password> 2",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - basicAuth: username: key: user 1 name: example-basic-auth 2 password: key: password 3 name: example-basic-auth 4 port: web selector: matchLabels: app: prometheus-example-app",
"apiVersion: v1 kind: Secret metadata: name: example-oauth2 namespace: ns1 stringData: id: <oauth2_id> 1 secret: <oauth2_secret> 2",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - oauth2: clientId: secret: key: id 1 name: example-oauth2 2 clientSecret: key: secret 3 name: example-oauth2 4 tokenUrl: https://example.com/oauth2/token 5 port: web selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 additionalAlertmanagerConfigs: - <alertmanager_specification> 2",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: 1 - <secret_name_1> 2 - <secret_name_2>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | alertmanager: secrets: - test-secret-basic-auth - test-secret-api-token",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod",
"apiVersion: monitoring.coreos.com/v1beta1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post",
"oc apply -f example-app-alert-routing.yaml",
"oc -n openshift-user-workload-monitoring get secret alertmanager-user-workload --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml",
"route: receiver: Default group_by: - name: Default routes: - matchers: - \"service = prometheus-example-monitor\" 1 receiver: <receiver> 2 receivers: - name: Default - name: <receiver> <receiver_configuration> 3",
"oc -n openshift-user-workload-monitoring create secret generic alertmanager-user-workload --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-user-workload-monitoring replace secret --filename=-"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/monitoring/configuring-user-workload-monitoring |
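To pull together the remote write options described in the monitoring document above, the following sketch combines an endpoint URL, Basic authentication, a write relabel configuration, and two tuned queue parameters in a single user-workload-monitoring-config config map. The endpoint URL, Secret name, and metric name are illustrative and reuse the placeholders from the samples above:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
      - url: "https://remote-write-endpoint.example.com"
        basicAuth:
          username:
            name: rw-basic-auth
            key: user
          password:
            name: rw-basic-auth
            key: password
        writeRelabelConfigs:
        - sourceLabels: [__name__]
          regex: 'my_metric'
          action: keep
        queueConfig:
          maxSamplesPerSend: 2000
          batchSendDeadline: 5s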
Managing security compliance | Managing security compliance Red Hat Satellite 6.15 Plan and configure SCAP compliance policies, deploy the policies to hosts, and monitor compliance of your hosts Red Hat Satellite Documentation Team [email protected] | [
"hammer scap-content list --location \" My_Location \" --organization \" My_Organization \"",
"hammer scap-content bulk-upload --type default",
"rpm2cpio scap-security-guide-0.1.69-3.el8_6.noarch.rpm | cpio -iv --to-stdout ./usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml > ssg-rhel-8.6-ds.xml",
"hammer scap-content bulk-upload --type directory --directory /usr/share/xml/scap/my_content/ --location \" My_Location \" --organization \" My_Organization \"",
"oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml | grep \"WARNING\" WARNING: Datastream component 'scap_org.open-scap_cref_security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2' points out to the remote 'https://access.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2'. Use '--fetch-remote-resources' option to download it. WARNING: Skipping 'https://access.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2' file which is referenced from datastream",
"oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml | grep \"WARNING\" WARNING: Datastream component 'scap_org.open-scap_cref_security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2' points out to the remote 'https://access.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2'. Use '--fetch-remote-resources' option to download it. WARNING: Skipping 'https://access.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2' file which is referenced from datastream",
"oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml Referenced check files: ssg-rhel8-oval.xml system: http://oval.mitre.org/XMLSchema/oval-definitions-5 ssg-rhel8-ocil.xml system: http://scap.nist.gov/schema/ocil/2 security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2 system: http://oval.mitre.org/XMLSchema/oval-definitions-5",
"curl -o security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2 https://www.redhat.com/security/data/oval/com.redhat.rhsa-RHEL8.xml.bz2",
"curl -o /root/ security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2 http:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repo_Label / security-data-oval-com.redhat.rhsa-RHEL8.xml.bz2",
"failed > 5",
"host ~ prod- AND date > \"Jan 1, 2023\"",
"\"1 hour ago\" AND compliance_policy = date = \"1 hour ago\" AND compliance_policy = rhel7_audit",
"xccdf_rule_passed = xccdf_org.ssgproject.content_rule_firefox_preferences-auto-download_actions",
"xccdf_rule_failed = xccdf_org.ssgproject.content_rule_firefox_preferences-auto-download_actions",
"xccdf_rule_othered = xccdf_org.ssgproject.content_rule_firefox_preferences-auto-download_actions"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html-single/managing_security_compliance/index |
4.151. libvirt-cim | 4.151. libvirt-cim 4.151.1. RHBA-2011:1587 - libvirt-cim bug fix and enhancement update An updated libvirt-cim package that fixes one bug and adds two enhancements is now available for Red Hat Enterprise Linux 6. The libvirt-cim package contains a Common Information Model (CIM) provider based on Common Manageability Programming Interface (CMPI). It supports most libvirt virtualization features and allows management of multiple libvirt-based platforms. Bug Fix BZ# 728245 Prior to this update, libvirt-cim contained several defects in its handling of null variables. As a result, using null variables did not work as expected. This update resolves these defects, and null variables now work as expected. Enhancements BZ# 633337 With this update, libvirt-cim supports libvirt networking Access Control Lists (ACL). BZ# 712257 This update provides read-only access to ensure that remote CIM access cannot modify the system state. This is useful when CIM is used only for monitoring and other software is used for virtualization management. All libvirt-cim users are advised to upgrade to this updated package, which fixes this bug and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libvirt-cim
Distributed tracing | Distributed tracing OpenShift Container Platform 4.11 Distributed tracing installation, usage, and release notes Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/distributed_tracing/index |
Chapter 6. Message delivery | Chapter 6. Message delivery 6.1. Writing to a streamed large message To write to a large message, use the BytesMessage.writeBytes() method. The following example reads bytes from a file and writes them to a message: Example: Writing to a streamed large message BytesMessage message = session.createBytesMessage(); File inputFile = new File(inputFilePath); InputStream inputStream = new FileInputStream(inputFile); int numRead; byte[] buffer = new byte[1024]; while ((numRead = inputStream.read(buffer, 0, buffer.length)) != -1) { message.writeBytes(buffer, 0, numRead); } 6.2. Reading from a streamed large message To read from a large message, use the BytesMessage.readBytes() method. The following example reads bytes from a message and writes them to a file: Example: Reading from a streamed large message BytesMessage message = (BytesMessage) consumer.receive(); File outputFile = new File(outputFilePath); OutputStream outputStream = new FileOutputStream(outputFile); int numRead; byte buffer[] = new byte[1024]; for (int pos = 0; pos < message.getBodyLength(); pos += buffer.length) { numRead = message.readBytes(buffer); outputStream.write(buffer, 0, numRead); } | [
"BytesMessage message = session.createBytesMessage(); File inputFile = new File(inputFilePath); InputStream inputStream = new FileInputStream(inputFile); int numRead; byte[] buffer = new byte[1024]; while ((numRead = inputStream.read(buffer, 0, buffer.length)) != -1) { message.writeBytes(buffer, 0, numRead); }",
"BytesMessage message = (BytesMessage) consumer.receive(); File outputFile = new File(outputFilePath); OutputStream outputStream = new FileOutputStream(outputFile); int numRead; byte buffer[] = new byte[1024]; for (int pos = 0; pos < message.getBodyLength(); pos += buffer.length) { numRead = message.readBytes(buffer); outputStream.write(buffer, 0, numRead); }"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_openwire_jms_client/message_delivery |
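The two streamed large message examples above omit the send step and resource cleanup. The following sketch combines them into one self-contained helper class with try-with-resources; the connection, session, producer, and consumer setup, as well as the file paths, are assumptions and not part of the original example.

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import javax.jms.BytesMessage;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class StreamedLargeMessageSketch {

    // Stream a local file into a BytesMessage and send it.
    static void sendFile(Session session, MessageProducer producer, String inputFilePath) throws Exception {
        BytesMessage message = session.createBytesMessage();
        try (InputStream inputStream = new FileInputStream(new File(inputFilePath))) {
            int numRead;
            byte[] buffer = new byte[1024];
            while ((numRead = inputStream.read(buffer, 0, buffer.length)) != -1) {
                message.writeBytes(buffer, 0, numRead);
            }
        }
        producer.send(message);
    }

    // Receive a BytesMessage and stream its body into a local file.
    static void receiveFile(MessageConsumer consumer, String outputFilePath) throws Exception {
        BytesMessage message = (BytesMessage) consumer.receive();
        try (OutputStream outputStream = new FileOutputStream(new File(outputFilePath))) {
            int numRead;
            byte[] buffer = new byte[1024];
            for (long pos = 0; pos < message.getBodyLength(); pos += buffer.length) {
                numRead = message.readBytes(buffer);
                outputStream.write(buffer, 0, numRead);
            }
        }
    }
}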
Jenkins | Jenkins OpenShift Container Platform 4.18 Jenkins Red Hat OpenShift Documentation Team | [
"podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>",
"oc new-app -e JENKINS_PASSWORD=<password> ocp-tools-4/jenkins-rhel8",
"oc describe serviceaccount jenkins",
"Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp",
"oc describe secret <secret name from above>",
"Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA",
"pluginId:pluginVersion",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\\USD(JENKINS_SECRET) \\USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"oc new-app jenkins-persistent",
"oc new-app jenkins-ephemeral",
"oc describe jenkins-ephemeral",
"kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange",
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange",
"def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml \"\"\" apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\\USD(JENKINS_SECRET)', '\\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true \"\"\" } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container(\"java\") { sh \"mvn --version\" } } } } }",
"docker pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>",
"docker pull registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:<image_tag>",
"podTemplate(label: \"mypod\", cloud: \"openshift\", inheritFrom: \"maven\", podRetention: onFailure(), 1 containers: [ ]) { node(\"mypod\") { } }",
"pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } }",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make\"] workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make check\"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myprojectd-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: [\"make deploy\"] workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: [\"mvn test\"] workingDir: USD(workspaces.source.path)",
"steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh",
"steps: image: python script: | #!/usr/bin/env python3 print(\"hello from python!\")",
"#!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' }",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: \"USD(params.repo-url)\" - name: subdirectory value: \"\" - name: deleteExisting value: \"true\" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"-DskipTests\", \"clean\", \"compile\"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"test\"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"package\"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd \"USD(params.context-path)\" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json",
"oc import-image jenkins-agent-nodejs -n openshift",
"oc import-image jenkins-agent-maven -n openshift",
"oc patch dc jenkins -p '{\"spec\":{\"triggers\":[{\"type\":\"ImageChange\",\"imageChangeParams\":{\"automatic\":true,\"containerNames\":[\"jenkins\"],\"from\":{\"kind\":\"ImageStreamTag\",\"namespace\":\"<namespace>\",\"name\":\"jenkins:<image_stream_tag>\"}}}]}}'"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/jenkins/index |
Chapter 13. Monitoring RHACS | Chapter 13. Monitoring RHACS You can monitor Red Hat Advanced Cluster Security for Kubernetes (RHACS) by using the built-in monitoring for Red Hat OpenShift or by using custom Prometheus monitoring. If you use RHACS with Red Hat OpenShift, OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. RHACS exposes metrics to Red Hat OpenShift monitoring via an encrypted and authenticated endpoint. 13.1. Monitoring with Red Hat OpenShift Monitoring with Red Hat OpenShift is enabled by default. No configuration is required for this default behavior. Important If you have previously configured monitoring with the Prometheus Operator, consider removing your custom ServiceMonitor resources. RHACS ships with a pre-configured ServiceMonitor for Red Hat OpenShift monitoring. Multiple ServiceMonitors might result in duplicated scraping. Monitoring with Red Hat OpenShift is not supported by Scanner. If you want to monitor Scanner, you must first disable the default Red Hat OpenShift monitoring. Then, configure custom Prometheus monitoring. For more information on disabling Red Hat OpenShift monitoring, see "Disabling Red Hat OpenShift monitoring for Central services by using the RHACS Operator" or "Disabling Red Hat OpenShift monitoring for Central services by using Helm". For more information on configuring Prometheus, see "Monitoring with custom Prometheus". 13.2. Monitoring with custom Prometheus Prometheus is an open-source monitoring and alerting platform. You can use it to monitor health and availability of Central and Sensor components of RHACS. When you enable monitoring, RHACS creates a new monitoring service on port number 9090 and a network policy allowing inbound connections to that port. Note This monitoring service exposes an endpoint that is not encrypted by TLS and has no authorization. Use this only when you do not want to use Red Hat OpenShift monitoring. Before you can use custom Prometheus monitoring, if you have Red Hat OpenShift, you must disable the default monitoring. If you are using Kubernetes, you do not need to perform this step. 13.2.1. Disabling Red Hat OpenShift monitoring for Central services by using the RHACS Operator To disable the default monitoring by using the Operator, change the configuration of the Central custom resource as shown in the following example. For more information on configuration options, see "Central configuration options using the Operator" in the "Additional resources" section. Procedure On the OpenShift Container Platform web console, go to the Operators Installed Operators page. Select the RHACS Operator from the list of installed Operators. Click on the Central tab. From the list of Central instances, click on a Central instance for which you want to enable monitoring. Click on the YAML tab and update the YAML configuration as shown in the following example: monitoring: openshift: enabled: false 13.2.2. Disabling Red Hat OpenShift monitoring for Central services by using Helm To disable the default monitoring by using Helm, change the configuration options in the central-services Helm chart. For more information on configuration options, see the documents in the "Additional resources" section. Procedure Update the configuration file with the following value: monitoring.openshift.enabled: false Run the helm upgrade command and specify the configuration files. 13.2.3. 
Monitoring Central services by using the RHACS Operator You can monitor Central services, Central and Scanner, by changing the configuration of the Central custom resource. For more information on configuration options, see "Central configuration options using the Operator" in the "Additional resources" section. Procedure On the OpenShift Container Platform web console, go to the Operators Installed Operators page. Select the Red Hat Advanced Cluster Security for Kubernetes Operator from the list of installed Operators. Click on the Central tab. From the list of Central instances, click on a Central instance for which you want to enable monitoring. Click on the YAML tab and update the YAML configuration: For monitoring Central, enable the central.monitoring.exposeEndpoint configuration option for the Central custom resource. For monitoring Scanner, enable the scanner.monitoring.exposeEndpoint configuration option for the Central custom resource. Click Save . 13.3. Monitoring Central services by using Helm You can monitor Central services, Central and Scanner, by changing the configuration options in the central-services Helm chart. For more information, see "Changing configuration options after deploying the central-services Helm chart" in the "Additional resources" section. Procedure Update the values-public.yaml configuration file with the following values: central.exposeMonitoring: true scanner.exposeMonitoring: true Run the helm upgrade command and specify the configuration files. 13.3.1. Monitoring Central by using Prometheus service monitor If you are using the Prometheus Operator, you can use a service monitor to scrape the metrics from Red Hat Advanced Cluster Security for Kubernetes (RHACS). Note If you are not using the Prometheus Operator, you must edit the Prometheus configuration files to receive the data from RHACS. Procedure Create a new servicemonitor.yaml file with the following content: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-stackrox namespace: stackrox spec: endpoints: - interval: 30s port: monitoring scheme: http selector: matchLabels: app.kubernetes.io/name: <stackrox-service> 1 1 The labels must match the Service resource that you want to monitor. For example, central or scanner . Apply the YAML to the cluster: USD oc apply -f servicemonitor.yaml 1 1 If you use Kubernetes, enter kubectl instead of oc . Verification Run the following command to check the status of the service monitor: USD oc get servicemonitor --namespace stackrox 1 1 If you use Kubernetes, enter kubectl instead of oc . 13.4. Additional resources Central configuration options using the Operator Changing configuration options after deploying the central-services Helm chart Helm documentation | [
"monitoring: openshift: enabled: false",
"monitoring.openshift.enabled: false",
"central.exposeMonitoring: true scanner.exposeMonitoring: true",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-stackrox namespace: stackrox spec: endpoints: - interval: 30s port: monitoring scheme: http selector: matchLabels: app.kubernetes.io/name: <stackrox-service> 1",
"oc apply -f servicemonitor.yaml 1",
"oc get servicemonitor --namespace stackrox 1"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/configuring/monitor-acs |
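The Operator procedure above names the central.monitoring.exposeEndpoint and scanner.monitoring.exposeEndpoint options but does not show them in context. A minimal sketch of a Central custom resource with both endpoints exposed might look as follows; the apiVersion, metadata values, and the Enabled value are assumptions and should be checked against the CRD shipped with your Operator version.

apiVersion: platform.stackrox.io/v1alpha1
kind: Central
metadata:
  name: stackrox-central-services
  namespace: stackrox
spec:
  central:
    monitoring:
      exposeEndpoint: Enabled   # expose Central metrics on the monitoring port
  scanner:
    monitoring:
      exposeEndpoint: Enabled   # expose Scanner metrics as well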
3.10. Creating a vNIC Profile | 3.10. Creating a vNIC Profile This Ruby example creates a vNIC profile. # Find the root of the tree of services: system_service = connection.system_service # Find the network where you want to add the profile. There may be multiple # networks with the same name (in different data centers, for example). # Therefore, you must look up a specific network by name, in a specific data center. dcs_service = system_service.data_centers_service dc = dcs_service.list(search: 'name=mydc').first networks = connection.follow_link(dc.networks) network = networks.detect { |n| n.name == 'mynetwork' } # Create the vNIC profile, with passthrough and port mirroring disabled: profiles_service = system_service.vnic_profiles_service profiles_service.add( OvirtSDK4::VnicProfile.new( name: 'myprofile', pass_through: { mode: OvirtSDK4::VnicPassThroughMode::DISABLED, }, port_mirroring: false, network: { id: network.id } ) ) For more information, see http://www.rubydoc.info/gems/ovirt-engine-sdk/OvirtSDK4/VnicProfilesService:add . | [
"Find the root of the tree of services: system_service = connection.system_service Find the network where you want to add the profile. There may be multiple networks with the same name (in different data centers, for example). Therefore, you must look up a specific network by name, in a specific data center. dcs_service = system_service.data_centers_service dc = dcs_service.list(search: 'name=mydc').first networks = connection.follow_link(dc.networks) network = networks.detect { |n| n.name == 'mynetwork' } Create the vNIC profile, with passthrough and port mirroring disabled: profiles_service = system_service.vnic_profiles_service profiles_service.add( OvirtSDK4::VnicProfile.new( name: 'myprofile', pass_through: { mode: OvirtSDK4::VnicPassThroughMode::DISABLED, }, port_mirroring: false, network: { id: network.id } ) )"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/ruby_sdk_guide/creating_a_vnic_profile |
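A natural follow-up to creating the profile is attaching it to a virtual machine NIC with the same SDK. The sketch below is hypothetical: the virtual machine name, the NIC name, and the lookup of the profile by name are assumptions, not part of the original example.

# Find the virtual machine that should use the new profile:
vms_service = system_service.vms_service
vm = vms_service.list(search: 'name=myvm').first

# Look up the profile created above by name:
profile = profiles_service.list.detect { |p| p.name == 'myprofile' }

# Add a NIC to the virtual machine that references the profile:
nics_service = vms_service.vm_service(vm.id).nics_service
nics_service.add(
  OvirtSDK4::Nic.new(
    name: 'nic1',
    vnic_profile: {
      id: profile.id
    }
  )
)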
20.3.2. Authentication | 20.3.2. Authentication Once the transport layer has constructed a secure tunnel to pass information between the two systems, the server tells the client the different authentication methods supported, such as using a private key-encoded signature or typing a password. The client then tries to authenticate itself to the server using one of these supported methods. SSH servers and clients can be configured to allow different types of authentication, which gives each side the optimal amount of control. The server can decide which encryption methods it supports based on its security model, and the client can choose the order of authentication methods to attempt from the available options. Thanks to the secure nature of the SSH transport layer, even seemingly insecure authentication methods, such as a host and password-based authentication, are safe to use. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-ssh-protocol-authentication |
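Which authentication methods the server offers, and the order in which the client attempts them, are both configurable. A minimal sketch, assuming an OpenSSH server and client:

# /etc/ssh/sshd_config (server side): offer public key authentication
# and refuse password logins.
PubkeyAuthentication yes
PasswordAuthentication no

# ~/.ssh/config (client side): choose the order of authentication
# methods the client tries.
Host *
    PreferredAuthentications publickey,password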
Chapter 6. Encrypting cluster transport | Chapter 6. Encrypting cluster transport Secure cluster transport so that nodes communicate with encrypted messages. You can also configure Data Grid clusters to perform certificate authentication so that only nodes with valid identities can join. 6.1. Securing cluster transport with TLS identities Add SSL/TLS identities to a Data Grid Server security realm and use them to secure cluster transport. Nodes in the Data Grid Server cluster then exchange SSL/TLS certificates to encrypt JGroups messages, including RELAY messages if you configure cross-site replication. Prerequisites Install a Data Grid Server cluster. Procedure Create a TLS keystore that contains a single certificate to identify Data Grid Server. You can also use a PEM file if it contains a private key in PKCS#1 or PKCS#8 format, a certificate, and has an empty password: password="" . Note If the certificate in the keystore is not signed by a public certificate authority (CA) then you must also create a trust store that contains either the signing certificate or the public key. Add the keystore to the USDRHDG_HOME/server/conf directory. Add the keystore to a new security realm in your Data Grid Server configuration. Important You should create dedicated keystores and security realms so that Data Grid Server endpoints do not use the same security realm as cluster transport. <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="cluster-transport"> <server-identities> <ssl> <!-- Adds a keystore that contains a certificate that provides SSL/TLS identity to encrypt cluster transport. --> <keystore path="server.pfx" relative-to="infinispan.server.config.path" password="secret" alias="server"/> </ssl> </server-identities> </security-realm> </security-realms> </security> </server> Configure cluster transport to use the security realm by specifying the name of the security realm with the server:security-realm attribute. <infinispan> <cache-container> <transport server:security-realm="cluster-transport"/> </cache-container> </infinispan> Verification When you start Data Grid Server, the following log message indicates that the cluster is using the security realm for cluster transport: 6.2. JGroups encryption protocols To secure cluster traffic, you can configure Data Grid nodes to encrypt JGroups message payloads with secret keys. Data Grid nodes can obtain secret keys from either: The coordinator node (asymmetric encryption). A shared keystore (symmetric encryption). Retrieving secret keys from coordinator nodes You configure asymmetric encryption by adding the ASYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration. This allows Data Grid clusters to generate and distribute secret keys. Important When using asymmetric encryption, you should also provide keystores so that nodes can perform certificate authentication and securely exchange secret keys. This protects your cluster from man-in-the-middle (MitM) attacks. Asymmetric encryption secures cluster traffic as follows: The first node in the Data Grid cluster, the coordinator node, generates a secret key. A joining node performs certificate authentication with the coordinator to mutually verify identity. The joining node requests the secret key from the coordinator node. That request includes the public key for the joining node. The coordinator node encrypts the secret key with the public key and returns it to the joining node. The joining node decrypts and installs the secret key. 
The node joins the cluster, encrypting and decrypting messages with the secret key. Retrieving secret keys from shared keystores You configure symmetric encryption by adding the SYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration. This allows Data Grid clusters to obtain secret keys from keystores that you provide. Nodes install the secret key from a keystore on the Data Grid classpath at startup. Nodes join the cluster, encrypting and decrypting messages with the secret key. Comparison of asymmetric and symmetric encryption ASYM_ENCRYPT with certificate authentication provides an additional layer of encryption in comparison with SYM_ENCRYPT . You provide keystores that encrypt the requests to coordinator nodes for the secret key. Data Grid automatically generates that secret key and handles cluster traffic, while letting you specify when to generate secret keys. For example, you can configure clusters to generate new secret keys when nodes leave. This ensures that nodes cannot bypass certificate authentication and join with old keys. SYM_ENCRYPT , on the other hand, is faster than ASYM_ENCRYPT because nodes do not need to exchange keys with the cluster coordinator. A potential drawback to SYM_ENCRYPT is that there is no configuration to automatically generate new secret keys when cluster membership changes. Users are responsible for generating and distributing the secret keys that nodes use to encrypt cluster traffic. 6.3. Securing cluster transport with asymmetric encryption Configure Data Grid clusters to generate and distribute secret keys that encrypt JGroups messages. Procedure Create a keystore with certificate chains that enables Data Grid to verify node identity. Place the keystore on the classpath for each node in the cluster. For Data Grid Server, you put the keystore in the USDRHDG_HOME directory. Add the SSL_KEY_EXCHANGE and ASYM_ENCRYPT protocols to a JGroups stack in your Data Grid configuration, as in the following example: <infinispan> <jgroups> <!-- Creates a secure JGroups stack named "encrypt-tcp" that extends the default TCP stack. --> <stack name="encrypt-tcp" extends="tcp"> <!-- Adds a keystore that nodes use to perform certificate authentication. --> <!-- Uses the stack.combine and stack.position attributes to insert SSL_KEY_EXCHANGE into the default TCP stack after VERIFY_SUSPECT2. --> <SSL_KEY_EXCHANGE keystore_name="mykeystore.jks" keystore_password="changeit" stack.combine="INSERT_AFTER" stack.position="VERIFY_SUSPECT2"/> <!-- Configures ASYM_ENCRYPT --> <!-- Uses the stack.combine and stack.position attributes to insert ASYM_ENCRYPT into the default TCP stack before pbcast.NAKACK2. --> <!-- The use_external_key_exchange = "true" attribute configures nodes to use the `SSL_KEY_EXCHANGE` protocol for certificate authentication. --> <ASYM_ENCRYPT asym_keylength="2048" asym_algorithm="RSA" change_key_on_coord_leave = "false" change_key_on_leave = "false" use_external_key_exchange = "true" stack.combine="INSERT_BEFORE" stack.position="pbcast.NAKACK2"/> </stack> </jgroups> <cache-container name="default" statistics="true"> <!-- Configures the cluster to use the JGroups stack.
--> <transport cluster="USD{infinispan.cluster.name}" stack="encrypt-tcp" node-name="USD{infinispan.node.name:}"/> </cache-container> </infinispan> Verification When you start your Data Grid cluster, the following log message indicates that the cluster is using the secure JGroups stack: Data Grid nodes can join the cluster only if they use ASYM_ENCRYPT and can obtain the secret key from the coordinator node. Otherwise the following message is written to Data Grid logs: Additional resources JGroups 4 Manual JGroups 4.2 Schema 6.4. Securing cluster transport with symmetric encryption Configure Data Grid clusters to encrypt JGroups messages with secret keys from keystores that you provide. Procedure Create a keystore that contains a secret key. Place the keystore on the classpath for each node in the cluster. For Data Grid Server, you put the keystore in the USDRHDG_HOME directory. Add the SYM_ENCRYPT protocol to a JGroups stack in your Data Grid configuration. <infinispan> <jgroups> <!-- Creates a secure JGroups stack named "encrypt-tcp" that extends the default TCP stack. --> <stack name="encrypt-tcp" extends="tcp"> <!-- Adds a keystore from which nodes obtain secret keys. --> <!-- Uses the stack.combine and stack.position attributes to insert SYM_ENCRYPT into the default TCP stack after VERIFY_SUSPECT2. --> <SYM_ENCRYPT keystore_name="myKeystore.p12" keystore_type="PKCS12" store_password="changeit" key_password="changeit" alias="myKey" stack.combine="INSERT_AFTER" stack.position="VERIFY_SUSPECT2"/> </stack> </jgroups> <cache-container name="default" statistics="true"> <!-- Configures the cluster to use the JGroups stack. --> <transport cluster="USD{infinispan.cluster.name}" stack="encrypt-tcp" node-name="USD{infinispan.node.name:}"/> </cache-container> </infinispan> Verification When you start your Data Grid cluster, the following log message indicates that the cluster is using the secure JGroups stack: Data Grid nodes can join the cluster only if they use SYM_ENCRYPT and can obtain the secret key from the shared keystore. Otherwise the following message is written to Data Grid logs: Additional resources JGroups 4 Manual JGroups 4.2 Schema | [
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"cluster-transport\"> <server-identities> <ssl> <!-- Adds a keystore that contains a certificate that provides SSL/TLS identity to encrypt cluster transport. --> <keystore path=\"server.pfx\" relative-to=\"infinispan.server.config.path\" password=\"secret\" alias=\"server\"/> </ssl> </server-identities> </security-realm> </security-realms> </security> </server>",
"<infinispan> <cache-container> <transport server:security-realm=\"cluster-transport\"/> </cache-container> </infinispan>",
"[org.infinispan.SERVER] ISPN080060: SSL Transport using realm <security_realm_name>",
"<infinispan> <jgroups> <!-- Creates a secure JGroups stack named \"encrypt-tcp\" that extends the default TCP stack. --> <stack name=\"encrypt-tcp\" extends=\"tcp\"> <!-- Adds a keystore that nodes use to perform certificate authentication. --> <!-- Uses the stack.combine and stack.position attributes to insert SSL_KEY_EXCHANGE into the default TCP stack after VERIFY_SUSPECT2. --> <SSL_KEY_EXCHANGE keystore_name=\"mykeystore.jks\" keystore_password=\"changeit\" stack.combine=\"INSERT_AFTER\" stack.position=\"VERIFY_SUSPECT2\"/> <!-- Configures ASYM_ENCRYPT --> <!-- Uses the stack.combine and stack.position attributes to insert ASYM_ENCRYPT into the default TCP stack before pbcast.NAKACK2. --> <!-- The use_external_key_exchange = \"true\" attribute configures nodes to use the `SSL_KEY_EXCHANGE` protocol for certificate authentication. --> <ASYM_ENCRYPT asym_keylength=\"2048\" asym_algorithm=\"RSA\" change_key_on_coord_leave = \"false\" change_key_on_leave = \"false\" use_external_key_exchange = \"true\" stack.combine=\"INSERT_BEFORE\" stack.position=\"pbcast.NAKACK2\"/> </stack> </jgroups> <cache-container name=\"default\" statistics=\"true\"> <!-- Configures the cluster to use the JGroups stack. --> <transport cluster=\"USD{infinispan.cluster.name}\" stack=\"encrypt-tcp\" node-name=\"USD{infinispan.node.name:}\"/> </cache-container> </infinispan>",
"[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>",
"[org.jgroups.protocols.ASYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it",
"<infinispan> <jgroups> <!-- Creates a secure JGroups stack named \"encrypt-tcp\" that extends the default TCP stack. --> <stack name=\"encrypt-tcp\" extends=\"tcp\"> <!-- Adds a keystore from which nodes obtain secret keys. --> <!-- Uses the stack.combine and stack.position attributes to insert SYM_ENCRYPT into the default TCP stack after VERIFY_SUSPECT2. --> <SYM_ENCRYPT keystore_name=\"myKeystore.p12\" keystore_type=\"PKCS12\" store_password=\"changeit\" key_password=\"changeit\" alias=\"myKey\" stack.combine=\"INSERT_AFTER\" stack.position=\"VERIFY_SUSPECT2\"/> </stack> </jgroups> <cache-container name=\"default\" statistics=\"true\"> <!-- Configures the cluster to use the JGroups stack. --> <transport cluster=\"USD{infinispan.cluster.name}\" stack=\"encrypt-tcp\" node-name=\"USD{infinispan.node.name:}\"/> </cache-container> </infinispan>",
"[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>",
"[org.jgroups.protocols.SYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_security_guide/secure-cluster-transport |
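The procedures above assume that the keystores already exist. The following keytool sketches show one way to create them with names and passwords matching the configuration examples; the distinguished names, key sizes, and validity period are assumptions, and older JDKs may require -storetype JCEKS instead of PKCS12 for the secret-key store.

# TLS identity keystore for the cluster-transport security realm (server.pfx):
keytool -genkeypair -alias server -keyalg RSA -keysize 2048 -validity 365 \
    -dname "CN=server.example.com" \
    -keystore server.pfx -storetype PKCS12 -storepass secret

# Certificate keystore referenced by SSL_KEY_EXCHANGE (mykeystore.jks):
keytool -genkeypair -alias node -keyalg RSA -keysize 2048 -validity 365 \
    -dname "CN=node1.example.com" \
    -keystore mykeystore.jks -storepass changeit

# Secret-key keystore referenced by SYM_ENCRYPT (myKeystore.p12):
keytool -genseckey -alias myKey -keyalg AES -keysize 256 \
    -keystore myKeystore.p12 -storetype PKCS12 -storepass changeit -keypass changeit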
Chapter 31. Execution error management | Chapter 31. Execution error management When an execution error occurs for a business process, the process stops and reverts to the most recent stable state (the closest safe point) and continues its execution. If an error of any kind is not handled by the process the entire transaction rolls back, leaving the process instance in the wait state. Execution errors are visible to the caller that sent the request to the process engine. Users with process administrator ( process-admin ) or administrator ( admin ) roles can access execution error messages in Business Central. Execution error messaging provides the following primary benefits: Better traceability Visibility in case of critical processes Reporting and analytics based on error situations External system error handling and compensation 31.1. Viewing process execution errors in Business Central You can view process errors in two locations in Business Central: Menu Manage Process Instances Menu Manage Execution Errors In the Manage Process Instances page, the Errors column displays the number of errors, if any, for the current process instance. Prerequisites An error has occurred while running a process in Business Central. Procedure In Business Central, go to Menu Manage Process Instances and hover over the number shown in the Errors column. Click the number of errors shown in the Errors column to navigate to the Manage Execution Errors page. The Manage Execution Errors page shows a list of errors for all process instances. 31.2. Managing execution errors By definition, every process error that is detected and stored is unacknowledged and must be handled by someone or something (in case of automatic error recovery). You can view a filtered list of errors that were or were not acknowledged. Acknowledging an error saves the user information and time stamp for traceability. Procedure In Business Central, select Menu Manage Execution Errors . Select an error from the list to open the Details tab. The Details tab displays information about the error or errors. Click the Acknowledge button to acknowledge the error. You can view acknowledged errors later by selecting Yes on the Acknowledged filter in the Manage Execution Errors page. If the error is related to a task, a Go to Task button is displayed. Optional: Click the Go to Task button, if applicable, to view the associated job information in the Manage Tasks page. In the Manage Tasks page, you can restart, reschedule, or retry the corresponding task. 31.3. Error filtering For execution errors in the Manage Execution Errors screen, you can use the Filters panel to display only the errors that fit chosen criteria. Prerequisites The Manage Execution Errors screen is open. Procedure Make changes in the Filters panel on the left side of the screen as necessary: Figure 31.1. Filtering Errors - Default View Type Filter execution errors by type. You can select multiple type filters. If you deselect all types, all errors are displayed, regardless of type. The following execution error types are available: DB Task Process Job Process Instance Id Filter by process instance ID. Input: Numeric Job Id Filter by job ID. The job id is created automatically when the job is created. Input: Numeric Id Filter by process instance ID. Input: Numeric Acknowledged Filter errors that have been or have not been acknowledged. Error Date Filtering by the date or time that the error occurred. 
This filter has the following quick filter options: Last Hour Today Last 24 Hours Last 7 Days Last 30 Days Custom Select the Custom option to open a calendar tool for selecting a date and time range. Figure 31.2. Search by Date 31.4. Auto-acknowledging execution errors By default, execution errors are unacknowledged when they occur. To avoid the need to acknowledge every execution error manually, you can configure jobs to auto-acknowledge some or all execution errors. Note If you configure an auto-acknowledge job, the job runs every day by default. To auto-acknowledge execution errors only once, set the SingleRun parameter to true . Procedure In Business Central, select Menu Manage Jobs . In the top right of the screen, click New Job . Enter any identifier for the job in the Business Key field. In the Type field, enter the type of the auto-acknowledge job: org.jbpm.executor.commands.error.JobAutoAckErrorCommand : Acknowledge all execution errors of type Job where the job to which the error relates is now cancelled, completed, or rescheduled for another execution. org.jbpm.executor.commands.error.TaskAutoAckErrorCommand : Acknowledge all execution errors of type Task where the task to which the error relates is in an exit state (completed, failed, exited, obsolete). org.jbpm.executor.commands.error.ProcessAutoAckErrorCommand : Acknowledge all execution errors of any type where the process instance from which the error originates is already finished (completed or aborted), or the task from which the error originates is already finished. Select a Due On time for the job to be completed: To run the job immediately, select the Run now option. To run the job at a specific time, select Run later . A date and time field appears next to the Run later option. Click the field to open the calendar and schedule a specific time and date for the job. Figure 31.3. Example of scheduling an auto-acknowledge job By default, after the initial run the job runs once every day . To change this setting, complete the following steps: Click the Advanced tab. Click the Add Parameter button. Enter the configuration parameter you want to apply to the job: If you want the job to run only once, add the SingleRun parameter with the value of true . If you want the job to run periodically, add the NextRun parameter with the value of a valid time expression, such as 2h , 5d , 1m , and so on. Optional: To set a custom entity manager factory name, enter the EmfName parameter. Figure 31.4. Example of setting parameters for an auto-acknowledge job Click Create to create the job and return to the Manage Jobs page. 31.5. Cleaning up the error list The process engine stores execution errors in the ExecutionErrorInfo database table. If you want to clean up the table and remove errors permanently, you can schedule a job with the org.jbpm.executor.commands.ExecutionErrorCleanupCommand command. The command deletes execution errors that are associated with completed or aborted process instances. Procedure In Business Central, select Menu Manage Jobs . In the top right of the screen, click New Job . Type any identifier for the job into the Business Key field. In the Type field, enter org.jbpm.executor.commands.ExecutionErrorCleanupCommand . Select a Due On time for the job to be completed: To run the job immediately, select the Run now option. To run the job at a specific time, select Run later . A date and time field appears next to the Run later option. Click the field to open the calendar and schedule a specific time and date for the job.
Click the Advanced tab. Add any of the following parameters as necessary: DateFormat : The format for dates in parameters. If not set, yyyy-MM-dd is used, as in the pattern of the SimpleDateFormat class. EmfName : Name of the custom entity manager factory to be used for queries. SingleRun : Schedules the job for a single execution. If set to true , the job runs once and is not scheduled for repeated execution. NextRun : Schedules the job for repeated execution in a time period. The value must be a valid time expression, for example, 1d , 5h , 10m . OlderThan : Deletes only errors that are older than a set date. The value must be a date. OlderThanPeriod : Deletes only errors that are older than a given time period, compared to the current time. The value must be a valid time expression, for example, 1d , 5h , 10m . ForProcess : Deletes only errors that are related to a process definition. The value must be the identifier of the process definition. ForProcessInstance : Deletes only errors that are related to a process instance. The value must be the identifier of the process instance. ForDeployment : Deletes only errors that are related to a deployment identifier. The value must be the deployment identifier. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/execution-error-management-con_managing-and-monitoring-processes
function::user_short | function::user_short Name function::user_short - Retrieves a short value stored in user space Synopsis Arguments addr the user space address to retrieve the short from Description Returns the short value from a given user space address. Returns zero when user space data is not accessible. | [
"user_short:long(addr:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-short |
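A short script often makes a tapset function easier to pick up. The following sketch probes a hypothetical user-space program ./reader whose handle() function takes a pointer to a short, and prints that value with user_short; the program path and parameter name are assumptions, not part of the reference entry.

probe process("./reader").function("handle") {
  # $value is the probed function's pointer parameter; user_short reads the
  # two bytes it points to from user space, or returns zero if the address
  # is not accessible.
  printf("handle() called with value %d\n", user_short($value))
}

# Stop after 30 seconds so the example script terminates on its own.
probe timer.s(30) { exit() }

Run it with stap -v example.stp while the target program is executing.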
Chapter 4. Bug fixes | Chapter 4. Bug fixes This section describes bugs with significant impact on users that were fixed in this release of Red Hat Ceph Storage. In addition, the section includes descriptions of fixed known issues found in versions. 4.1. The Cephadm utility Using the --name NODE flag with the cephadm shell to start a stopped OSD no longer returns the wrong image container Previously, in some cases, when using the cephadm shell --name NODE command, the command would start the container with the wrong version of the tools. This would occur when a user has a newer ceph container image on the host than the one that their OSDs are using. With this fix, Cephadm determines the container image for stopped daemons when using the cephadm shell command with the --name flag. Users no longer have any issues with the --name flag, and the command works as expected. Bugzilla:2258542 4.2. The Ceph Ansible utility Playbooks now remove the RHCS version repositories matching the running RHEL version Previously, playbooks would try to remove Red Hat Ceph Storage 4 repositories from RHEL 9 even though they do not exist on RHEL 9. This would cause the playbooks to fail. With this fix, playbooks remove existing Red Hat Ceph Storage version repositories matching the running RHEL version and the correct repositories are removed. Bugzilla:2258940 4.3. NFS Ganesha All memory consumed by the configuration reload process is now released Previously, reload exports would not release all the memory consumed by the configuration reload process causing the memory footprint to increase. With this fix, all memory consumed by the configuration reload process is released resulting in reduced memory footprint. Bugzilla:2265322 4.4. Ceph Dashboard Users can create volumes with multiple hosts in the Ceph dashboard With this fix, users can now create volumes with multiple hosts in the Ceph dashboard. Bugzilla:2241056 Unset subvolume size is no longer set as 'infinite' Previously, the unset subvolume size was set to 'infinite', resulting in the failure of the update. With this fix, the code that sets the size to 'infinite' is removed and the update works as expected. Bugzilla:2251192 Missing options are added in the kernel mount command Previously, a few options were missing in the kernel mount command for attaching the filesystem causing the command to not work as intended. With this fix, the missing options are added and the kernel mount command works as expected. Bugzilla:2266256 Ceph dashboard now supports both NFS v3 and v4-enabled export management Previously, the Ceph dashboard only supported the NFSv4-enabled exports management and not the NFSv3-enabled exports. Due to this, any management done for exports via CLI for NFSv3 was corrupted. With this fix, support for NFSv3-based exports management is enabled by having an additional checkbox. The Ceph dashboard now supports both v3 and v4-enabled export management. Bugzilla:2267814 Access/secret keys are now not compulsory while creating a zone Previously, access/secret keys were compulsory when creating a zone in Ceph Object Gateway multi-site. Due to this, users had to first set the non-system user's keys in the zone and later update with the system user's keys. With this fix, access/secret keys are not compulsory while creating a zone. Bugzilla:2275463 Importing multi-site configuration no longer throws an error on submitting the form Previously, the multi-site period information did not contain the 'realm' name. 
Due to this, importing the multi-site configuration threw an error on submitting the form. With this fix, the check for fetching 'realm' name from period information is removed and the token import works as expected. Bugzilla:2275861 The Ceph Object Gateway metrics label names are aligned with the Prometheus label naming format and they are now visible in Prometheus Previously, the metrics label names were not aligned with the Prometheus label naming format, causing the Ceph Object Gateway metrics to not be visible in Prometheus. With this fix, the hyphen (-) is replaced with an underscore (_) in Ceph Object Gateway metrics label names, wherever applicable and all Ceph Object Gateway metrics are now visible in Prometheus. Bugzilla:2276340 Full names can now include dot in Ceph dashboard Previously, in the Ceph dashboard, it was not possible to create or modify a full name with a dot in it due to incorrect validation. With this fix, validation is properly adapted to include a dot in full names in Ceph dashboard. Bugzilla:2249812 4.5. Ceph File System MDS metadata with FSMap changes are now added in batches to ensure consistency Previously, monitors would sometimes lose track of MDS metadata during upgrades and cancelled PAXOS transactions resulting in MDS metadata being no longer available. With this fix, MDS metadata with FSMap changes are added in batches to ensure consistency. The ceph mds metadata command now functions as intended across upgrades. Bugzilla:2144472 The ENOTEMPTY output is detected and the message is displayed correctly Previously, when running the subvolume group rm command, the ENOTEMPTY output was not detected in the volume's plugin causing a generalized error message instead of a specific message. With this fix, the ENOTEMPTY output is detected for the subvolume group rm command when there is subvolume present inside the subvolumegroup and the message is displayed correctly. Bugzilla:2240138 MDS now queues the client replay request automatically as part of request cleanup Previously, sometimes, MDS would not queue the client request for replay in the up:client-replay state causing the MDS to hang. With this fix, the client replay request is queued automatically as part of request cleanup and MDS proceeds with failover recovery normally. Bugzilla:2243105 cephfs-mirroring overall performance is improved With this fix, the incremental snapshot sync is corrected, which improves the overall performance of cephfs-mirroring. Bugzilla:2248639 The loner member is set to true Previously, for a file lock in the LOCK_EXCL_XSYN state, the non-loner clients would be issued empty caps. However, since the loner of this state is set to false , it could make the locker to issue the Fcb caps to them, which is incorrect. This would cause some client requests to incorrectly revoke some caps and infinitely wait and cause slow requests. With this fix, the loner member is set to true and as a result the corresponding request is not blocked. Bugzilla:2251258 snap-schedule repeat and retention specification for monthly snapshots is changed from m to M Previously, the snap-schedule repeat specification and retention specification for monthly snapshots was not consistent with other Ceph components. With this fix, the specifications are changed from m to M and it is now consistent with other Ceph components. 
For example, to retain 5 monthly snapshots, you need to issue the following command: Bugzilla:2264348 ceph-mds no longer crashes when some inodes are replicated in multi-mds cluster Previously, due to incorrect lock assertion in ceph-mds, ceph-mds would crash when some inodes were replicated in a multi-mds cluster. With this fix, the lock state in the assertion is validated and no crash is observed. Bugzilla:2265415 Missing fields, such as date , client_count , filters are added to the --dump output With this fix, missing fields, such as date , client_count , filters are added to the --dump output. Bugzilla:2272468 MDS no longer fails with the assert function during recovery Previously, MDS would sometimes report metadata damage incorrectly when recovering a failed rank and thus, fail with an assert function. With this fix, the startup procedure is corrected and the MDS does not fail with the assert function during recovery. Bugzilla:2272979 The target mon_host details are removed from the peer List and mirror daemon status Previously, the snapshot mirror peer-list showed more information than just the peer list. This output caused confusion if there should be only one MON IP or all the MON host IP's should be displayed. With this fix, mon_host is removed from the fs snapshot mirror peer_list command and the target mon_host details are removed from the peer List and mirror daemon status. Bugzilla:2277143 The target mon_host details are removed from the peer List and mirror daemon status Previously, a regression was introduced by the quiesce protocol code. When killing the client requests, it would just skip choosing the new batch head for the batch operations. This caused the stale batch head requests to stay in the MDS cache forever and then be treated as slow requests. With this fix, choose a new batch head when killing requests and no slow requests are caused by the batch operations. Bugzilla:2277944 File system upgrade happens even when no MDS is up Previously, monitors would not allow an MDS to upgrade a file system when all MDS were down. Due to this, upgrades would fail when the fail_fs setting was set to 'true'. With this fix, monitors allow the upgrades to happen when no MDS is up. Bugzilla:2244417 4.6. Ceph Object Gateway Auto-generated internal topics are no longer shown in the admin topic list command Previously, auto-generated internal topics were exposed to the user via the topic list command due to which the users could see a lot more topics than what they had created. With this fix, internal, auto-generated topics are not shown in the admin topic list command and users now see only the expected list of topics. Bugzilla:1954461 The deprecated bucket name field is no longer shown in the topic list command Previously, in case of pull mode notifications ( pubsub ), the notifications were stored in a bucket. However, despite this mode being deprecated, an empty bucket name field is still shown in the topic list command. With this fix, the empty bucket name field is removed. Bugzilla:1954463 Notifications are now sent on lifecycle transition Previously, logic to dispatch on transition (as distinct from expiration) was missed. Due to this, notifications were not seen on transition. With this fix, new logic is added and notifications are now sent on lifecycle transition. Bugzilla:2166576 RGWCopyObjRequest is fixed and rename operations work as expected Previously, incorrect initialization of RGWCopyObjRequest , after zipper conversion, broke the rename operation. 
Due to this, many rgw_rename() scenarios failed to copy the source object, and due to a secondary issue, also deleted the source even though the copy had failed. With this fix, RGWCopyObjRequest is corrected and several unit test cases are added for different renaming operations. Bugzilla:2217499 Ceph Object Gateway can no longer be illegally accessed Previously, a variable representing a Ceph Object Gateway role was being accessed before it was initialized, resulting in a segfault. With this fix, operations are reordered and there is no illegal access. The roles are enforced as required. Bugzilla:2252048 An error message is now shown per wrong CSV object structure Previously, a CSV file with unclosed double-quotes would cause an assert, followed by a crash. With this fix, an error message is introduced which pops up per wrong CSV object structure. Bugzilla:2252396 Users no longer encounter 'user not found' error when querying user-related information in the Ceph dashboard Previously, in the Ceph dashboard, end users could not retrieve the user-related information from the Ceph Object Gateway due to the presence of a namespace in the full user_id which the dashboard would not identify, resulting in encountering the "user not found" error. With this fix, a fully constructed user ID, which includes tenant , namespace , and user_id is returned as well as each field is returned individually when a GET request is sent to admin ops for fetching user information. End users can now retrieve the correct user_id , which can be used to further fetch other user-related information from Ceph Object Gateway. Bugzilla:2255255 Ceph Object gateway now passes requests with well-formed payloads of the new stream encoding forms Previously, Ceph Object gateway would not recognize STREAMING-AWS4-HMAC-SHA256-PAYLOAD and STREAMING-UNSIGNED-PAYLOAD-TRAILER encoding forms resulting in request failures. With this fix, the logic to recognize, parse, and wherever applicable, verify new trailing request signatures provided for the new encoding forms is implemented. The Ceph Object gateway now passes requests with well-formed payloads of the new stream encoding forms. Bugzilla:2256967 The check stat calculation for radosgw admin bucket and bucket reshard stat calculation are now correct Previously, due to a code change, radosgw-admin bucket check stat calculation and bucket reshard stat calculation were incorrect when there were objects that transitioned from unversioned to versioned. With this fix, the calculations are corrected and incorrect bucket stat outputs are no longer generated. Bugzilla:2257978 Tail objects are no longer lost during a multipart upload failure Previously, during a multipart upload, if an upload of a part failed due to scenarios, such as a time-out, and the upload was restarted, the cleaning up of the first attempt would remove tail objects from the subsequent attempt. Due to this, the resulting Ceph Object Gateway multipart object would be damaged as some tail objects would be missing. It would respond to a HEAD request but fail during a GET request. With this fix, the code cleans up the first attempt correctly. The resulting Ceph Object Gateway multipart object is no longer damaged and can be read by clients. Bugzilla:2262650 ETag values in the CompleteMultipartUpload and its notifications are now present Previously, changes related to notifications caused the object handle corresponding to the completing multipart upload to not contain the resulting ETag. 
Due to this, ETags were not present for completing multipart uploads as the result of CompleteMultipartUpload and its notifications. (The correct ETag was computed and stored, so subsequent operations contained a correct ETag result.) With this fix, CompleteMultipartUpload refreshes the object and also prints it as expected. ETag values in the CompleteMultipartUpload and its notifications are present. Bugzilla:2266579 Listing a container (bucket) via swift no longer causes a Ceph Object Gateway crash Previously, a swift-object-storage call path was missing a call to update an object handle with its corresponding bucket (zipper backport issue). Due to this, listing a container (bucket) via swift would cause a Ceph Object Gateway crash when an S3 website was configured for the same bucket. With this fix, the required zipper logic is added and the crash no longer occurs. Bugzilla:2269038 Processing a lifecycle on a bucket with no lifecycle policy no longer crashes Previously, attempting to manually process a lifecycle on a bucket with no lifecycle policy induced a null pointer dereference, causing the radosgw-admin program to crash. With this fix, a check for a null bucket handle is made before operating on the handle to avoid the crash. Bugzilla:2270402 Zone details for a datapool can now be modified The rgw::zone_create() function initializes the default placement target and pool name on zone creation. This function was also previously used for radosgw-admin zone set with exclusive=false . However, zone set does not allow the STANDARD storage class's data_pool to be modified. With this fix, the default-placement target is not overwritten if it already exists, and the zone details for a datapool can be modified as expected. Bugzilla:2254480 Modulo operations on float numbers now return correct results Previously, modulo operations on float numbers returned wrong results. With this fix, the SQL engine is enhanced to handle modulo operations on floats and return correct results. Bugzilla:2254125 SQL statements correctly return results for case-insensitive boolean expressions Previously, SQL statements that contained a boolean expression with capital letters in parts of the statement resulted in wrong interpretation and wrong results. With this fix, the interpretation of a statement is case-insensitive, and the correct results are returned for any case. Bugzilla:2254122 SQL engine returns the correct NULL value Previously, SQL statements that contained a cast into a type from NULL returned a wrong result instead of returning NULL. With this fix, the SQL engine identifies a cast from NULL and returns NULL. Bugzilla:2254121 ETag values are now present in CompleteMultipartUpload and its notifications Previously, the changes related to notifications caused the object handle, corresponding to the completing multipart upload, to not contain the resulting ETag. As a result, ETags were not present for CompleteMultipartUpload and its notifications. (The correct ETag was computed and stored, so subsequent operations contained a correct ETag result.) With this fix, CompleteMultipartUpload refreshes the object and also prints it as expected. ETag values are now present in the CompleteMultipartUpload and its notifications.
Bugzilla:2249744 Sending workloads with an embedded slash (/) in object names to cloud-sync no longer causes sync failures Previously, incorrect URL-escaping of object paths during cloud sync caused sync failures when workloads contained objects with an embedded slash (/) in the names, that is, when virtual directory paths were used. With this fix, the incorrect escaping is corrected and workloads with an embedded slash (/) in object names can be sent to cloud-sync as expected. Bugzilla:2249068 SQL statements containing a boolean expression return boolean types Previously, SQL statements containing a boolean expression (a projection) would return a string type instead of a boolean type. With this fix, the engine identifies a string as a boolean expression, according to the statement syntax, and the engine successfully returns a boolean type (true/false). Bugzilla:2254582 The work scheduler now takes the date into account in the should_work function Previously, the logic used in the should_work function, which decides whether the lifecycle should start running at the current time, would not take the date notion into account. As a result, any custom work time "XY:TW-AB:CD" would break the lifecycle processing when AB < XY. With this fix, the work scheduler now takes the date into account and the various custom lifecycle work schedules now function as expected. Bugzilla:2255938 merge_and_store_attrs() method no longer causes attribute update operations to fail Previously, a bug in the merge_and_store_attrs() method, which deals with reconciling changed and unchanged bucket instance attributes, caused some attribute update operations to fail silently. Due to this, some metadata operations on a subset of buckets would fail. For example, a bucket owner change would fail on a bucket with a rate limit set. With this fix, the merge_and_store_attrs() method is fixed and all affected scenarios now work correctly. Bugzilla:2262919 Checksum and malformed trailers can no longer induce a crash Previously, an exception from AWSv4ComplMulti during java AWS4Test.testMultipartUploadWithPauseAWS4 led to a crash induced by some client input, specifically, by clients that use checksum trailers. With this fix, an exception handler is implemented in do_aws4_auth_completion() . Checksum and malformed trailers can no longer induce a crash. Bugzilla:2266092 Implementation of improved trailing chunk boundary detection Previously, one valid form of 0-length trailing chunk boundary formatting was not handled. Due to this, the Ceph Object Gateway failed to correctly recognize the start of the trailing chunk, leading to the 403 error. With this fix, improved trailing chunk boundary detection is implemented and the unexpected 403 error in the anonymous access case no longer occurs. Bugzilla:2266411 Default values for Kafka message and idle timeouts no longer cause hangs Previously, the default values for Kafka message and idle timeouts caused infrequent hangs while waiting for the Kafka broker. With this fix, the timeouts are adjusted and the hangs no longer occur. Bugzilla:2269381 Delete bucket tagging no longer fails Previously, incorrect logic in RADOS SAL merge_and_store_attrs() caused deleted attributes to not materialize. This also affected DeleteLifecycle . As a result, a pure attribute delete did not take effect in some code paths. With this fix, the logic to store bucket tags uses RADOS SAL put_info() instead of merge_and_store_attrs() . Delete bucket tagging now succeeds as expected.
Bugzilla:2271806 Object mtime now advances on S3 PutACL and ACL changes replicate properly Previously, S3 PutACL operations would not update object mtime . Due to this, the ACL changes, once applied, would not replicate because the timestamp-based object-change check incorrectly returned false. With this fix, the object mtime always advances on S3 PutACL and ACL changes properly replicate. Bugzilla:2271938 All transition cases can now dispatch notifications Previously, the logic to dispatch notifications on transition was mistakenly scoped to the cloud-transition case, due to which notifications on pool transition were not sent. With this fix, notification dispatch is added to the pool transition scope and all transition cases can dispatch notifications. Bugzilla:2279607 RetainUntilDate after the year 2106 no longer truncates and works as expected for new PutObjectRetention requests Previously, PutObjectRetention requests specifying a RetainUntilDate after the year 2106 would truncate, resulting in an earlier date being used for object lock enforcement. This did not affect PutBucketObjectLockConfiguration requests, where the duration is specified in days. With this fix, the RetainUntilDate is saved and works as expected for new PutObjectRetention requests. Previously existing requests are not automatically repaired. To fix existing requests, identify the requests by using the HeadObject request based on the x-amz-object-lock-retain-until-date and save them again with the RetainUntilDate . For more information, see S3 put object retention . Bugzilla:2265890 Bucket lifecycle processing rules are no longer stalled Previously, enumeration of per-shard bucket-lifecycle rules contained a logical error related to concurrent removal of lifecycle rules for a bucket. Due to this, a shard could enter a state which would stall processing of that shard, causing some bucket lifecycle rules to not be processed. With this fix, enumeration can now skip past a removed entry and the lifecycle processing stalls related to this issue are resolved. Bugzilla:2270334 Deleting objects in versioned buckets causes statistics mismatch Due to versioned buckets having a mix of current and non-current objects, deleting objects might cause bucket and user statistics discrepancies on local and remote sites. This does not cause object leaks on either site, just a statistics mismatch. Bugzilla:1871333 4.7. Multi-site Ceph Object Gateway Ceph Object Gateway no longer deadlocks during object deletion Previously, during object deletion in a multi-site deployment, the Ceph Object Gateway S3 DeleteObjects requests would cause the Ceph Object Gateway to deadlock and stop accepting new requests. This was caused by the DeleteObjects requests processing several object deletions at a time. With this fix, the replication logs are serialized and the deadlock is prevented. Bugzilla:2249651 CURL path normalization is now disabled at startup Previously, due to "path normalization" performed by default by CURL (part of the Ceph Object Gateway replication stack), object names were illegally reformatted during replication. Due to this, objects whose names contained embedded . and .. were not replicated. With this fix, the CURL path normalization is disabled at startup and the affected objects replicate as expected. Bugzilla:2265148 The authentication of the forwarded request on the primary site no longer fails Previously, an S3 request issued to the secondary site failed if temporary credentials returned by STS were used to sign the request.
The failure occurred because the request would be forwarded to the primary and signed using a system user's credentials, which did not match the temporary credentials in the session token of the forwarded request. As a result of the unmatched credentials, the authentication of the forwarded request on the primary site failed, which resulted in the failure of the S3 operation. With this fix, the authentication is bypassed by using the temporary credentials in the session token when a request is forwarded from secondary to primary. The system user's credentials are used to complete the authentication successfully. Bugzilla:2271399 4.8. RADOS Ceph reports a POOL_APP_NOT_ENABLED warning if the pool has zero objects stored in it Previously, Ceph status failed to report the pool application warning if the pool was empty, resulting in RGW bucket creation failure if the application tag was enabled for RGW pools. With this fix, Ceph reports a POOL_APP_NOT_ENABLED warning even if the pool has zero objects stored in it. Bugzilla:2029585 Checks are added for uneven OSD weights between two sites in a stretch cluster Previously, there were no checks for equal OSD weights after stretch cluster deployment. Due to this, users could make OSD weights unequal. With this fix, checks are added for uneven OSD weights between two sites in a stretch cluster. The cluster now gives a warning about uneven OSD weight between two sites. Bugzilla:2125107 Autoscaler no longer runs while the norecover flag is set Previously, the autoscaler would run while the norecover flag was set, leading to the creation of new PGs that then required backfilling. Running the autoscaler while the norecover flag is set is not desirable, because the flag is set in cases where I/O is blocked on missing or degraded objects in order to avoid client I/O hanging indefinitely. With this fix, the autoscaler does not run while the norecover flag is set. Bugzilla:2134786 The ceph config dump command output is now consistent Previously, the ceph config dump command without the pretty-print formatted output showed the localized option name and its value. An example of a normalized vs localized option is shown below: However, the pretty-printed (for example, JSON) version of the command only showed the normalized option name as shown in the example above. The ceph config dump command result was inconsistent with and without the pretty-print option. With this fix, the output is consistent and always shows the localized option name when using the ceph config dump --format TYPE command, with TYPE as the pretty-print type. Bugzilla:2213766 MGR module no longer takes up one CPU core every minute and CPU usage is normal Previously, expensive calls from the placement group auto-scaler module to get OSDMap from the Monitor resulted in the MGR module taking up one CPU core every minute. Due to this, the CPU usage was high in the MGR daemon. With this fix, the number of OSD map calls made from the placement group auto-scaler module is reduced. The CPU usage is now normal. Bugzilla:2241030 The correct CRUSH location of the OSDs' parent (host) is determined Previously, when the osd_memory_target_autotune option was enabled, the memory target was applied at the host level. This was done by using a host mask when auto-tuning the memory. However, the code that applied the memory target would not determine the correct CRUSH location of the parent host for the change to be propagated to the OSD(s) of the host.
As a result, none of the OSDs hosted by the machine got notified by the config observer and the osd_memory_target remained unchanged for that set of OSDs. With this fix, the correct CRUSH location of the OSDs' parent (host) is determined based on the host mask. This allows the change to propagate to the OSDs on the host. All the OSDs hosted by the machine are notified whenever the auto-tuner applies a new osd_memory_target and the change is reflected. Bugzilla:2244604 Monitors no longer get stuck in elections during crash/shutdown tests Previously, the disallowed_leaders attribute of the MonitorMap was conditionally filled only when entering stretch_mode . However, there were instances wherein monitors that got revived would not enter stretch_mode right away because they would be in a probing state. This led to a mismatch in the disallowed_leaders set between the monitors across the cluster. Due to this, monitors would fail to elect a leader, and the election would be stuck, resulting in Ceph being unresponsive. With this fix, monitors do not have to be in stretch_mode to fill the disallowed_leaders attribute. Monitors no longer get stuck in elections during crash/shutdown tests. Bugzilla:2248939 'Error getting attr on' message no longer occurs Previously, ceph-objectstore-tool listed pgmeta objects when using --op list , resulting in the "Error getting attr on" message. With this fix, pgmeta objects are skipped and the error message no longer appears. Bugzilla:2251004 LBA alignment in the allocators is no longer used and the OSD daemon does not assert due to allocation failure Previously, OSD daemons would assert and fail to restart, which could sometimes lead to data unavailability or data loss. This would happen as the OSD daemon would not assert if the allocator got to 4000 requests and was configured with a different allocation unit. With this fix, the LBA alignment in the allocators is not used and the OSD daemon does not assert due to allocation failure. Bugzilla:2260306 A sqlite database using the "libcephsqlite" library may no longer be corrupted due to short reads failing to correctly zero memory pages Previously, "libcephsqlite" would not handle short reads correctly, which could cause corruption of sqlite databases. With this fix, "libcephsqlite" zeros pages correctly for short reads to avoid potential corruption. Bugzilla:2240139 4.9. RBD Mirroring The image status description now shows "orphan (force promoting)" when a peer site is down during force promotion Previously, upon a force promotion, when a peer site went down, the image status description showed "local image linked to unknown peer", which is not a clear description. With this fix, the mirror daemon is improved to show the image status description as "orphan (force promoting)". Bugzilla:2190366 rbd_support module no longer fails to recover from repeated block-listing of its client Previously, it was observed that the rbd_support module failed to recover from repeated block-listing of its client due to a recursive deadlock in the rbd_support module, a race condition in the rbd_support module's librbd client, and a bug in the librbd cython bindings that sometimes crashed the ceph-mgr. With this release, all three of these issues are fixed and the rbd_support module no longer fails to recover from repeated block-listing of its client. Bugzilla:2247531 | [
"ceph fs snap-schedule retention add /some/path M 5 --fs cephfs",
"Normalized: mgr/dashboard/ssl_server_port Localized: mgr/dashboard/x/ssl_server_port"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/7.1_release_notes/bug-fixes |
Chapter 4. Configuring the instrumentation | Chapter 4. Configuring the instrumentation The Red Hat build of OpenTelemetry Operator uses an Instrumentation custom resource that defines the configuration of the instrumentation. 4.1. Auto-instrumentation in the Red Hat build of OpenTelemetry Operator Auto-instrumentation in the Red Hat build of OpenTelemetry Operator can automatically instrument an application without manual code changes. Developers and administrators can monitor applications with minimal effort and changes to the existing codebase. Auto-instrumentation runs as follows: The Red Hat build of OpenTelemetry Operator injects an init-container, or a sidecar container for Go, to add the instrumentation libraries for the programming language of the instrumented application. The Red Hat build of OpenTelemetry Operator sets the required environment variables in the application's runtime environment. These variables configure the auto-instrumentation libraries to collect traces, metrics, and logs and send them to the appropriate OpenTelemetry Collector or another telemetry backend. The injected libraries automatically instrument your application by connecting to known frameworks and libraries, such as web servers or database clients, to collect telemetry data. The source code of the instrumented application is not modified. Once the application is running with the injected instrumentation, the application automatically generates telemetry data, which is sent to a designated OpenTelemetry Collector or an external OTLP endpoint for further processing. Auto-instrumentation enables you to start collecting telemetry data quickly without having to manually integrate the OpenTelemetry SDK into your application code. However, some applications might require specific configurations or custom manual instrumentation. 4.2. OpenTelemetry instrumentation configuration options The Red Hat build of OpenTelemetry can inject and configure the OpenTelemetry auto-instrumentation libraries into your workloads. Currently, the project supports injection of the instrumentation libraries from Go, Java, Node.js, Python, .NET, and the Apache HTTP Server ( httpd ). Important The Red Hat build of OpenTelemetry Operator only supports the injection mechanism of the instrumentation libraries but does not support instrumentation libraries or upstream images. Customers can build their own instrumentation images or use community images. 4.2.1. Instrumentation options Instrumentation options are specified in an Instrumentation custom resource (CR). Sample Instrumentation CR apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: "20" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: "0.25" java: env: - name: OTEL_JAVAAGENT_DEBUG value: "true" Table 4.1. Parameters used by the Operator to define the Instrumentation Parameter Description Values env Common environment variables to define across all the instrumentations. exporter Exporter configuration. propagators Propagators defines inter-process context propagation configuration. tracecontext , baggage , b3 , b3multi , jaeger , ottrace , none resource Resource attributes configuration. sampler Sampling configuration. apacheHttpd Configuration for the Apache HTTP Server instrumentation. dotnet Configuration for the .NET instrumentation. 
go Configuration for the Go instrumentation. java Configuration for the Java instrumentation. nodejs Configuration for the Node.js instrumentation. python Configuration for the Python instrumentation. Table 4.2. Default protocol for auto-instrumentation Auto-instrumentation Default protocol Java 1.x otlp/grpc Java 2.x otlp/http Python otlp/http .NET otlp/http Go otlp/http Apache HTTP Server otlp/grpc 4.2.2. Configuration of the OpenTelemetry SDK variables You can use the instrumentation.opentelemetry.io/inject-sdk annotation in the OpenTelemetry Collector custom resource to instruct the Red Hat build of OpenTelemetry Operator to inject some of the following OpenTelemetry SDK environment variables, depending on the Instrumentation CR, into your pod: OTEL_SERVICE_NAME OTEL_TRACES_SAMPLER OTEL_TRACES_SAMPLER_ARG OTEL_PROPAGATORS OTEL_RESOURCE_ATTRIBUTES OTEL_EXPORTER_OTLP_ENDPOINT OTEL_EXPORTER_OTLP_CERTIFICATE OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE OTEL_EXPORTER_OTLP_CLIENT_KEY Table 4.3. Values for the instrumentation.opentelemetry.io/inject-sdk annotation Value Description "true" Injects the Instrumentation resource with the default name from the current namespace. "false" Injects no Instrumentation resource. "<instrumentation_name>" Specifies the name of the Instrumentation resource to inject from the current namespace. "<namespace>/<instrumentation_name>" Specifies the name of the Instrumentation resource to inject from another namespace. 4.2.3. Exporter configuration Although the Instrumentation custom resource supports setting up one or more exporters per signal, auto-instrumentation configures only the OTLP Exporter. So you must configure the endpoint to point to the OTLP Receiver on the Collector. Sample exporter TLS CA configuration using a config map apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation # ... spec # ... exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: configMapName: ca-bundle 2 ca_file: service-ca.crt 3 # ... 1 Specifies the OTLP endpoint using the HTTPS scheme and TLS. 2 Specifies the name of the config map. The config map must already exist in the namespace of the pod injecting the auto-instrumentation. 3 Points to the CA certificate in the config map or the absolute path to the certificate if the certificate is already present in the workload file system. Sample exporter mTLS configuration using a Secret apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation # ... spec # ... exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: secretName: serving-certs 2 ca_file: service-ca.crt 3 cert_file: tls.crt 4 key_file: tls.key 5 # ... 1 Specifies the OTLP endpoint using the HTTPS scheme and TLS. 2 Specifies the name of the Secret for the ca_file , cert_file , and key_file values. The Secret must already exist in the namespace of the pod injecting the auto-instrumentation. 3 Points to the CA certificate in the Secret or the absolute path to the certificate if the certificate is already present in the workload file system. 4 Points to the client certificate in the Secret or the absolute path to the certificate if the certificate is already present in the workload file system. 5 Points to the client key in the Secret or the absolute path to a key if the key is already present in the workload file system. Note You can provide the CA certificate in a config map or Secret. If you provide it in both, the config map takes higher precedence than the Secret. 
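Illustrative example of annotating a workload for SDK injection The following is a minimal sketch, not an example taken from the product documentation: the Deployment name, namespace, labels, and container image are placeholder assumptions. It shows where the instrumentation.opentelemetry.io/inject-sdk annotation from the previous section is placed, on the pod template (PodSpec) of the workload, so that the Operator injects the OpenTelemetry SDK environment variables, including the exporter endpoint, from the default Instrumentation resource in the same namespace:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app 1
  namespace: tutorial-application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        instrumentation.opentelemetry.io/inject-sdk: "true" 2
    spec:
      containers:
      - name: my-app
        image: quay.io/example/my-app:latest 3
1 Placeholder name for the example workload.
2 Instructs the Operator to inject the OpenTelemetry SDK environment variables from the default Instrumentation resource in the current namespace.
3 Placeholder container image reference.
When the pod is created, the Operator sets variables such as OTEL_SERVICE_NAME and OTEL_EXPORTER_OTLP_ENDPOINT in the container environment; no change to the application image is required.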
Example configuration for CA bundle injection by using a config map and Instrumentation CR apiVersion: v1 kind: ConfigMap metadata: name: otelcol-cabundle namespace: tutorial-application annotations: service.beta.openshift.io/inject-cabundle: "true" # ... --- apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: my-instrumentation spec: exporter: endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317 tls: configMapName: otelcol-cabundle ca: service-ca.crt # ... 4.2.4. Configuration of the Apache HTTP Server auto-instrumentation Important The Apache HTTP Server auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Table 4.4. Parameters for the .spec.apacheHttpd field Name Description Default attrs Attributes specific to the Apache HTTP Server. configPath Location of the Apache HTTP Server configuration. /usr/local/apache2/conf env Environment variables specific to the Apache HTTP Server. image Container image with the Apache SDK and auto-instrumentation. resourceRequirements The compute resource requirements. version Apache HTTP Server version. 2.4 The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-apache-httpd: "true" 4.2.5. Configuration of the .NET auto-instrumentation Important The .NET auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to .NET. image Container image with the .NET SDK and auto-instrumentation. resourceRequirements The compute resource requirements. For the .NET auto-instrumentation, the required OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to 4317 . The .NET autoinstrumentation uses http/proto by default, and the telemetry data must be set to the 4318 port. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-dotnet: "true" 4.2.6. Configuration of the Go auto-instrumentation Important The Go auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Go. image Container image with the Go SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-go: "true" Additional permissions required for the Go auto-instrumentation in the OpenShift cluster apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - "SYS_PTRACE" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny Tip The CLI command for applying the permissions for the Go auto-instrumentation in the OpenShift cluster is as follows: USD oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account> 4.2.7. Configuration of the Java auto-instrumentation Important The Java auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Java. image Container image with the Java SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-java: "true" 4.2.8. Configuration of the Node.js auto-instrumentation Important The Node.js auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Node.js. image Container image with the Node.js SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotations to enable injection instrumentation.opentelemetry.io/inject-nodejs: "true" instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/path/to/container/executable" The instrumentation.opentelemetry.io/otel-go-auto-target-exe annotation sets the value for the required OTEL_GO_AUTO_TARGET_EXE environment variable. 4.2.9. Configuration of the Python auto-instrumentation Important The Python auto-instrumentation is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Python. image Container image with the Python SDK and auto-instrumentation. resourceRequirements The compute resource requirements. For Python auto-instrumentation, the OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to 4317 . Python auto-instrumentation uses http/proto by default, and the telemetry data must be set to the 4318 port. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-python: "true" 4.2.10. Multi-container pods The instrumentation is run on the first container that is available by default according to the pod specification. In some cases, you can also specify target containers for injection. Pod annotation instrumentation.opentelemetry.io/container-names: "<container_1>,<container_2>" Note The Go auto-instrumentation does not support multi-container auto-instrumentation injection. 4.2.11. Multi-container pods with multiple instrumentations Injecting instrumentation for an application language to one or more containers in a multi-container pod requires the following annotation: instrumentation.opentelemetry.io/<application_language>-container-names: "<container_1>,<container_2>" 1 1 You can inject instrumentation for only one language per container. For the list of supported <application_language> values, see the following table. Table 4.5. Supported values for the <application_language> Language Value for <application_language> ApacheHTTPD apache DotNet dotnet Java java NGINX inject-nginx NodeJS nodejs Python python SDK sdk 4.2.12. Using the instrumentation CR with Service Mesh When using the instrumentation custom resource (CR) with Red Hat OpenShift Service Mesh, you must use the b3multi propagator. | [
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: \"20\" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: \"0.25\" java: env: - name: OTEL_JAVAAGENT_DEBUG value: \"true\"",
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: configMapName: ca-bundle 2 ca_file: service-ca.crt 3",
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: secretName: serving-certs 2 ca_file: service-ca.crt 3 cert_file: tls.crt 4 key_file: tls.key 5",
"apiVersion: v1 kind: ConfigMap metadata: name: otelcol-cabundle namespace: tutorial-application annotations: service.beta.openshift.io/inject-cabundle: \"true\" --- apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: my-instrumentation spec: exporter: endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317 tls: configMapName: otelcol-cabundle ca: service-ca.crt",
"instrumentation.opentelemetry.io/inject-apache-httpd: \"true\"",
"instrumentation.opentelemetry.io/inject-dotnet: \"true\"",
"instrumentation.opentelemetry.io/inject-go: \"true\"",
"apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - \"SYS_PTRACE\" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny",
"oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account>",
"instrumentation.opentelemetry.io/inject-java: \"true\"",
"instrumentation.opentelemetry.io/inject-nodejs: \"true\" instrumentation.opentelemetry.io/otel-go-auto-target-exe: \"/path/to/container/executable\"",
"instrumentation.opentelemetry.io/inject-python: \"true\"",
"instrumentation.opentelemetry.io/container-names: \"<container_1>,<container_2>\"",
"instrumentation.opentelemetry.io/<application_language>-container-names: \"<container_1>,<container_2>\" 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/red_hat_build_of_opentelemetry/otel-configuration-of-instrumentation |
Chapter 1. Overview of AMQ Streams | Chapter 1. Overview of AMQ Streams Red Hat AMQ Streams is a massively-scalable, distributed, and high-performance data streaming platform based on the Apache ZooKeeper and Apache Kafka projects. The main components comprise: Kafka Broker Messaging broker responsible for delivering records from producing clients to consuming clients. Apache ZooKeeper is a core dependency for Kafka, providing a cluster coordination service for highly reliable distributed coordination. Kafka Streams API API for writing stream processor applications. Producer and Consumer APIs Java-based APIs for producing and consuming messages to and from Kafka brokers. Kafka Bridge AMQ Streams Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. Kafka Connect A toolkit for streaming data between Kafka brokers and other systems using Connector plugins. Kafka MirrorMaker Replicates data between two Kafka clusters, within or across data centers. Kafka Exporter An exporter used in the extraction of Kafka metrics data for monitoring. Kafka Cruise Control Rebalances a Kafka cluster based on a set of optimization goals and capacity limits. A cluster of Kafka brokers is the hub connecting all these components. The broker uses Apache ZooKeeper for storing configuration data and for cluster coordination. Before running Apache Kafka, an Apache ZooKeeper cluster has to be ready. Figure 1.1. AMQ Streams architecture 1.1. Kafka capabilities The underlying data stream-processing capabilities and component architecture of Kafka can deliver: Microservices and other applications to share data with extremely high throughput and low latency Message ordering guarantees Message rewind/replay from data storage to reconstruct an application state Message compaction to remove old records when using a key-value log Horizontal scalability in a cluster configuration Replication of data to control fault tolerance Retention of high volumes of data for immediate access 1.2. Kafka use cases Kafka's capabilities make it suitable for: Event-driven architectures Event sourcing to capture changes to the state of an application as a log of events Message brokering Website activity tracking Operational monitoring through metrics Log collection and aggregation Commit logs for distributed systems Stream processing so that applications can respond to data in real time 1.3. Supported Configurations In order to be running in a supported configuration, AMQ Streams must be running in one of the following JVM versions and on one of the supported operating systems. Table 1.1. List of supported Java Virtual Machines Java Virtual Machine Version OpenJDK 1.8, 11 OracleJDK 1.8, 11 IBM JDK 1.8 Table 1.2. List of supported Operating Systems Operating System Architecture Version Red Hat Enterprise Linux x86_64 7.x, 8.x 1.4. Document conventions Replaceables In this document, replaceable text is styled in monospace , with italics, uppercase, and hyphens. For example, in the following code, you will want to replace BOOTSTRAP-ADDRESS and TOPIC-NAME with your own address and topic name: bin/kafka-console-consumer.sh --bootstrap-server BOOTSTRAP-ADDRESS --topic TOPIC-NAME --from-beginning | [
"bin/kafka-console-consumer.sh --bootstrap-server BOOTSTRAP-ADDRESS --topic TOPIC-NAME --from-beginning"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_rhel/overview-str |
Appendix G. Examples using the Secure Token Service APIs | Appendix G. Examples using the Secure Token Service APIs These examples are using Python's boto3 module to interface with the Ceph Object Gateway's implementation of the Secure Token Service (STS). In these examples, TESTER2 assumes a role created by TESTER1 , as to access S3 resources owned by TESTER1 based on the permission policy attached to the role. The AssumeRole example creates a role, assigns a policy to the role, then assumes a role to get temporary credentials and access to S3 resources using those temporary credentials. The AssumeRoleWithWebIdentity example authenticates users using an external application with Keycloak, an OpenID Connect identity provider, assumes a role to get temporary credentials and access S3 resources according to the permission policy of the role. AssumeRole Example import boto3 iam_client = boto3.client('iam', aws_access_key_id= ACCESS_KEY_OF_TESTER1 , aws_secret_access_key= SECRET_KEY_OF_TESTER1 , endpoint_url=<IAM URL>, region_name='' ) policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER1\"]},\"Action\":[\"sts:AssumeRole\"]}]}" role_response = iam_client.create_role( AssumeRolePolicyDocument=policy_document, Path='/', RoleName='S3Access', ) role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\"}}" response = iam_client.put_role_policy( RoleName='S3Access', PolicyName='Policy1', PolicyDocument=role_policy ) sts_client = boto3.client('sts', aws_access_key_id= ACCESS_KEY_OF_TESTER2 , aws_secret_access_key= SECRET_KEY_OF_TESTER2 , endpoint_url=<STS URL>, region_name='', ) response = sts_client.assume_role( RoleArn=role_response['Role']['Arn'], RoleSessionName='Bob', DurationSeconds=3600 ) s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=<S3 URL>, region_name='',) bucket_name = 'my-bucket' s3bucket = s3client.create_bucket(Bucket=bucket_name) resp = s3client.list_buckets() AssumeRoleWithWebIdentity Example import boto3 iam_client = boto3.client('iam', aws_access_key_id= ACCESS_KEY_OF_TESTER1 , aws_secret_access_key= SECRET_KEY_OF_TESTER1 , endpoint_url=<IAM URL>, region_name='' ) oidc_response = iam_client.create_open_id_connect_provider( Url=<URL of the OpenID Connect Provider>, ClientIDList=[ <Client id registered with the IDP> ], ThumbprintList=[ <IDP THUMBPRINT> ] ) policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Principal\":\{\"Federated\":\[\"arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/demo\"\]\},\"Action\":\[\"sts:AssumeRoleWithWebIdentity\"\],\"Condition\":\{\"StringEquals\":\{\"localhost:8080/auth/realms/demo:app_id\":\"customer-portal\"\}\}\}\]\}" role_response = iam_client.create_role( AssumeRolePolicyDocument=policy_document, Path='/', RoleName='S3Access', ) role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\"}}" response = iam_client.put_role_policy( RoleName='S3Access', PolicyName='Policy1', PolicyDocument=role_policy ) sts_client = boto3.client('sts', aws_access_key_id= ACCESS_KEY_OF_TESTER2 , aws_secret_access_key= SECRET_KEY_OF_TESTER2 , endpoint_url=<STS URL>, region_name='', ) response = 
client.assume_role_with_web_identity( RoleArn=role_response['Role']['Arn'], RoleSessionName='Bob', DurationSeconds=3600, WebIdentityToken=<Web Token> ) s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=<S3 URL>, region_name='',) bucket_name = 'my-bucket' s3bucket = s3client.create_bucket(Bucket=bucket_name) resp = s3client.list_buckets() Additional Resources See the Test S3 Access section of the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide for more details on using Python's boto module. | [
"import boto3 iam_client = boto3.client('iam', aws_access_key_id= ACCESS_KEY_OF_TESTER1 , aws_secret_access_key= SECRET_KEY_OF_TESTER1 , endpoint_url=<IAM URL>, region_name='' ) policy_document = \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER1\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" role_response = iam_client.create_role( AssumeRolePolicyDocument=policy_document, Path='/', RoleName='S3Access', ) role_policy = \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\\"s3:*\\\",\\\"Resource\\\":\\\"arn:aws:s3:::*\\\"}}\" response = iam_client.put_role_policy( RoleName='S3Access', PolicyName='Policy1', PolicyDocument=role_policy ) sts_client = boto3.client('sts', aws_access_key_id= ACCESS_KEY_OF_TESTER2 , aws_secret_access_key= SECRET_KEY_OF_TESTER2 , endpoint_url=<STS URL>, region_name='', ) response = sts_client.assume_role( RoleArn=role_response['Role']['Arn'], RoleSessionName='Bob', DurationSeconds=3600 ) s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=<S3 URL>, region_name='',) bucket_name = 'my-bucket' s3bucket = s3client.create_bucket(Bucket=bucket_name) resp = s3client.list_buckets()",
"import boto3 iam_client = boto3.client('iam', aws_access_key_id= ACCESS_KEY_OF_TESTER1 , aws_secret_access_key= SECRET_KEY_OF_TESTER1 , endpoint_url=<IAM URL>, region_name='' ) oidc_response = iam_client.create_open_id_connect_provider( Url=<URL of the OpenID Connect Provider>, ClientIDList=[ <Client id registered with the IDP> ], ThumbprintList=[ <IDP THUMBPRINT> ] ) policy_document = \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"Federated\\\":\\[\\\"arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/demo\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRoleWithWebIdentity\\\"\\],\\\"Condition\\\":\\{\\\"StringEquals\\\":\\{\\\"localhost:8080/auth/realms/demo:app_id\\\":\\\"customer-portal\\\"\\}\\}\\}\\]\\}\" role_response = iam_client.create_role( AssumeRolePolicyDocument=policy_document, Path='/', RoleName='S3Access', ) role_policy = \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\\"s3:*\\\",\\\"Resource\\\":\\\"arn:aws:s3:::*\\\"}}\" response = iam_client.put_role_policy( RoleName='S3Access', PolicyName='Policy1', PolicyDocument=role_policy ) sts_client = boto3.client('sts', aws_access_key_id= ACCESS_KEY_OF_TESTER2 , aws_secret_access_key= SECRET_KEY_OF_TESTER2 , endpoint_url=<STS URL>, region_name='', ) response = client.assume_role_with_web_identity( RoleArn=role_response['Role']['Arn'], RoleSessionName='Bob', DurationSeconds=3600, WebIdentityToken=<Web Token> ) s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=<S3 URL>, region_name='',) bucket_name = 'my-bucket' s3bucket = s3client.create_bucket(Bucket=bucket_name) resp = s3client.list_buckets()"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/developer_guide/examples-using-the-secure-token-service-apis_dev |
Chapter 2. Creating a model registry | Chapter 2. Creating a model registry You can create a model registry to store, share, version, deploy, and track your models. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. A cluster administrator has configured and enabled the model registry component in your OpenShift AI deployment. For more information, see Configuring the model registry component . The model registry component is enabled for your OpenShift AI deployment. You have access to an external MySQL database which uses at least MySQL version 5.x. However, Red Hat recommends that you use MySQL version 8.x. Note The mysql_native_password authentication plugin is required for the ML Metadata component to successfully connect to your database. mysql_native_password is disabled by default in MySQL 8.4 and later. If your database uses MySQL 8.4 or later, you must update your MySQL deployment to enable the mysql_native_password plugin. For more information about enabling the mysql_native_password plugin, see Native Pluggable Authentication in the MySQL documentation. Procedure From the OpenShift AI dashboard, click Settings Model registry settings . Click Create model registry . The Create model registry dialog opens. In the Name field, enter a name for the model registry. Optional: Click Edit resource name , and then enter a specific resource name for the model registry in the Resource name field. By default, the resource name will match the name of the model registry. Important Resource names are what your resources are labeled as in OpenShift. Your resource name cannot exceed 253 characters, must consist of lowercase alphanumeric characters or - , and must start and end with an alphanumeric character. Resource names are not editable after creation. The resource name must not match the name of any other model registry resource in your OpenShift cluster. Optional: In the Description field, enter a description for the model registry. In the Connect to external MySQL database section, enter the information for the external database where your model data is stored. In the Host field, enter the database's host name. If the database is running in the rhoai-model-registries namespace, enter only the hostname for the database. If the database is running in a different namespace from rhoai-model-registries , enter the database hostname details in <host name>.<namespace>.svc.cluster.local format. In the Port field, enter the port number for the database. In the Username field, enter the default user name that is connected to the database. In the Password field, enter the password for the default user account. In the Database field, enter the database name. Optional: Select the Add CA certificate to secure database connection to use a certificate with your database connection. Click Use cluster-wide CA bundle to use the ca-bundle.crt bundle in the odh-trusted-ca-bundle ConfigMap. Click Use Red Hat OpenShift AI CA bundle to use the odh-ca-bundle.crt bundle in the odh-trusted-ca-bundle ConfigMap. Click Choose from existing certificates to select an existing certificate. You can select the key of any ConfigMap or secret in the rhoai-model-registries namespace. From the Resource list, select a ConfigMap or secret. From the Key list, select a key. Click Upload new certificate to upload a new certificate as a ConfigMap. 
Drag and drop the PEM file for your certificate into the Certificate field, or click Upload to select a file from your local machine's file system. Note Uploading a certificate creates the db-credential ConfigMap with the ca.crt key. To upload a certificate as a secret, you must create a secret in the OpenShift rhoai-model-registries namespace, and then select it as an existing certificate when you create your model registry. For more information about creating secrets in OpenShift, see OpenShift Dedicated: Providing sensitive data to pods by using secrets and Red Hat OpenShift Service on AWS: Providing sensitive data to pods by using secrets . Click Create . Note To find the resource name or type of a model registry, click the help icon beside the registry name. Resource names and types are used to find your resources in OpenShift. Verification The new model registry appears on the Model Registry Settings page. You can edit the model registry by clicking the action menu ( ... ) beside it, and then clicking Edit model registry . You can register a model with the model registry from the Model Registry tab. For more information about working with model registries, see Working with model registries . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/managing_model_registries/creating-a-model-registry_managing-model-registries |
Securing Applications and Services Guide | Securing Applications and Services Guide Red Hat build of Keycloak 26.0 Red Hat Customer Content Services | [
"/realms/{realm-name}/.well-known/openid-configuration",
"/realms/{realm-name}/protocol/openid-connect/auth",
"/realms/{realm-name}/protocol/openid-connect/token",
"/realms/{realm-name}/protocol/openid-connect/userinfo",
"/realms/{realm-name}/protocol/openid-connect/logout",
"/realms/{realm-name}/protocol/openid-connect/certs",
"/realms/{realm-name}/protocol/openid-connect/token/introspect",
"/realms/{realm-name}/clients-registrations/openid-connect",
"/realms/{realm-name}/protocol/openid-connect/revoke",
"/realms/{realm-name}/protocol/openid-connect/auth/device",
"/realms/{realm-name}/protocol/openid-connect/ext/ciba/auth",
"curl -d \"client_id=myclient\" -d \"client_secret=40cc097b-2a57-4c17-b36a-8fdf3fc2d578\" -d \"username=user\" -d \"password=password\" -d \"grant_type=password\" \"http://localhost:8080/realms/master/protocol/openid-connect/token\"",
"npm install keycloak-js",
"import Keycloak from 'keycloak-js'; const keycloak = new Keycloak({ url: \"http://keycloak-server\", realm: \"my-realm\", clientId: \"my-app\" }); try { const authenticated = await keycloak.init(); if (authenticated) { console.log('User is authenticated'); } else { console.log('User is not authenticated'); } } catch (error) { console.error('Failed to initialize adapter:', error); }",
"await keycloak.init({ onLoad: 'check-sso', silentCheckSsoRedirectUri: `USD{location.origin}/silent-check-sso.html` });",
"<!doctype html> <html> <body> <script> parent.postMessage(location.href, location.origin); </script> </body> </html>",
"await keycloak.init({ onLoad: 'login-required' });",
"async function fetchUsers() { const response = await fetch('/api/users', { headers: { accept: 'application/json', authorization: `Bearer USD{keycloak.token}` } }); return response.json(); }",
"try { await keycloak.updateToken(30); } catch (error) { console.error('Failed to refresh token:', error); } const users = await fetchUsers();",
"await keycloak.init({ flow: 'implicit' })",
"await keycloak.init({ flow: 'hybrid' });",
"await keycloak.init({ adapter: 'cordova-native' });",
"<preference name=\"AndroidLaunchMode\" value=\"singleTask\" />",
"import Keycloak from 'keycloak-js'; import KeycloakCapacitorAdapter from 'keycloak-capacitor-adapter'; const keycloak = new Keycloak({ url: \"http://keycloak-server\", realm: \"my-realm\", clientId: \"my-app\" }); await keycloak.init({ adapter: KeycloakCapacitorAdapter, });",
"import Keycloak, { KeycloakAdapter } from 'keycloak-js'; // Implement the 'KeycloakAdapter' interface so that all required methods are guaranteed to be present. const MyCustomAdapter: KeycloakAdapter = { async login(options) { // Write your own implementation here. } // The other methods go here }; const keycloak = new Keycloak({ url: \"http://keycloak-server\", realm: \"my-realm\", clientId: \"my-app\" }); await keycloak.init({ adapter: MyCustomAdapter, });",
"// Recommended way to initialize the adapter. new Keycloak({ url: \"http://keycloak-server\", realm: \"my-realm\", clientId: \"my-app\" }); // Alternatively a string to the path of the `keycloak.json` file. // Has some performance implications, as it will load the keycloak.json file from the server. // This version might also change in the future and is therefore not recommended. new Keycloak(\"http://keycloak-server/keycloak.json\");",
"try { const profile = await keycloak.loadUserProfile(); console.log('Retrieved user profile:', profile); } catch (error) { console.error('Failed to load user profile:', error); }",
"try { const refreshed = await keycloak.updateToken(5); console.log(refreshed ? 'Token was refreshed' : 'Token is still valid'); } catch (error) { console.error('Failed to refresh the token:', error); }",
"keycloak.onAuthSuccess = () => console.log('Authenticated!');",
"mkdir myapp && cd myapp",
"\"dependencies\": { \"keycloak-connect\": \"file:keycloak-connect-26.0.10.tgz\" }",
"const session = require('express-session'); const Keycloak = require('keycloak-connect'); const memoryStore = new session.MemoryStore(); const keycloak = new Keycloak({ store: memoryStore });",
"npm install express-session",
"\"scripts\": { \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\", \"start\": \"node server.js\" },",
"npm run start",
"const kcConfig = { clientId: 'myclient', bearerOnly: true, serverUrl: 'http://localhost:8080', realm: 'myrealm', realmPublicKey: 'MIIBIjANB...' }; const keycloak = new Keycloak({ store: memoryStore }, kcConfig);",
"const keycloak = new Keycloak({ store: memoryStore, idpHint: myIdP }, kcConfig);",
"const session = require('express-session'); const memoryStore = new session.MemoryStore(); // Configure session app.use( session({ secret: 'mySecret', resave: false, saveUninitialized: true, store: memoryStore, }) ); const keycloak = new Keycloak({ store: memoryStore });",
"const keycloak = new Keycloak({ scope: 'offline_access' });",
"npm install express",
"const express = require('express'); const app = express();",
"app.use( keycloak.middleware() );",
"app.listen(3000, function () { console.log('App listening on port 3000'); });",
"const app = express(); app.set( 'trust proxy', true ); app.use( keycloak.middleware() );",
"app.get( '/complain', keycloak.protect(), complaintHandler );",
"app.get( '/special', keycloak.protect('special'), specialHandler );",
"app.get( '/extra-special', keycloak.protect('other-app:special'), extraSpecialHandler );",
"app.get( '/admin', keycloak.protect( 'realm:admin' ), adminHandler );",
"app.get('/apis/me', keycloak.enforcer('user:profile'), userProfileHandler);",
"app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), userProfileHandler);",
"app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), function (req, res) { const token = req.kauth.grant.access_token.content; const permissions = token.authorization ? token.authorization.permissions : undefined; // show user profile });",
"app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'permissions'}), function (req, res) { const permissions = req.permissions; // show user profile });",
"keycloak.enforcer('user:profile', {resource_server_id: 'my-apiserver'})",
"app.get('/protected/resource', keycloak.enforcer(['resource:view', 'resource:write'], { claims: function(request) { return { \"http.uri\": [\"/protected/resource\"], \"user.agent\": // get user agent from request } } }), function (req, res) { // access granted",
"function protectBySection(token, request) { return token.hasRole( request.params.section ); } app.get( '/:section/:page', keycloak.protect( protectBySection ), sectionHandler );",
"Keycloak.prototype.redirectToLogin = function(req) { const apiReqMatcher = /\\/api\\//i; return !apiReqMatcher.test(req.originalUrl || req.url); };",
"app.use( keycloak.middleware( { logout: '/logoff' } ));",
"https://example.com/logoff?redirect_url=https%3A%2F%2Fexample.com%3A3000%2Flogged%2Fout",
"app.use( keycloak.middleware( { admin: '/callbacks' } );",
"LoadModule auth_openidc_module modules/mod_auth_openidc.so ServerName USD{HOSTIP} <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/html #this is required by mod_auth_openidc OIDCCryptoPassphrase a-random-secret-used-by-apache-oidc-and-balancer OIDCProviderMetadataURL USD{KC_ADDR}/realms/USD{KC_REALM}/.well-known/openid-configuration OIDCClientID USD{CLIENT_ID} OIDCClientSecret USD{CLIENT_SECRET} OIDCRedirectURI http://USD{HOSTIP}/USD{CLIENT_APP_NAME}/redirect_uri # maps the preferred_username claim to the REMOTE_USER environment variable OIDCRemoteUserClaim preferred_username <Location /USD{CLIENT_APP_NAME}/> AuthType openid-connect Require valid-user </Location> </VirtualHost>",
"<plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>5.0.0.Final</version> <configuration> <feature-packs> <feature-pack> <location>wildfly@maven(org.jboss.universe:community-universe)#32.0.1.Final</location> </feature-pack> <feature-pack> <groupId>org.keycloak</groupId> <artifactId>keycloak-saml-adapter-galleon-pack</artifactId> <version>26.0.10</version> </feature-pack> </feature-packs> <layers> <layer>core-server</layer> <layer>web-server</layer> <layer>jaxrs-server</layer> <layer>datasources-web-server</layer> <layer>webservices</layer> <layer>keycloak-saml</layer> <layer>keycloak-client-saml</layer> <layer>keycloak-client-saml-ejb</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin>",
"<plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>11.0.2.Final</version> <configuration> <feature-packs> <feature-pack> <location>wildfly@maven(org.jboss.universe:community-universe)#32.0.1.Final</location> </feature-pack> <feature-pack> <groupId>org.keycloak</groupId> <artifactId>keycloak-saml-adapter-galleon-pack</artifactId> <version>26.0.10</version> </feature-pack> </feature-packs> <layers> <layer>core-server</layer> <layer>web-server</layer> <layer>jaxrs-server</layer> <layer>datasources-web-server</layer> <layer>webservices</layer> <layer>keycloak-saml</layer> <layer>keycloak-client-saml</layer> <layer>keycloak-client-saml-ejb</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin>",
"<plugin> <groupId>org.jboss.eap.plugins</groupId> <artifactId>eap-maven-plugin</artifactId> <version>1.0.0.Final-redhat-00014</version> <configuration> <channels> <channel> <manifest> <groupId>org.jboss.eap.channels</groupId> <artifactId>eap-8.0</artifactId> </manifest> </channel> </channels> <feature-packs> <feature-pack> <location>org.keycloak:keycloak-saml-adapter-galleon-pack</location> </feature-pack> </feature-packs> <layers> <layer>core-server</layer> <layer>web-server</layer> <layer>jaxrs-server</layer> <layer>datasources-web-server</layer> <layer>webservices</layer> <layer>keycloak-saml</layer> <layer>keycloak-client-saml</layer> <layer>keycloak-client-saml-ejb</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin>",
"<keycloak-saml-adapter xmlns=\"urn:keycloak:saml:adapter\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:keycloak:saml:adapter https://www.keycloak.org/schema/keycloak_saml_adapter_1_10.xsd\"> <SP entityID=\"http://localhost:8081/sales-post-sig/\" sslPolicy=\"EXTERNAL\" nameIDPolicyFormat=\"urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified\" logoutPage=\"/logout.jsp\" forceAuthentication=\"false\" isPassive=\"false\" turnOffChangeSessionIdOnLogin=\"false\" autodetectBearerOnly=\"false\"> <Keys> <Key signing=\"true\" > <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <PrivateKey alias=\"http://localhost:8080/sales-post-sig/\" password=\"test123\"/> <Certificate alias=\"http://localhost:8080/sales-post-sig/\"/> </KeyStore> </Key> </Keys> <PrincipalNameMapping policy=\"FROM_NAME_ID\"/> <RoleIdentifiers> <Attribute name=\"Role\"/> </RoleIdentifiers> <RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.resource.location\" value=\"/WEB-INF/role-mappings.properties\"/> </RoleMappingsProvider> <IDP entityID=\"idp\" signaturesRequired=\"true\"> <SingleSignOnService requestBinding=\"POST\" bindingUrl=\"http://localhost:8081/realms/demo/protocol/saml\" /> <SingleLogoutService requestBinding=\"POST\" responseBinding=\"POST\" postBindingUrl=\"http://localhost:8081/realms/demo/protocol/saml\" redirectBindingUrl=\"http://localhost:8081/realms/demo/protocol/saml\" /> <Keys> <Key signing=\"true\"> <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <Certificate alias=\"demo\"/> </KeyStore> </Key> </Keys> </IDP> </SP> </keycloak-saml-adapter>",
"<web-app xmlns=\"https://jakarta.ee/xml/ns/jakartaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"https://jakarta.ee/xml/ns/jakartaee https://jakarta.ee/xml/ns/jakartaee/web-app_6_0.xsd\" version=\"6.0\"> <module-name>customer-portal</module-name> <security-constraint> <web-resource-collection> <web-resource-name>Admins</web-resource-name> <url-pattern>/admin/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>admin</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> <security-constraint> <web-resource-collection> <web-resource-name>Customers</web-resource-name> <url-pattern>/customers/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>user</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> <login-config> <auth-method>KEYCLOAK-SAML</auth-method> <realm-name>this is ignored currently</realm-name> </login-config> <security-role> <role-name>admin</role-name> </security-role> <security-role> <role-name>user</role-name> </security-role> </web-app>",
"<extensions> <extension module=\"org.keycloak.keycloak-saml-adapter-subsystem\"/> </extensions> <profile> <subsystem xmlns=\"urn:jboss:domain:keycloak-saml:1.1\"> <secure-deployment name=\"WAR MODULE NAME.war\"> <SP entityID=\"APPLICATION URL\"> </SP> </secure-deployment> </subsystem> </profile>",
"<subsystem xmlns=\"urn:jboss:domain:keycloak-saml:1.1\"> <secure-deployment name=\"saml-post-encryption.war\"> <SP entityID=\"http://localhost:8080/sales-post-enc/\" sslPolicy=\"EXTERNAL\" nameIDPolicyFormat=\"urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified\" logoutPage=\"/logout.jsp\" forceAuthentication=\"false\"> <Keys> <Key signing=\"true\" encryption=\"true\"> <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <PrivateKey alias=\"http://localhost:8080/sales-post-enc/\" password=\"test123\"/> <Certificate alias=\"http://localhost:8080/sales-post-enc/\"/> </KeyStore> </Key> </Keys> <PrincipalNameMapping policy=\"FROM_NAME_ID\"/> <RoleIdentifiers> <Attribute name=\"Role\"/> </RoleIdentifiers> <IDP entityID=\"idp\"> <SingleSignOnService signRequest=\"true\" validateResponseSignature=\"true\" requestBinding=\"POST\" bindingUrl=\"http://localhost:8080/realms/saml-demo/protocol/saml\"/> <SingleLogoutService validateRequestSignature=\"true\" validateResponseSignature=\"true\" signRequest=\"true\" signResponse=\"true\" requestBinding=\"POST\" responseBinding=\"POST\" postBindingUrl=\"http://localhost:8080/realms/saml-demo/protocol/saml\" redirectBindingUrl=\"http://localhost:8080/realms/saml-demo/protocol/saml\"/> <Keys> <Key signing=\"true\" > <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <Certificate alias=\"saml-demo\"/> </KeyStore> </Key> </Keys> </IDP> </SP> </secure-deployment> </subsystem>",
"samesite-cookie(mode=None, cookie-pattern=JSESSIONID)",
"<context-param> <param-name>keycloak.sessionIdMapperUpdater.classes</param-name> <param-value>org.keycloak.adapters.saml.wildfly.infinispan.InfinispanSessionCacheIdMapperUpdater</param-value> </context-param>",
"package org.keycloak.adapters.saml; public class SamlPrincipal implements Serializable, Principal { /** * Get full saml assertion * * @return */ public AssertionType getAssertion() { } /** * Get SAML subject sent in assertion * * @return */ public String getSamlSubject() { } /** * Subject nameID format * * @return */ public String getNameIDFormat() { } @Override public String getName() { } /** * Convenience function that gets Attribute value by attribute name * * @param name * @return */ public List<String> getAttributes(String name) { } /** * Convenience function that gets Attribute value by attribute friendly name * * @param friendlyName * @return */ public List<String> getFriendlyAttributes(String friendlyName) { } /** * Convenience function that gets first value of an attribute by attribute name * * @param name * @return */ public String getAttribute(String name) { } /** * Convenience function that gets first value of an attribute by attribute name * * * @param friendlyName * @return */ public String getFriendlyAttribute(String friendlyName) { } /** * Get set of all assertion attribute names * * @return */ public Set<String> getAttributeNames() { } /** * Get set of all assertion friendly attribute names * * @return */ public Set<String> getFriendlyNames() { } }",
"<error-page> <error-code>403</error-code> <location>/ErrorHandler</location> </error-page>",
"public class SamlAuthenticationError implements AuthenticationError { public static enum Reason { EXTRACTION_FAILURE, INVALID_SIGNATURE, ERROR_STATUS } public Reason getReason() { return reason; } public StatusResponseType getStatus() { return status; } }",
"package example; import java.io.InputStream; import org.keycloak.adapters.saml.SamlConfigResolver; import org.keycloak.adapters.saml.SamlDeployment; import org.keycloak.adapters.saml.config.parsers.DeploymentBuilder; import org.keycloak.adapters.saml.config.parsers.ResourceLoader; import org.keycloak.adapters.spi.HttpFacade; import org.keycloak.saml.common.exceptions.ParsingException; public class SamlMultiTenantResolver implements SamlConfigResolver { @Override public SamlDeployment resolve(HttpFacade.Request request) { String host = request.getHeader(\"Host\"); String realm = null; if (host.contains(\"tenant1\")) { realm = \"tenant1\"; } else if (host.contains(\"tenant2\")) { realm = \"tenant2\"; } else { throw new IllegalStateException(\"Not able to guess the keycloak-saml.xml to load\"); } InputStream is = getClass().getResourceAsStream(\"/\" + realm + \"-keycloak-saml.xml\"); if (is == null) { throw new IllegalStateException(\"Not able to find the file /\" + realm + \"-keycloak-saml.xml\"); } ResourceLoader loader = new ResourceLoader() { @Override public InputStream getResourceAsStream(String path) { return getClass().getResourceAsStream(path); } }; try { return new DeploymentBuilder().build(is, loader); } catch (ParsingException e) { throw new IllegalStateException(\"Cannot load SAML deployment\", e); } } }",
"<web-app> <context-param> <param-name>keycloak.config.resolver</param-name> <param-value>example.SamlMultiTenantResolver</param-value> </context-param> </web-app>",
"<samlp:Status> <samlp:StatusCode Value=\"urn:oasis:names:tc:SAML:2.0:status:Responder\"> <samlp:StatusCode Value=\"urn:oasis:names:tc:SAML:2.0:status:AuthnFailed\"/> </samlp:StatusCode> <samlp:StatusMessage>authentication_expired</samlp:StatusMessage> </samlp:Status>",
"<SP entityID=\"sp\" sslPolicy=\"ssl\" nameIDPolicyFormat=\"format\" forceAuthentication=\"true\" isPassive=\"false\" keepDOMAssertion=\"true\" autodetectBearerOnly=\"false\"> </SP>",
"<Keys> <Key signing=\"true\" > </Key> </Keys>",
"<Keys> <Key signing=\"true\" > <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <PrivateKey alias=\"myPrivate\" password=\"test123\"/> <Certificate alias=\"myCertAlias\"/> </KeyStore> </Key> </Keys>",
"<Keys> <Key signing=\"true\"> <PrivateKeyPem> 2341251234AB31234==231BB998311222423522334 </PrivateKeyPem> <CertificatePem> 211111341251234AB31234==231BB998311222423522334 </CertificatePem> </Key> </Keys>",
"<SP ...> <PrincipalNameMapping policy=\"FROM_NAME_ID\"/> </SP> <SP ...> <PrincipalNameMapping policy=\"FROM_ATTRIBUTE\" attribute=\"email\" /> </SP>",
"<RoleIdentifiers> <Attribute name=\"Role\"/> <Attribute name=\"member\"/> <Attribute name=\"memberOf\"/> </RoleIdentifiers>",
"<RoleIdentifiers> </RoleIdentifiers> <RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.resource.location\" value=\"/WEB-INF/role-mappings.properties\"/> </RoleMappingsProvider> <IDP> </IDP>",
"<RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.file.location\" value=\"/opt/mappers/roles.properties\"/> </RoleMappingsProvider>",
"<RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.resource.location\" value=\"/WEB-INF/conf/roles.properties\"/> </RoleMappingsProvider>",
"roleA=roleX,roleY roleB= kc_user=roleZ",
"role\\u0020A=roleX,roleY",
"<IDP entityID=\"idp\" signaturesRequired=\"true\" signatureAlgorithm=\"RSA_SHA1\" signatureCanonicalizationMethod=\"http://www.w3.org/2001/10/xml-exc-c14n#\"> </IDP>",
"<AllowedClockSkew unit=\"MILLISECONDS\">3500</AllowedClockSkew>",
"<SingleSignOnService signRequest=\"true\" validateResponseSignature=\"true\" requestBinding=\"post\" bindingUrl=\"url\"/>",
"<SingleLogoutService validateRequestSignature=\"true\" validateResponseSignature=\"true\" signRequest=\"true\" signResponse=\"true\" requestBinding=\"redirect\" responseBinding=\"post\" postBindingUrl=\"posturl\" redirectBindingUrl=\"redirecturl\">",
"<IDP entityID=\"idp\"> <Keys> <Key signing=\"true\"> <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <Certificate alias=\"demo\"/> </KeyStore> </Key> </Keys> </IDP>",
"<HttpClient connectionPoolSize=\"10\" disableTrustManager=\"false\" allowAnyHostname=\"false\" clientKeystore=\"classpath:keystore.jks\" clientKeystorePassword=\"pwd\" truststore=\"classpath:truststore.jks\" truststorePassword=\"pwd\" proxyUrl=\"http://proxy/\" socketTimeout=\"5000\" connectionTimeout=\"6000\" connectionTtl=\"500\" />",
"install httpd mod_auth_mellon mod_ssl openssl",
"mkdir /etc/httpd/saml2",
"<Location / > MellonEnable info MellonEndpointPath /mellon/ MellonSPMetadataFile /etc/httpd/saml2/mellon_metadata.xml MellonSPPrivateKeyFile /etc/httpd/saml2/mellon.key MellonSPCertFile /etc/httpd/saml2/mellon.crt MellonIdPMetadataFile /etc/httpd/saml2/idp_metadata.xml </Location> <Location /private > AuthType Mellon MellonEnable auth Require valid-user </Location>",
"MellonSecureCookie On MellonCookieSameSite none",
"fqdn=`hostname` mellon_endpoint_url=\"https://USD{fqdn}/mellon\" mellon_entity_id=\"USD{mellon_endpoint_url}/metadata\" file_prefix=\"USD(echo \"USDmellon_entity_id\" | sed 's/[^A-Za-z.]/_/g' | sed 's/__*/_/g')\"",
"/usr/libexec/mod_auth_mellon/mellon_create_metadata.sh USDmellon_entity_id USDmellon_endpoint_url",
"mv USD{file_prefix}.cert /etc/httpd/saml2/mellon.crt mv USD{file_prefix}.key /etc/httpd/saml2/mellon.key mv USD{file_prefix}.xml /etc/httpd/saml2/mellon_metadata.xml",
"curl -k -o /etc/httpd/saml2/idp_metadata.xml https://USDidp_host/realms/test_realm/protocol/saml/descriptor",
"apachectl configtest",
"systemctl restart httpd.service",
"auth: token: realm: http://localhost:8080/realms/master/protocol/docker-v2/auth service: docker-test issuer: http://localhost:8080/realms/master",
"REGISTRY_AUTH_TOKEN_REALM: http://localhost:8080/realms/master/protocol/docker-v2/auth REGISTRY_AUTH_TOKEN_SERVICE: docker-test REGISTRY_AUTH_TOKEN_ISSUER: http://localhost:8080/realms/master",
"docker login localhost:5000 -u USDusername Password: ******* Login Succeeded",
"Authorization: bearer eyJhbGciOiJSUz",
"Authorization: basic BASE64(client-id + ':' + client-secret)",
"curl -X POST -d '{ \"clientId\": \"myclient\" }' -H \"Content-Type:application/json\" -H \"Authorization: bearer eyJhbGciOiJSUz...\" http://localhost:8080/realms/master/clients-registrations/default",
"String token = \"eyJhbGciOiJSUz...\"; ClientRepresentation client = new ClientRepresentation(); client.setClientId(CLIENT_ID); ClientRegistration reg = ClientRegistration.create() .url(\"http://localhost:8080\", \"myrealm\") .build(); reg.auth(Auth.token(token)); client = reg.create(client); String registrationAccessToken = client.getRegistrationAccessToken();",
"export PATH=USDPATH:USDKEYCLOAK_HOME/bin kcreg.sh",
"c:\\> set PATH=%PATH%;%KEYCLOAK_HOME%\\bin c:\\> kcreg",
"kcreg.sh config credentials --server http://localhost:8080 --realm demo --user user --client reg-cli kcreg.sh create -s clientId=my_client -s 'redirectUris=[\"http://localhost:8980/myapp/*\"]' kcreg.sh get my_client",
"c:\\> kcreg config credentials --server http://localhost:8080 --realm demo --user user --client reg-cli c:\\> kcreg create -s clientId=my_client -s \"redirectUris=[\\\"http://localhost:8980/myapp/*\\\"]\" c:\\> kcreg get my_client",
"kcreg.sh config truststore --trustpass USDPASSWORD ~/.keycloak/truststore.jks",
"c:\\> kcreg config truststore --trustpass %PASSWORD% %HOMEPATH%\\.keycloak\\truststore.jks",
"kcreg.sh help",
"c:\\> kcreg help",
"kcreg.sh config initial-token USDTOKEN kcreg.sh create -s clientId=myclient",
"kcreg.sh create -s clientId=myclient -t USDTOKEN",
"c:\\> kcreg config initial-token %TOKEN% c:\\> kcreg create -s clientId=myclient",
"c:\\> kcreg create -s clientId=myclient -t %TOKEN%",
"kcreg.sh create -f client-template.json -s clientId=myclient -s baseUrl=/myclient -s 'redirectUris=[\"/myclient/*\"]' -o",
"C:\\> kcreg create -f client-template.json -s clientId=myclient -s baseUrl=/myclient -s \"redirectUris=[\\\"/myclient/*\\\"]\" -o",
"kcreg.sh get myclient",
"C:\\> kcreg get myclient",
"kcreg.sh get myclient -e install > keycloak.json",
"C:\\> kcreg get myclient -e install > keycloak.json",
"kcreg.sh get myclient > myclient.json vi myclient.json kcreg.sh update myclient -f myclient.json",
"C:\\> kcreg get myclient > myclient.json C:\\> notepad myclient.json C:\\> kcreg update myclient -f myclient.json",
"kcreg.sh update myclient -s enabled=false -d redirectUris",
"C:\\> kcreg update myclient -s enabled=false -d redirectUris",
"kcreg.sh update myclient --merge -d redirectUris -f mychanges.json",
"C:\\> kcreg update myclient --merge -d redirectUris -f mychanges.json",
"kcreg.sh delete myclient",
"C:\\> kcreg delete myclient",
"/realms/{realm}/protocol/openid-connect/token",
"{ \"access_token\" : \".....\", \"refresh_token\" : \".....\", \"expires_in\" : \"....\" }",
"{ \"error\" : \"....\" \"error_description\" : \"....\" }",
"curl -X POST -d \"client_id=starting-client\" -d \"client_secret=the client secret\" --data-urlencode \"grant_type=urn:ietf:params:oauth:grant-type:token-exchange\" -d \"subject_token=....\" --data-urlencode \"requested_token_type=urn:ietf:params:oauth:token-type:refresh_token\" -d \"audience=target-client\" http://localhost:8080/realms/myrealm/protocol/openid-connect/token",
"{ \"access_token\" : \"....\", \"refresh_token\" : \"....\", \"expires_in\" : 3600 }",
"curl -X POST -d \"client_id=starting-client\" -d \"client_secret=the client secret\" --data-urlencode \"grant_type=urn:ietf:params:oauth:grant-type:token-exchange\" -d \"subject_token=....\" --data-urlencode \"requested_token_type=urn:ietf:params:oauth:token-type:access_token\" -d \"requested_issuer=google\" http://localhost:8080/realms/myrealm/protocol/openid-connect/token",
"{ \"access_token\" : \"....\", \"expires_in\" : 3600 \"account-link-url\" : \"https://....\" }",
"{ \"error\" : \"....\", \"error_description\" : \"...\" \"account-link-url\" : \"https://....\" }",
"curl -X POST -d \"client_id=starting-client\" -d \"client_secret=the client secret\" --data-urlencode \"grant_type=urn:ietf:params:oauth:grant-type:token-exchange\" -d \"subject_token=....\" -d \"subject_issuer=myOidcProvider\" --data-urlencode \"subject_token_type=urn:ietf:params:oauth:token-type:access_token\" -d \"audience=target-client\" http://localhost:8080/realms/myrealm/protocol/openid-connect/token",
"{ \"access_token\" : \"....\", \"refresh_token\" : \"....\", \"expires_in\" : 3600 }",
"curl -X POST -d \"client_id=starting-client\" -d \"client_secret=the client secret\" --data-urlencode \"grant_type=urn:ietf:params:oauth:grant-type:token-exchange\" -d \"subject_token=....\" --data-urlencode \"requested_token_type=urn:ietf:params:oauth:token-type:access_token\" -d \"audience=target-client\" -d \"requested_subject=wburke\" http://localhost:8080/realms/myrealm/protocol/openid-connect/token",
"curl -X POST -d \"client_id=starting-client\" -d \"client_secret=the client secret\" --data-urlencode \"grant_type=urn:ietf:params:oauth:grant-type:token-exchange\" -d \"requested_subject=wburke\" http://localhost:8080/realms/myrealm/protocol/openid-connect/token",
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-admin-client</artifactId> <version>999.0.0-SNAPSHOT</version> </dependency>",
"import org.keycloak.admin.client.Keycloak; import org.keycloak.representations.idm.RealmRepresentation; Keycloak keycloak = Keycloak.getInstance( \"http://localhost:8080\", \"master\", \"admin\", \"password\", \"admin-cli\"); RealmRepresentation realm = keycloak.realm(\"master\").toRepresentation();",
"<dependencies> <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-authz-client</artifactId> <version>999.0.0-SNAPSHOT</version> </dependency> </dependencies>",
"{ \"realm\": \"hello-world-authz\", \"auth-server-url\" : \"http://localhost:8080\", \"resource\" : \"hello-world-authz-service\", \"credentials\": { \"secret\": \"secret\" } }",
"// create a new instance based on the configuration defined in a keycloak.json located in your classpath AuthzClient authzClient = AuthzClient.create();",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // send the entitlement request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(request); String rpt = response.getToken(); System.out.println(\"You got an RPT: \" + rpt); // now you can use the RPT to access protected resources on the resource server",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // add permissions to the request based on the resources and scopes you want to check access request.addPermission(\"Default Resource\"); // send the entitlement request to the server in order to // obtain an RPT with permissions for a single resource AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(request); String rpt = response.getToken(); System.out.println(\"You got an RPT: \" + rpt); // now you can use the RPT to access protected resources on the resource server",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create a new resource representation with the information we want ResourceRepresentation newResource = new ResourceRepresentation(); newResource.setName(\"New Resource\"); newResource.setType(\"urn:hello-world-authz:resources:example\"); newResource.addScope(new ScopeRepresentation(\"urn:hello-world-authz:scopes:view\")); ProtectedResource resourceClient = authzClient.protection().resource(); ResourceRepresentation existingResource = resourceClient.findByName(newResource.getName()); if (existingResource != null) { resourceClient.delete(existingResource.getId()); } // create the resource on the server ResourceRepresentation response = resourceClient.create(newResource); String resourceId = response.getId(); // query the resource using its newly generated id ResourceRepresentation resource = resourceClient.findById(resourceId); System.out.println(resource);",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // send the authorization request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(); String rpt = response.getToken(); // introspect the token TokenIntrospectionResponse requestingPartyToken = authzClient.protection().introspectRequestingPartyToken(rpt); System.out.println(\"Token status is: \" + requestingPartyToken.getActive()); System.out.println(\"Permissions granted by the server: \"); for (Permission granted : requestingPartyToken.getPermissions()) { System.out.println(granted); }",
"\"credentials\": { \"secret\": \"19666a4f-32dd-4049-b082-684c74115f28\" }",
"\"credentials\": { \"jwt\": { \"client-keystore-file\": \"classpath:keystore-client.jks\", \"client-keystore-type\": \"JKS\", \"client-keystore-password\": \"storepass\", \"client-key-password\": \"keypass\", \"client-key-alias\": \"clientkey\", \"token-expiration\": 10 } }",
"\"credentials\": { \"secret-jwt\": { \"secret\": \"19666a4f-32dd-4049-b082-684c74115f28\", \"algorithm\": \"HS512\" } }",
"<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-policy-enforcer</artifactId> <version>999.0.0-SNAPSHOT</version> </dependency>",
"{ \"enforcement-mode\" : \"ENFORCING\", \"paths\": [ { \"path\" : \"/users/*\", \"methods\" : [ { \"method\": \"GET\", \"scopes\" : [\"urn:app.com:scopes:view\"] }, { \"method\": \"POST\", \"scopes\" : [\"urn:app.com:scopes:create\"] } ] } ] }",
"{ \"paths\": [ { \"path\": \"/protected/resource\", \"claim-information-point\": { \"claims\": { \"claim-from-request-parameter\": \"{request.parameter['a']}\", \"claim-from-header\": \"{request.header['b']}\", \"claim-from-cookie\": \"{request.cookie['c']}\", \"claim-from-remoteAddr\": \"{request.remoteAddr}\", \"claim-from-method\": \"{request.method}\", \"claim-from-uri\": \"{request.uri}\", \"claim-from-relativePath\": \"{request.relativePath}\", \"claim-from-secure\": \"{request.secure}\", \"claim-from-json-body-object\": \"{request.body['/a/b/c']}\", \"claim-from-json-body-array\": \"{request.body['/d/1']}\", \"claim-from-body\": \"{request.body}\", \"claim-from-static-value\": \"static value\", \"claim-from-multiple-static-value\": [\"static\", \"value\"], \"param-replace-multiple-placeholder\": \"Test {keycloak.access_token['/custom_claim/0']} and {request.parameter['a']}\" } } } ] }",
"{ \"paths\": [ { \"path\": \"/protected/resource\", \"claim-information-point\": { \"http\": { \"claims\": { \"claim-a\": \"/a\", \"claim-d\": \"/d\", \"claim-d0\": \"/d/0\", \"claim-d-all\": [ \"/d/0\", \"/d/1\" ] }, \"url\": \"http://mycompany/claim-provider\", \"method\": \"POST\", \"headers\": { \"Content-Type\": \"application/x-www-form-urlencoded\", \"header-b\": [ \"header-b-value1\", \"header-b-value2\" ], \"Authorization\": \"Bearer {keycloak.access_token}\" }, \"parameters\": { \"param-a\": [ \"param-a-value1\", \"param-a-value2\" ], \"param-subject\": \"{keycloak.access_token['/sub']}\", \"param-user-name\": \"{keycloak.access_token['/preferred_username']}\", \"param-other-claims\": \"{keycloak.access_token['/custom_claim']}\" } } } } ] }",
"{ \"paths\": [ { \"path\": \"/protected/resource\", \"claim-information-point\": { \"claims\": { \"claim-from-static-value\": \"static value\", \"claim-from-multiple-static-value\": [\"static\", \"value\"] } } } ] }",
"public class MyClaimInformationPointProviderFactory implements ClaimInformationPointProviderFactory<MyClaimInformationPointProvider> { @Override public String getName() { return \"my-claims\"; } @Override public void init(PolicyEnforcer policyEnforcer) { } @Override public MyClaimInformationPointProvider create(Map<String, Object> config) { return new MyClaimInformationPointProvider(config); } }",
"public class MyClaimInformationPointProvider implements ClaimInformationPointProvider { private final Map<String, Object> config; public MyClaimInformationPointProvider(Map<String, Object> config) { this.config = config; } @Override public Map<String, List<String>> resolve(HttpFacade httpFacade) { Map<String, List<String>> claims = new HashMap<>(); // put whatever claim you want into the map return claims; } }",
"HttpServletRequest request = // obtain javax.servlet.http.HttpServletRequest AuthorizationContext authzContext = (AuthorizationContext) request.getAttribute(AuthorizationContext.class.getName());",
"if (authzContext.hasResourcePermission(\"Project Resource\")) { // user can access the Project Resource } if (authzContext.hasResourcePermission(\"Admin Resource\")) { // user can access administration resources } if (authzContext.hasScopePermission(\"urn:project.com:project:create\")) { // user can create new projects }",
"if (User.hasRole('user')) { // user can access the Project Resource } if (User.hasRole('admin')) { // user can access administration resources } if (User.hasRole('project-manager')) { // user can create new projects }",
"ClientAuthorizationContext clientContext = ClientAuthorizationContext.class.cast(authzContext); AuthzClient authzClient = clientContext.getClient();",
"{ \"truststore\": \"path_to_your_trust_store\", \"truststore-password\": \"trust_store_password\" }"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html-single/securing_applications_and_services_guide/index |
Chapter 7. Configuring kube-rbac-proxy for Knative for Apache Kafka | Chapter 7. Configuring kube-rbac-proxy for Knative for Apache Kafka The kube-rbac-proxy component provides internal authentication and authorization capabilities for Knative for Apache Kafka. 7.1. Configuring kube-rbac-proxy resources for Knative for Apache Kafka You can globally override resource allocation for the kube-rbac-proxy container by using the OpenShift Serverless Operator CR. Note You can also override resource allocation for a specific deployment. The following configuration sets Knative Kafka kube-rbac-proxy minimum and maximum CPU and memory allocation: KnativeKafka CR example apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-kafka spec: config: workload: "kube-rbac-proxy-cpu-request": "10m" 1 "kube-rbac-proxy-memory-request": "20Mi" 2 "kube-rbac-proxy-cpu-limit": "100m" 3 "kube-rbac-proxy-memory-limit": "100Mi" 4 1 Sets minimum CPU allocation. 2 Sets minimum RAM allocation. 3 Sets maximum CPU allocation. 4 Sets maximum RAM allocation. | [
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-kafka spec: config: workload: \"kube-rbac-proxy-cpu-request\": \"10m\" 1 \"kube-rbac-proxy-memory-request\": \"20Mi\" 2 \"kube-rbac-proxy-cpu-limit\": \"100m\" 3 \"kube-rbac-proxy-memory-limit\": \"100Mi\" 4"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/installing_openshift_serverless/kube-rbac-proxy-kafka |
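The KnativeKafka override shown in this chapter can be checked after the Operator reconciles it. A minimal verification sketch, assuming the CR was saved as knative-kafka.yaml, the Knative Kafka data plane runs in the knative-eventing namespace, one of its deployments is named kafka-broker-receiver, and the sidecar container is named kube-rbac-proxy (adjust these names to your installation):
$ oc apply -f knative-kafka.yaml
$ oc -n knative-eventing get deployment kafka-broker-receiver -o jsonpath='{.spec.template.spec.containers[?(@.name=="kube-rbac-proxy")].resources}'
The second command prints only the resources stanza of the kube-rbac-proxy container, so the 10m/20Mi requests and 100m/100Mi limits from the CR should appear once the change has rolled out.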
Chapter 1. Introduction | Chapter 1. Introduction Security Enhanced Linux (SELinux) provides an additional layer of system security. SELinux fundamentally answers the question: "May <subject> do <action> to <object>", for example: "May a web server access files in users' home directories?". The standard access policy based on the user, group, and other permissions, known as Discretionary Access Control (DAC), does not enable system administrators to create comprehensive and fine-grained security policies, such as restricting specific applications to only viewing log files, while allowing other applications to append new data to the log files. SELinux implements Mandatory Access Control (MAC). Every process and system resource has a special security label called a SELinux context. A SELinux context, sometimes referred to as a SELinux label, is an identifier which abstracts away the system-level details and focuses on the security properties of the entity. Not only does this provide a consistent way of referencing objects in the SELinux policy, but it also removes any ambiguity that can be found in other identification methods; for example, a file can have multiple valid path names on a system that makes use of bind mounts. The SELinux policy uses these contexts in a series of rules which define how processes can interact with each other and the various system resources. By default, the policy does not allow any interaction unless a rule explicitly grants access. Note It is important to remember that SELinux policy rules are checked after DAC rules. SELinux policy rules are not used if DAC rules deny access first, which means that no SELinux denial is logged if the traditional DAC rules prevent the access. SELinux contexts have several fields: user, role, type, and security level. The SELinux type information is perhaps the most important when it comes to the SELinux policy, as the most common policy rule which defines the allowed interactions between processes and system resources uses SELinux types and not the full SELinux context. SELinux types usually end with _t. For example, the type name for the web server is httpd_t. The type context for files and directories normally found in /var/www/html/ is httpd_sys_content_t. The type context for files and directories normally found in /tmp and /var/tmp/ is tmp_t. The type context for web server ports is http_port_t. For example, there is a policy rule that permits Apache (the web server process running as httpd_t) to access files and directories with a context normally found in /var/www/html/ and other web server directories (httpd_sys_content_t). There is no allow rule in the policy for files normally found in /tmp and /var/tmp/, so access is not permitted. With SELinux, even if Apache is compromised and a malicious script gains access, it is still not able to access the /tmp directory. Figure 1.1. SELinux allows the Apache process running as httpd_t to access the /var/www/html/ directory and it denies the same process access to the /data/mysql/ directory (because there is no allow rule for the httpd_t and mysqld_db_t type contexts). On the other hand, the MariaDB process running as mysqld_t is able to access the /data/mysql/ directory, and SELinux also correctly denies the process with the mysqld_t type access to the /var/www/html/ directory labeled as httpd_sys_content_t. Additional Resources For more information, see the following documentation: The selinux(8) man page and man pages listed by the apropos selinux command.
Man pages listed by the man -k _selinux command when the selinux-policy-doc package is installed. See Section 11.3.3, "Manual Pages for Services" for more information. The SELinux Coloring Book SELinux Wiki FAQ 1.1. Benefits of running SELinux SELinux provides the following benefits: All processes and files are labeled. SELinux policy rules define how processes interact with files, as well as how processes interact with each other. Access is only allowed if an SELinux policy rule exists that specifically allows it. Fine-grained access control. Stepping beyond traditional UNIX permissions that are controlled at user discretion and based on Linux user and group IDs, SELinux access decisions are based on all available information, such as an SELinux user, role, type, and, optionally, a security level. SELinux policy is administratively-defined and enforced system-wide. Improved mitigation for privilege escalation attacks. Processes run in domains, and are therefore separated from each other. SELinux policy rules define how processes access files and other processes. If a process is compromised, the attacker only has access to the normal functions of that process, and to files the process has been configured to have access to. For example, if the Apache HTTP Server is compromised, an attacker cannot use that process to read files in user home directories, unless a specific SELinux policy rule was added or configured to allow such access. SELinux can be used to enforce data confidentiality and integrity, as well as protecting processes from untrusted inputs. However, SELinux is not: antivirus software, replacement for passwords, firewalls, and other security systems, all-in-one security solution. SELinux is designed to enhance existing security solutions, not replace them. Even when running SELinux, it is important to continue to follow good security practices, such as keeping software up-to-date, using hard-to-guess passwords, or firewalls. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-security-enhanced_linux-introduction |
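The type enforcement described above can be observed directly on a running system. A minimal sketch, assuming the httpd package is installed and the web server is running; the commands only read labels and change nothing:
# ps -eZ | grep httpd
# ls -dZ /var/www/html /tmp
# getenforce
The first command shows the web server processes confined in the httpd_t domain, the second shows the httpd_sys_content_t and tmp_t contexts on the directories discussed in this chapter, and the third reports whether the policy is currently being enforced or only logged.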
2.4. Configuring max_luns | 2.4. Configuring max_luns If RAID storage in your cluster presents multiple LUNs (Logical Unit Numbers), each cluster node must be able to access those LUNs. To enable access to all LUNs presented, configure max_luns in the /etc/modprobe.conf file of each node as follows: Open /etc/modprobe.conf with a text editor. Append the following line to /etc/modprobe.conf. Set N to the highest numbered LUN that is presented by RAID storage. For example, with the following line appended to the /etc/modprobe.conf file, a node can access LUNs numbered as high as 255: Save /etc/modprobe.conf. Run mkinitrd to rebuild initrd for the currently running kernel as follows. Set the kernel variable to the currently running kernel: For example, the currently running kernel in the following mkinitrd command is 2.6.9-34.0.2.EL: Note You can determine the currently running kernel by running uname -r. Restart the node.
"options scsi_mod max_luns= N",
"options scsi_mod max_luns=255",
"cd /boot mkinitrd -f -v initrd- kernel .img kernel",
"mkinitrd -f -v initrd-2.6.9-34.0.2.EL.img 2.6.9-34.0.2.EL"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-max-luns-CA |
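After the node restarts, it is worth confirming that the rebuilt initrd matches the running kernel and that the node now detects every LUN presented by the RAID storage. A minimal verification sketch, assuming the storage is visible through the standard SCSI midlayer; the LUN count you expect depends on your array configuration:
# uname -r
# cat /proc/scsi/scsi | grep -c "Lun:"
The first command shows the kernel version the initrd was rebuilt for, and the second counts the SCSI device entries (one line per detected LUN) in /proc/scsi/scsi.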
Red Hat Quay API guide | Red Hat Quay API guide Red Hat Quay 3 Red Hat Quay API Guide Red Hat OpenShift Documentation Team | [
"FEATURE_ASSIGN_OAUTH_TOKEN: true",
"This will prompt user <username> to generate a token with the following permissions: repo:create",
"Token assigned successfully",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate\"",
"{\"name\": \"test-org+test\", \"created\": \"Fri, 10 May 2024 17:46:02 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate\"",
"{\"name\": \"quayadmin+test\", \"created\": \"Fri, 10 May 2024 14:12:11 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}",
"FEATURE_REFERRERS_API: true",
"echo -n '<username>:<password>' | base64",
"abcdeWFkbWluOjE5ODlraWROZXQxIQ==",
"curl --location '<quay-server.example.com>/v2/auth?service=<quay-server.example.com>&scope=repository:quay/listocireferrs:pull,push' --header 'Authorization: Basic <base64_username:password_encode_token>' -k | jq",
"{ \"token\": \"<example_secret> }",
"BROWSER_API_CALLS_XHR_ONLY: false",
"curl -X GET -H \"Authorization: Bearer <your_access_token>\" 1 https://<quay-server.example.com>/api/v1/<example>/<endpoint>/ 2",
"*createAppToken* 1 Create a new app specific token for user. 2 *POST /api/v1/user/apptoken* 3 **Authorizations: **oauth2_implicit (**user:admin**) 4 Request body schema (application/json) *Path parameters* 5 Name: **title** Description: Friendly name to help identify the token. Schema: string *Responses* 6 |HTTP Code|Description |Schema |201 |Successful creation | |400 |Bad Request |<<_apierror,ApiError>> |401 |Session required |<<_apierror,ApiError>> |403 |Unauthorized access |<<_apierror,ApiError>> |404 |Not found |<<_apierror,ApiError>> |===",
"curl -X POST -H \"Authorization: Bearer <access_token>\" 1 -H \"Content-Type: application/json\" -d '{ \"title\": \"MyAppToken\" 2 }' \"http://quay-server.example.com/api/v1/user/apptoken\" 3",
"{\"token\": {\"uuid\": \"6b5aa827-cee5-4fbe-a434-4b7b8a245ca7\", \"title\": \"MyAppToken\", \"last_accessed\": null, \"created\": \"Wed, 08 Jan 2025 19:32:48 -0000\", \"expiration\": null, \"token_code\": \"K2YQB1YO0ABYV5OBUYOMF9MCUABN12Y608Q9RHFXBI8K7IE8TYCI4WEEXSVH1AXWKZCKGUVA57PSA8N48PWED9F27PXATFUVUD9QDNCE9GOT9Q8ACYPIN0HL\"}}",
"import requests 1 Hard-coded values API_BASE_URL = \"http://<quay-server.example.com>/api/v1\" 2 ACCESS_TOKEN = \"<access_token>\" 3 ORG_NAME = \"<organization_name>\" 4 def get_all_organization_applications(): url = f\"{API_BASE_URL}/organization/{ORG_NAME}/applications\" headers = { \"Authorization\": f\"Bearer {ACCESS_TOKEN}\" } response = requests.get(url, headers=headers) if response.status_code == 200: try: applications = response.json() # Print the raw response for debugging print(\"Raw response:\", applications) # Adjust parsing logic based on the response structure if isinstance(applications, dict) and 'applications' in applications: applications = applications['applications'] if isinstance(applications, list): print(\"Organization applications retrieved successfully:\") for app in applications: # Updated key from 'title' to 'name' print(f\"Name: {app['name']}, Client ID: {app['client_id']}\") return applications else: print(\"Unexpected response format.\") return [] except requests.exceptions.JSONDecodeError: print(\"Error decoding JSON response:\", response.text) return [] else: print(f\"Failed to retrieve applications. Status code: {response.status_code}, Response: {response.text}\") return [] def delete_organization_application(client_id): url = f\"{API_BASE_URL}/organization/{ORG_NAME}/applications/{client_id}\" headers = { \"Authorization\": f\"Bearer {ACCESS_TOKEN}\" } response = requests.delete(url, headers=headers) if response.status_code == 204: print(f\"Application {client_id} deleted successfully.\") else: print(f\"Failed to delete application {client_id}. Status code: {response.status_code}, Response: {response.text}\") def main(): applications = get_all_organization_applications() for app in applications: if app['name'] != \"<admin_token_app>\": <5> # Skip the \"admin-token-app\" delete_organization_application(app['client_id']) else: print(f\"Skipping deletion of application: {app['name']}\") Execute the main function main()",
"crontab -e",
"0 0 1 * * sudo python /path/to/prune_images.py >> /var/log/prune_images.log 2>&1",
"curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"title\": \"MyAppToken\" }' \"http://quay-server.example.com/api/v1/user/apptoken\"",
"{\"token\": {\"uuid\": \"6b5aa827-cee5-4fbe-a434-4b7b8a245ca7\", \"title\": \"MyAppToken\", \"last_accessed\": null, \"created\": \"Wed, 08 Jan 2025 19:32:48 -0000\", \"expiration\": null, \"token_code\": \"K2YQB1YO0ABYV5OBUYOMF9MCUABN12Y608Q9RHFXBI8K7IE8TYCI4WEEXSVH1AXWKZCKGUVA57PSA8N48PWED9F27PXATFUVUD9QDNCE9GOT9Q8ACYPIN0HL\"}}",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"http://quay-server.example.com/api/v1/user/apptoken\"",
"{\"tokens\": [{\"uuid\": \"6b5aa827-cee5-4fbe-a434-4b7b8a245ca7\", \"title\": \"MyAppToken\", \"last_accessed\": null, \"created\": \"Wed, 08 Jan 2025 19:32:48 -0000\", \"expiration\": null}], \"only_expiring\": null}",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"http://quay-server.example.com/api/v1/user/apptoken/<token_uuid>\"",
"{\"token\": {\"uuid\": \"6b5aa827-cee5-4fbe-a434-4b7b8a245ca7\", \"title\": \"MyAppToken\", \"last_accessed\": null, \"created\": \"Wed, 08 Jan 2025 19:32:48 -0000\", \"expiration\": null, \"token_code\": \"K2YQB1YO0ABYV5OBUYOMF9MCUABN12Y608Q9RHFXBI8K7IE8TYCI4WEEXSVH1AXWKZCKGUVA57PSA8N48PWED9F27PXATFUVUD9QDNCE9GOT9Q8ACYPIN0HL\"}}",
"curl -X DELETE -H \"Authorization: Bearer <access_token>\" \"http://quay-server.example.com/api/v1/user/apptoken/<token_uuid>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/discovery?query=true\" -H \"Authorization: Bearer <access_token>\"",
"--- : \"Manage the tags of a repository.\"}, {\"name\": \"team\", \"description\": \"Create, list and manage an organization's teams.\"}, {\"name\": \"trigger\", \"description\": \"Create, list and manage build triggers.\"}, {\"name\": \"user\", \"description\": \"Manage the current user.\"}, {\"name\": \"userfiles\", \"description\": \"\"}]} ---",
"curl -X GET \"https://<quay-server.example.com>/api/v1/error/<error_type>\" -H \"Authorization: Bearer <access_token>\"",
"curl: (7) Failed to connect to quay-server.example.com port 443 after 0 ms: Couldn't connect to server",
"curl -X POST \"https://<quay-server.example.com>/api/v1/messages\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"message\": { \"content\": \"Hi\", \"media_type\": \"text/plain\", \"severity\": \"info\" } }'",
"curl -X GET \"https://<quay-server.example.com>/api/v1/messages\" -H \"Authorization: Bearer <access_token>\"",
"{\"messages\": [{\"uuid\": \"ecababd4-3451-4458-b5db-801684137444\", \"content\": \"Hi\", \"severity\": \"info\", \"media_type\": \"text/plain\"}]}",
"curl -X DELETE \"https://<quay-server.example.com>/api/v1/message/<uuid>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"https://<quay-server.example.com>/api/v1/user/aggregatelogs\"",
"{\"aggregated\": [{\"kind\": \"create_tag\", \"count\": 1, \"datetime\": \"Tue, 18 Jun 2024 00:00:00 -0000\"}, {\"kind\": \"manifest_label_add\", \"count\": 1, \"datetime\": \"Tue, 18 Jun 2024 00:00:00 -0000\"}, {\"kind\": \"push_repo\", \"count\": 2, \"datetime\": \"Tue, 18 Jun 2024 00:00:00 -0000\"}, {\"kind\": \"revert_tag\", \"count\": 1, \"datetime\": \"Tue, 18 Jun 2024 00:00:00 -0000\"}]}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/user/aggregatelogs?performer=<username>&starttime=<MM/DD/YYYY>&endtime=<MM/DD/YYYY>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/organization/{orgname}/aggregatelogs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/repository/<repository_name>/<namespace>/aggregatelogs?starttime=2024-01-01&endtime=2024-06-18\"\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/user/logs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://quay-server.example.com/api/v1/user/logs?performer=quayuser&starttime=01/01/2024&endtime=06/18/2024\"",
"--- {\"start_time\": \"Mon, 01 Jan 2024 00:00:00 -0000\", \"end_time\": \"Wed, 19 Jun 2024 00:00:00 -0000\", \"logs\": [{\"kind\": \"revert_tag\", \"metadata\": {\"username\": \"quayuser\", \"repo\": \"busybox\", \"tag\": \"test-two\", \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\"}, \"ip\": \"192.168.1.131\", \"datetime\": \"Tue, 18 Jun 2024 18:59:13 -0000\", \"performer\": {\"kind\": \"user\", \"name\": \"quayuser\", \"is_robot\": false, \"avatar\": {\"name\": \"quayuser\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}}}, {\"kind\": \"push_repo\", \"metadata\": {\"repo\": \"busybox\", \"namespace\": \"quayuser\", \"user-agent\": \"containers/5.30.1 (github.com/containers/image)\", \"tag\": \"test-two\", \"username\": \"quayuser\", } ---",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://<quay-server.example.com>/api/v1/organization/{orgname}/logs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://<quay-server.example.com>/api/v1/repository/{repository}/logs\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"<MM/DD/YYYY>\", \"endtime\": \"<MM/DD/YYYY>\", \"callback_email\": \"[email protected]\" }' \"http://<quay-server.example.com>/api/v1/user/exportlogs\"",
"{\"export_id\": \"6a0b9ea9-444c-4a19-9db8-113201c38cd4\"}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"<MM/DD/YYYY>\", \"endtime\": \"<MM/DD/YYYY>\", \"callback_email\": \"[email protected]\" }' \"http://<quay-server.example.com>/api/v1/organization/{orgname}/exportlogs\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"2024-01-01\", \"endtime\": \"2024-06-18\", \"callback_url\": \"http://your-callback-url.example.com\" }' \"http://<quay-server.example.com>/api/v1/repository/{repository}/exportlogs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels",
"{\"labels\": [{\"id\": \"e9f717d2-c1dd-4626-802d-733a029d17ad\", \"key\": \"org.opencontainers.image.url\", \"value\": \"https://github.com/docker-library/busybox\", \"source_type\": \"manifest\", \"media_type\": \"text/plain\"}, {\"id\": \"2d34ec64-4051-43ad-ae06-d5f81003576a\", \"key\": \"org.opencontainers.image.version\", \"value\": \"1.36.1-glibc\", \"source_type\": \"manifest\", \"media_type\": \"text/plain\"}]}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<label_id>",
"{\"id\": \"e9f717d2-c1dd-4626-802d-733a029d17ad\", \"key\": \"org.opencontainers.image.url\", \"value\": \"https://github.com/docker-library/busybox\", \"source_type\": \"manifest\", \"media_type\": \"text/plain\"}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"key\": \"<key>\", \"value\": \"<value>\", \"media_type\": \"<media_type>\" }' https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels",
"{\"label\": {\"id\": \"346593fd-18c8-49db-854f-4cb1fb76ff9c\", \"key\": \"example-key\", \"value\": \"example-value\", \"source_type\": \"api\", \"media_type\": \"text/plain\"}}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<labelid>",
"curl -X POST \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"is_enabled\": <is_enabled>, \"external_reference\": \"<external_reference>\", \"external_registry_username\": \"<external_registry_username>\", \"external_registry_password\": \"<external_registry_password>\", \"sync_start_date\": \"<sync_start_date>\", \"sync_interval\": <sync_interval>, \"robot_username\": \"<robot_username>\", \"root_rule\": { \"rule\": \"<rule>\", \"rule_type\": \"<rule_type>\" } }'",
"curl -X GET \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror\" -H \"Authorization: Bearer <access_token>\"",
"{\"is_enabled\": true, \"mirror_type\": \"PULL\", \"external_reference\": \"https://quay.io/repository/argoproj/argocd\", \"external_registry_username\": null, \"external_registry_config\": {}, \"sync_interval\": 86400, \"sync_start_date\": \"2025-01-15T12:00:00Z\", \"sync_expiration_date\": null, \"sync_retries_remaining\": 3, \"sync_status\": \"NEVER_RUN\", \"root_rule\": {\"rule_kind\": \"tag_glob_csv\", \"rule_value\": [\"*.latest*\"]}, \"robot_username\": \"quayadmin+mirror_robot\"}",
"curl -X POST \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror/sync-now\" -H \"Authorization: Bearer <access_token>\"",
"curl -X POST \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror/sync-cancel\" \\",
"curl -X PUT \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"is_enabled\": <false>, 1 \"external_reference\": \"<external_reference>\", \"external_registry_username\": \"<external_registry_username>\", \"external_registry_password\": \"<external_registry_password>\", \"sync_start_date\": \"<sync_start_date>\", \"sync_interval\": <sync_interval>, \"robot_username\": \"<robot_username>\", \"root_rule\": { \"rule\": \"<rule>\", \"rule_type\": \"<rule_type>\" } }'",
"curl -X POST \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": 10737418240, \"limits\": \"10 Gi\" }'",
"\"Created\"",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq",
"[{\"id\": 1, \"limit_bytes\": 10737418240, \"limit\": \"10.0 GiB\", \"default_config\": false, \"limits\": [], \"default_config_exists\": false}]",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>\" -H \"Authorization: Bearer <access_token>\"",
"{\"id\": 1, \"limit_bytes\": 10737418240, \"limit\": \"10.0 GiB\", \"default_config\": false, \"limits\": [], \"default_config_exists\": false}",
"curl -X PUT \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": <limit_in_bytes> }'",
"{\"id\": 1, \"limit_bytes\": 21474836480, \"limit\": \"20.0 GiB\", \"default_config\": false, \"limits\": [], \"default_config_exists\": false}",
"curl -X DELETE \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X POST \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": 21474836480, \"type\": \"Reject\", 1 \"threshold_percent\": 90 2 }'",
"\"Created\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit\" -H \"Authorization: Bearer <access_token>\"",
"[{\"id\": 2, \"type\": \"Reject\", \"limit_percent\": 90}]",
"curl -X PUT \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit/<limit_id>\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"type\": \"<type>\", \"threshold_percent\": <threshold_percent> }'",
"{\"id\": 3, \"limit_bytes\": 10737418240, \"limit\": \"10.0 GiB\", \"default_config\": false, \"limits\": [{\"id\": 2, \"type\": \"Warning\", \"limit_percent\": 80}], \"default_config_exists\": false}",
"curl -X DELETE \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit/<limit_id>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/user/quota\" -H \"Authorization: Bearer <access_token>\"",
"[{\"id\": 4, \"limit_bytes\": 2199023255552, \"limit\": \"2.0 TiB\", \"default_config\": false, \"limits\": [], \"default_config_exists\": false}]",
"curl -X GET \"https://<quay-server.example.com>/api/v1/user/quota/{quota_id}\" -H \"Authorization: Bearer <access_token>\"",
"{\"id\": 4, \"limit_bytes\": 2199023255552, \"limit\": \"2.0 TiB\", \"default_config\": false, \"limits\": [], \"default_config_exists\": false}",
"curl -X GET \"https://<quay-server.example.com>/api/v1/user/quota/{quota_id}/limit\" -H \"Authorization: Bearer <access_token>\"",
"[{\"id\": 3, \"type\": \"Reject\", \"limit_percent\": 100}]",
"curl -X GET \"https://<quay-server.example.com>/api/v1/user/quota/{quota_id}/limit/{limit_id}\" -H \"Authorization: Bearer <access_token>\"",
"{\"id\": 4, \"limit_bytes\": 2199023255552, \"limit\": \"2.0 TiB\", \"default_config\": false, \"limits\": [{\"id\": 3, \"type\": \"Reject\", \"limit_percent\": 100}], \"default_config_exists\": false}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"name\": \"<new_organization_name>\" }' \"https://<quay-server.example.com>/api/v1/organization/\"",
"\"Created\"",
"curl -X PUT \"https://<quay-server.example.com>/api/v1/organization/<orgname>\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"email\": \"<org_email>\", \"invoice_email\": <true/false>, \"invoice_email_address\": \"<billing_email>\" }'",
"{\"name\": \"test\", \"email\": \"[email protected]\", \"avatar\": {\"name\": \"test\", \"hash\": \"a15d479002b20f211568fd4419e76686d2b88a4980a5b4c4bc10420776c5f6fe\", \"color\": \"#aec7e8\", \"kind\": \"user\"}, \"is_admin\": true, \"is_member\": true, \"teams\": {\"owners\": {\"name\": \"owners\", \"description\": \"\", \"role\": \"admin\", \"avatar\": {\"name\": \"owners\", \"hash\": \"6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90\", \"color\": \"#c7c7c7\", \"kind\": \"team\"}, \"can_view\": true, \"repo_count\": 0, \"member_count\": 1, \"is_synced\": false}}, \"ordered_teams\": [\"owners\"], \"invoice_email\": true, \"invoice_email_address\": \"[email protected]\", \"tag_expiration_s\": 1209600, \"is_free_account\": true, \"quotas\": [{\"id\": 2, \"limit_bytes\": 10737418240, \"limits\": [{\"id\": 1, \"type\": \"Reject\", \"limit_percent\": 90}]}], \"quota_report\": {\"quota_bytes\": 0, \"configured_quota\": 10737418240, \"running_backfill\": \"complete\", \"backfill_status\": \"complete\"}}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>\"",
"{\"detail\": \"Not Found\", \"error_message\": \"Not Found\", \"error_type\": \"not_found\", \"title\": \"not_found\", \"type\": \"http://<quay-server.example.com>/api/v1/error/not_found\", \"status\": 404}",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/<orgname>/members\" -H \"Authorization: Bearer <access_token>\"",
"{\"members\": [{\"name\": \"quayadmin\", \"kind\": \"user\", \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"6d640d802fe23b93779b987c187a4b7a4d8fbcbd4febe7009bdff58d84498fba\", \"color\": \"#f7b6d2\", \"kind\": \"user\"}, \"teams\": [{\"name\": \"owners\", \"avatar\": {\"name\": \"owners\", \"hash\": \"6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90\", \"color\": \"#c7c7c7\", \"kind\": \"team\"}}], \"repositories\": [\"testrepo\"]}, {\"name\": \"testuser\", \"kind\": \"user\", \"avatar\": {\"name\": \"testuser\", \"hash\": \"f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a\", \"color\": \"#6b6ecf\", \"kind\": \"user\"}, \"teams\": [{\"name\": \"owners\", \"avatar\": {\"name\": \"owners\", \"hash\": \"6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90\", \"color\": \"#c7c7c7\", \"kind\": \"team\"}}], \"repositories\": []}]}",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/{orgname}/collaborators\" -H \"Authorization: Bearer <access_token>\"",
"{\"collaborators\": [user-test]}",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/<orgname>/members/<membername>\" -H \"Authorization: Bearer <access_token>\"",
"{\"name\": \"quayadmin\", \"kind\": \"user\", \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"6d640d802fe23b93779b987c187a4b7a4d8fbcbd4febe7009bdff58d84498fba\", \"color\": \"#f7b6d2\", \"kind\": \"user\"}, \"teams\": [{\"name\": \"owners\", \"avatar\": {\"name\": \"owners\", \"hash\": \"6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90\", \"color\": \"#c7c7c7\", \"kind\": \"team\"}}], \"repositories\": [\"testrepo\"]}",
"curl -X DELETE \"https://<quay-server.example.com>/api/v1/organization/<orgname>/members/<membername>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X POST \"https://<quay-server.example.com>/api/v1/organization/<orgname>/applications\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"name\": \"<app_name>\", \"redirect_uri\": \"<redirect_uri>\", \"application_uri\": \"<application_uri>\", \"description\": \"<app_description>\", \"avatar_email\": \"<avatar_email>\" }'",
"{\"name\": \"new-application\", \"description\": \"\", \"application_uri\": \"\", \"client_id\": \"E6GJSHOZMFBVNHTHNB53\", \"client_secret\": \"SANSWCWSGLVAUQ60L4Q4CEO3C1QAYGEXZK2VKJNI\", \"redirect_uri\": \"\", \"avatar_email\": null}",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/<orgname>/applications\" -H \"Authorization: Bearer <access_token>\"",
"{\"applications\": [{\"name\": \"test\", \"description\": \"\", \"application_uri\": \"\", \"client_id\": \"MCJ61D8KQBFS2DXM56S2\", \"client_secret\": \"J5G7CCX5QCA8Q5XZLWGI7USJPSM4M5MQHJED46CF\", \"redirect_uri\": \"\", \"avatar_email\": null}, {\"name\": \"new-token\", \"description\": \"\", \"application_uri\": \"\", \"client_id\": \"IG58PX2REEY9O08IZFZE\", \"client_secret\": \"2LWTWO89KH26P2CO4TWFM7PGCX4V4SUZES2CIZMR\", \"redirect_uri\": \"\", \"avatar_email\": null}, {\"name\": \"second-token\", \"description\": \"\", \"application_uri\": \"\", \"client_id\": \"6XBK7QY7ACSCN5XBM3GS\", \"client_secret\": \"AVKBOUXTFO3MXBBK5UJD5QCQRN2FWL3O0XPZZT78\", \"redirect_uri\": \"\", \"avatar_email\": null}, {\"name\": \"new-application\", \"description\": \"\", \"application_uri\": \"\", \"client_id\": \"E6GJSHOZMFBVNHTHNB53\", \"client_secret\": \"SANSWCWSGLVAUQ60L4Q4CEO3C1QAYGEXZK2VKJNI\", \"redirect_uri\": \"\", \"avatar_email\": null}]}",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/<orgname>/applications/<client_id>\" -H \"Authorization: Bearer <access_token>\"",
"{\"name\": \"test\", \"description\": \"\", \"application_uri\": \"\", \"client_id\": \"MCJ61D8KQBFS2DXM56S2\", \"client_secret\": \"J5G7CCX5QCA8Q5XZLWGI7USJPSM4M5MQHJED46CF\", \"redirect_uri\": \"\", \"avatar_email\": null}",
"curl -X PUT \"https://quay-server.example.com/api/v1/organization/test/applications/12345\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"name\": \"Updated Application Name\", \"redirect_uri\": \"https://example.com/oauth/callback\", \"application_uri\": \"https://example.com\", \"description\": \"Updated description for the application\", \"avatar_email\": \"[email protected]\" }'",
"curl -X GET \"https://<quay-server.example.com>/api/v1/app/<client_id>\" -H \"Authorization: Bearer <access_token>\"",
"{\"name\": \"new-application3\", \"description\": \"\", \"uri\": \"\", \"avatar\": {\"name\": \"new-application3\", \"hash\": \"a15d479002b20f211568fd4419e76686d2b88a4980a5b4c4bc10420776c5f6fe\", \"color\": \"#aec7e8\", \"kind\": \"app\"}, \"organization\": {\"name\": \"test\", \"email\": \"[email protected]\", \"avatar\": {\"name\": \"test\", \"hash\": \"a15d479002b20f211568fd4419e76686d2b88a4980a5b4c4bc10420776c5f6fe\", \"color\": \"#aec7e8\", \"kind\": \"user\"}, \"is_admin\": true, \"is_member\": true, \"teams\": {}, \"ordered_teams\": [], \"invoice_email\": true, \"invoice_email_address\": \"[email protected]\", \"tag_expiration_s\": 1209600, \"is_free_account\": true, \"quotas\": [{\"id\": 2, \"limit_bytes\": 10737418240, \"limits\": [{\"id\": 1, \"type\": \"Reject\", \"limit_percent\": 90}]}], \"quota_report\": {\"quota_bytes\": 0, \"configured_quota\": 10737418240, \"running_backfill\": \"complete\", \"backfill_status\": \"complete\"}}}",
"curl -X DELETE \"https://<quay-server.example.com>/api/v1/organization/{orgname}/applications/{client_id}\" -H \"Authorization: Bearer <access_token>\"",
"curl -X POST \"https://<quay-server.example.com>/api/v1/organization/<orgname>/proxycache\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"upstream_registry\": \"<upstream_registry>\" }'",
"curl -X POST \"https://<quay-server.example.com>/api/v1/organization/{orgname}/validateproxycache\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"upstream_registry\": \"<upstream_registry>\" }'",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/{orgname}/proxycache\" -H \"Authorization: Bearer <access_token>\"",
"{\"upstream_registry\": \"quay.io\", \"expiration_s\": 86400, \"insecure\": false}",
"curl -X DELETE \"https://<quay-server.example.com>/api/v1/organization/{orgname}/proxycache\" -H \"Authorization: Bearer <access_token>\"",
"\"Deleted\"",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"https://quay-server.example.com/api/v1/repository/<repository_path>/permissions/user/<username>\"",
"{\"role\": \"read\", \"name\": \"testuser\", \"is_robot\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a\", \"color\": \"#6b6ecf\", \"kind\": \"user\"}, \"is_org_member\": false}",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/permissions/user/\"",
"{\"permissions\": {\"quayadmin\": {\"role\": \"admin\", \"name\": \"quayadmin\", \"is_robot\": false, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"6d640d802fe23b93779b987c187a4b7a4d8fbcbd4febe7009bdff58d84498fba\", \"color\": \"#f7b6d2\", \"kind\": \"user\"}, \"is_org_member\": true}, \"test+example\": {\"role\": \"admin\", \"name\": \"test+example\", \"is_robot\": true, \"avatar\": {\"name\": \"test+example\", \"hash\": \"3b03050c26e900500437beee4f7f2a5855ca7e7c5eab4623a023ee613565a60e\", \"color\": \"#a1d99b\", \"kind\": \"robot\"}, \"is_org_member\": true}}}",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"https://quay-server.example.com/api/v1/repository/<repository_path>/permissions/user/<username>/transitive\"",
"{\"permissions\": [{\"role\": \"admin\"}]}",
"curl -X PUT -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"role\": \"<role>\"}' \"https://quay-server.example.com/api/v1/repository/<repository_path>/permissions/user/<username>\"",
"{\"role\": \"admin\", \"name\": \"testuser\", \"is_robot\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a\", \"color\": \"#6b6ecf\", \"kind\": \"user\"}, \"is_org_member\": false}",
"curl -X DELETE -H \"Authorization: Bearer <access_token>\" \"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/permissions/user/<username>\"",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/permissions/team/<teamname>\"",
"{\"role\": \"write\"}",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/permissions/team/\"",
"{\"permissions\": {\"ironmanteam\": {\"role\": \"read\", \"name\": \"ironmanteam\", \"avatar\": {\"name\": \"ironmanteam\", \"hash\": \"8045b2361613622183e87f33a7bfc54e100a41bca41094abb64320df29ef458d\", \"color\": \"#969696\", \"kind\": \"team\"}}, \"sillyteam\": {\"role\": \"read\", \"name\": \"sillyteam\", \"avatar\": {\"name\": \"sillyteam\", \"hash\": \"f275d39bdee2766d2404e2c6dbff28fe290969242e9fcf1ffb2cde36b83448ff\", \"color\": \"#17becf\", \"kind\": \"team\"}}}}",
"curl -X PUT -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"role\": \"<role>\"}' \"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/permissions/team/<teamname>\"",
"{\"role\": \"admin\", \"name\": \"superteam\", \"avatar\": {\"name\": \"superteam\", \"hash\": \"48cb6d114200039fed5c601480653ae7371d5a8849521d4c3bf2418ea013fc0f\", \"color\": \"#9467bd\", \"kind\": \"team\"}}",
"curl -X DELETE -H \"Authorization: Bearer <access_token>\" \"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/permissions/team/<teamname>\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"repository\": \"<new_repository_name>\", \"visibility\": \"<private>\", \"description\": \"<This is a description of the new repository>.\" }' \"https://quay-server.example.com/api/v1/repository\"",
"{\"namespace\": \"quayadmin\", \"name\": \"<new_repository_name>\", \"kind\": \"image\"}",
"curl -X GET -H \"Authorization: Bearer <ACCESS_TOKEN>\" \"https://quay-server.example.com/api/v1/repository?public=true&starred=false&namespace=<NAMESPACE>\"",
"{\"repositories\": [{\"namespace\": \"quayadmin\", \"name\": \"busybox\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"MIRROR\", \"is_starred\": false, \"quota_report\": {\"quota_bytes\": 2280675, \"configured_quota\": 2199023255552}}]}",
"curl -X POST -H \"Authorization: Bearer <ACCESS_TOKEN>\" -H \"Content-Type: application/json\" -d '{ \"visibility\": \"private\" }' \"https://quay-server.example.com/api/v1/repository/<NAMESPACE>/<REPO_NAME>/changevisibility\"",
"{\"success\": true}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"",
"{\"detail\": \"Not Found\", \"error_message\": \"Not Found\", \"error_type\": \"not_found\", \"title\": \"not_found\", \"type\": \"http://quay-server.example.com/api/v1/error/not_found\", \"status\": 404}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"description\": \"This is an updated description for the repository.\" }' \"https://quay-server.example.com/api/v1/repository/<NAMESPACE>/<REPOSITORY>\"",
"{\"success\": true}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"",
"curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"number_of_tags\", \"value\": 10}' http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/",
"curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"creation_date\", \"value\": \"7d\"}' http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/",
"{\"uuid\": \"73d64f05-d587-42d9-af6d-e726a4a80d6e\"}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"creation_date\", \"value\": \"7d\", \"tagPattern\": \"^v*\", \"tagPatternMatches\": <true> 1 }' \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/\"",
"{\"uuid\": \"ebf7448b-93c3-4f14-bf2f-25aa6857c7b0\"}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"creation_date\", \"value\": \"4d\", \"tagPattern\": \"^v*\", \"tagPatternMatches\": true }' \"<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/<uuid>\"",
"curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/",
"{\"policies\": [{\"uuid\": \"ebf7448b-93c3-4f14-bf2f-25aa6857c7b0\", \"method\": \"creation_date\", \"value\": \"4d\", \"tagPattern\": \"^v*\", \"tagPatternMatches\": true}, {\"uuid\": \"da4d0ad7-3c2d-4be8-af63-9c51f9a501bc\", \"method\": \"number_of_tags\", \"value\": 10, \"tagPattern\": null, \"tagPatternMatches\": true}, {\"uuid\": \"17b9fd96-1537-4462-a830-7f53b43f94c2\", \"method\": \"creation_date\", \"value\": \"7d\", \"tagPattern\": \"^v*\", \"tagPatternMatches\": true}]}",
"curl -X DELETE -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/73d64f05-d587-42d9-af6d-e726a4a80d6e",
"curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"number_of_tags\", \"value\": 10}' http://<quay-server.example.com>/api/v1/user/autoprunepolicy/",
"{\"uuid\": \"8c03f995-ca6f-4928-b98d-d75ed8c14859\"}",
"curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/user/autoprunepolicy/",
"curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/user/autoprunepolicy/8c03f995-ca6f-4928-b98d-d75ed8c14859",
"{\"policies\": [{\"uuid\": \"8c03f995-ca6f-4928-b98d-d75ed8c14859\", \"method\": \"number_of_tags\", \"value\": 10}]}",
"curl -X DELETE -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/user/autoprunepolicy/8c03f995-ca6f-4928-b98d-d75ed8c14859",
"{\"uuid\": \"8c03f995-ca6f-4928-b98d-d75ed8c14859\"}",
"curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"number_of_tags\",\"value\": 2}' http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/",
"curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"creation_date\", \"value\": \"7d\"}' http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/",
"{\"uuid\": \"ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7\"}",
"curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"<creation_date>\", \"value\": \"<7d>\", \"tagPattern\": \"<^test.>*\", \"tagPatternMatches\": <false> 1 }' \"https://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/\"",
"{\"uuid\": \"b53d8d3f-2e73-40e7-96ff-736d372cd5ef\"}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"number_of_tags\", \"value\": \"5\", \"tagPattern\": \"^test.*\", \"tagPatternMatches\": true }' \"https://quay-server.example.com/api/v1/repository/<namespace>/<repo_name>/autoprunepolicy/<uuid>\"",
"curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/",
"curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7",
"{\"policies\": [{\"uuid\": \"ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7\", \"method\": \"number_of_tags\", \"value\": 10}]}",
"curl -X DELETE -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7",
"{\"uuid\": \"ce2bdcc0-ced2-4a1a-ac36-78a9c1bed8c7\"}",
"curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"number_of_tags\",\"value\": 2}' https://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/",
"{\"uuid\": \"7726f79c-cbc7-490e-98dd-becdc6fefce7\"}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"creation_date\", \"value\": \"7d\", \"tagPattern\": \"^v*\", \"tagPatternMatches\": true }' \"http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/\"",
"{\"uuid\": \"b3797bcd-de72-4b71-9b1e-726dabc971be\"}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"creation_date\", \"value\": \"4d\", \"tagPattern\": \"^test.\", \"tagPatternMatches\": true }' \"https://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/<policy_uuid>\"",
"curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/",
"curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/7726f79c-cbc7-490e-98dd-becdc6fefce7",
"{\"uuid\": \"81ee77ec-496a-4a0a-9241-eca49437d15b\", \"method\": \"creation_date\", \"value\": \"7d\", \"tagPattern\": \"^v*\", \"tagPatternMatches\": true}",
"curl -X DELETE -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/repository/<user_account>/<user_repository>/autoprunepolicy/<policy_uuid>",
"{\"uuid\": \"7726f79c-cbc7-490e-98dd-becdc6fefce7\"}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"repository\": \"<new_repository_name>\", \"visibility\": \"<private>\", \"description\": \"<This is a description of the new repository>.\" }' \"https://quay-server.example.com/api/v1/repository\"",
"{\"namespace\": \"quayadmin\", \"name\": \"<new_repository_name>\", \"kind\": \"image\"}",
"curl -X GET -H \"Authorization: Bearer <ACCESS_TOKEN>\" \"https://quay-server.example.com/api/v1/repository?public=true&starred=false&namespace=<NAMESPACE>\"",
"{\"repositories\": [{\"namespace\": \"quayadmin\", \"name\": \"busybox\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"MIRROR\", \"is_starred\": false, \"quota_report\": {\"quota_bytes\": 2280675, \"configured_quota\": 2199023255552}}]}",
"curl -X POST -H \"Authorization: Bearer <ACCESS_TOKEN>\" -H \"Content-Type: application/json\" -d '{ \"visibility\": \"private\" }' \"https://quay-server.example.com/api/v1/repository/<NAMESPACE>/<REPO_NAME>/changevisibility\"",
"{\"success\": true}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"",
"{\"detail\": \"Not Found\", \"error_message\": \"Not Found\", \"error_type\": \"not_found\", \"title\": \"not_found\", \"type\": \"http://quay-server.example.com/api/v1/error/not_found\", \"status\": 404}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"description\": \"This is an updated description for the repository.\" }' \"https://quay-server.example.com/api/v1/repository/<NAMESPACE>/<REPOSITORY>\"",
"{\"success\": true}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"event\": \"<event>\", \"method\": \"<method>\", \"config\": { \"<config_key>\": \"<config_value>\" }, \"eventConfig\": { \"<eventConfig_key>\": \"<eventConfig_value>\" } }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/",
"{\"uuid\": \"240662ea-597b-499d-98bb-2b57e73408d6\", \"title\": null, \"event\": \"repo_push\", \"method\": \"quay_notification\", \"config\": {\"target\": {\"name\": \"quayadmin\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}}}, \"event_config\": {}, \"number_of_failures\": 0}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>/test",
"{}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/<uuid>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification",
"{\"notifications\": []}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>\"",
"{\"name\": \"orgname+robot-name\", \"created\": \"Fri, 10 May 2024 15:11:00 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\", \"unstructured_metadata\": null}",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/user/robots/<robot_name>\"",
"{\"name\": \"quayadmin+robot-name\", \"created\": \"Fri, 10 May 2024 15:24:57 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\", \"unstructured_metadata\": null}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://quay-server.example.com/api/v1/organization/<ORGNAME>/robots/<ROBOT_SHORTNAME>\"",
"{\"name\": \"test+example\", \"created\": \"Mon, 25 Nov 2024 16:25:16 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"BILZ6YTVAZAKOGMD9270OKN3SOD9KPB7OLKEJQOJE38NBBRUJTIH7T5859DJL31Q\", \"unstructured_metadata\": {}}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://quay-server.example.com/api/v1/organization/<ORGNAME>/robots/<ROBOT_SHORTNAME>/permissions\"",
"{\"permissions\": [{\"repository\": {\"name\": \"testrepo\", \"is_public\": true}, \"role\": \"admin\"}]}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://quay-server.example.com/api/v1/user/robots/<ROBOT_SHORTNAME>\"",
"{\"name\": \"quayadmin+mirror_robot\", \"created\": \"Wed, 15 Jan 2025 17:22:09 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"QBFYWIWZOS1I0P0R9N1JRNP1UZAOPUIR3EB4ASPZKK9IA1SFC12LTEF7OJHB05Z8\", \"unstructured_metadata\": {}}",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://quay-server.example.com/api/v1/user/robots/<ROBOT_SHORTNAME>/permissions\"",
"{\"permissions\": [{\"repository\": {\"name\": \"busybox\", \"is_public\": false}, \"role\": \"write\"}]}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots\"",
"{\"robots\": []}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"",
"{\"message\":\"Could not find robot with specified username\"}",
"curl -X GET \"https://quay-server.example.com/api/v1/find/repositories?query=<repo_name>&page=1&includeUsage=true\" -H \"Authorization: Bearer <bearer_token>\"",
"{\"results\": [], \"has_additional\": false, \"page\": 2, \"page_size\": 10, \"start_index\": 10}",
"curl -X GET \"https://quay-server.example.com/api/v1/find/all?query=<mysearchterm>\" -H \"Authorization: Bearer <bearer_token>\"",
"{\"results\": [{\"kind\": \"repository\", \"title\": \"repo\", \"namespace\": {\"title\": \"user\", \"kind\": \"user\", \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"6d640d802fe23b93779b987c187a4b7a4d8fbcbd4febe7009bdff58d84498fba\", \"color\": \"#f7b6d2\", \"kind\": \"user\"}, \"name\": \"quayadmin\", \"score\": 1, \"href\": \"/user/quayadmin\"}, \"name\": \"busybox\", \"description\": null, \"is_public\": false, \"score\": 4.0, \"href\": \"/repository/quayadmin/busybox\"}]}",
"curl -X GET \"https://quay-server.example.com/api/v1/entities/<prefix>?includeOrgs=<true_or_false>&includeTeams=<true_or_false>&namespace=<namespace>\" -H \"Authorization: Bearer <bearer_token>\"",
"{\"results\": [{\"name\": \"quayadmin\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"6d640d802fe23b93779b987c187a4b7a4d8fbcbd4febe7009bdff58d84498fba\", \"color\": \"#f7b6d2\", \"kind\": \"user\"}}]}",
"curl -X PUT -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>\"",
"{\"name\": \"testuser\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"d51d17303dc3271ac3266fb332d7df919bab882bbfc7199d2017a4daac8979f0\", \"color\": \"#5254a3\", \"kind\": \"user\"}, \"invited\": false}",
"curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>\"",
"curl -X GET -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members\"",
"{\"name\": \"owners\", \"members\": [{\"name\": \"quayadmin\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}, \"invited\": false}, {\"name\": \"test-org+test\", \"kind\": \"user\", \"is_robot\": true, \"avatar\": {\"name\": \"test-org+test\", \"hash\": \"aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370\", \"color\": \"#8c564b\", \"kind\": \"robot\"}, \"invited\": false}], \"can_edit\": true}",
"curl -X PUT -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>\"",
"curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>\"",
"curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"title\": \"MyAppToken\" }' \"http://quay-server.example.com/api/v1/user/apptoken\"",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"http://quay-server.example.com/api/v1/user/apptoken\"",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"http://quay-server.example.com/api/v1/user/apptoken/<token_uuid>\"",
"curl -X DELETE -H \"Authorization: Bearer <access_token>\" \"http://quay-server.example.com/api/v1/user/apptoken/<token_uuid>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/discovery?query=true\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/error/<error_type>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X POST \"https://<quay-server.example.com>/api/v1/messages\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"message\": { \"content\": \"Hi\", \"media_type\": \"text/plain\", \"severity\": \"info\" } }'",
"curl -X GET \"https://<quay-server.example.com>/api/v1/messages\" -H \"Authorization: Bearer <access_token>\"",
"curl -X DELETE \"https://<quay-server.example.com>/api/v1/message/<uuid>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/user/aggregatelogs?performer=<username>&starttime=<MM/DD/YYYY>&endtime=<MM/DD/YYYY>\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"<MM/DD/YYYY>\", \"endtime\": \"<MM/DD/YYYY>\", \"callback_email\": \"[email protected]\" }' \"http://<quay-server.example.com>/api/v1/user/exportlogs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/user/logs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/organization/{orgname}/aggregatelogs\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"<MM/DD/YYYY>\", \"endtime\": \"<MM/DD/YYYY>\", \"callback_email\": \"[email protected]\" }' \"http://<quay-server.example.com>/api/v1/organization/{orgname}/exportlogs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://<quay-server.example.com>/api/v1/organization/{orgname}/logs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/repository/<repository_name>/<namespace>/aggregatelogs?starttime=2024-01-01&endtime=2024-06-18\"\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"2024-01-01\", \"endtime\": \"2024-06-18\", \"callback_url\": \"http://your-callback-url.example.com\" }' \"http://<quay-server.example.com>/api/v1/repository/{repository}/exportlogs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://<quay-server.example.com>/api/v1/repository/{repository}/logs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<label_id>",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<labelid>",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"key\": \"<key>\", \"value\": \"<value>\", \"media_type\": \"<media_type>\" }' https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>",
"curl -X POST \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror/sync-cancel\" \\",
"curl -X POST \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror/sync-now\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror\" -H \"Authorization: Bearer <access_token>\"",
"curl -X PUT \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"is_enabled\": <false>, 1 \"external_reference\": \"<external_reference>\", \"external_registry_username\": \"<external_registry_username>\", \"external_registry_password\": \"<external_registry_password>\", \"sync_start_date\": \"<sync_start_date>\", \"sync_interval\": <sync_interval>, \"robot_username\": \"<robot_username>\", \"root_rule\": { \"rule\": \"<rule>\", \"rule_type\": \"<rule_type>\" } }'",
"curl -X POST \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"is_enabled\": <is_enabled>, \"external_reference\": \"<external_reference>\", \"external_registry_username\": \"<external_registry_username>\", \"external_registry_password\": \"<external_registry_password>\", \"sync_start_date\": \"<sync_start_date>\", \"sync_interval\": <sync_interval>, \"robot_username\": \"<robot_username>\", \"root_rule\": { \"rule\": \"<rule>\", \"rule_type\": \"<rule_type>\" } }'",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"name\": \"<new_organization_name>\" }' \"https://<quay-server.example.com>/api/v1/organization/\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>\"",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{\"role\": \"admin\"}' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>/",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"write\" }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototypeid>",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototype_id>",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"<admin_read_or_write>\", \"delegate\": { \"name\": \"<username>\", \"kind\": \"user\" }, \"activating_user\": { \"name\": \"<robot_name>\" } }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"repository\": \"<new_repository_name>\", \"visibility\": \"<public>\", \"description\": \"<This is a description of the new repository>.\" }' \"https://quay-server.example.com/api/v1/repository\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>/test",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/<uuid>",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"event\": \"<event>\", \"method\": \"<method>\", \"config\": { \"<config_key>\": \"<config_value>\" }, \"eventConfig\": { \"<eventConfig_key>\": \"<eventConfig_value>\" } }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate\"",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>\"",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/user/robots/<robot_name>\"",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/manifest/<manifest_digest>/security?vulnerabilities=<true_or_false>\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"username\": \"newuser\", \"email\": \"[email protected]\" }' \"https://<quay-server.example.com>/api/v1/superuser/users/\"",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/users/{username}\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/users/\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/organizations/\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": <manifest_digest> }' quay-server.example.com/api/v1/repository/quayadmin/busybox/tag/test/restore",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": \"<manifest_digest>\" }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/",
"curl -X PUT -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>\"",
"If the user is merely invited to join the team, then the invite is removed instead.",
"curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>\"",
"curl -X GET -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members\"",
"curl -X PUT -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>\"",
"curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>\"",
"curl -k -X PUT -H 'Accept: application/json' -H 'Content-Type: application/json' -H \"Authorization: Bearer <bearer_token>\" --data '{\"role\": \"creator\"}' https://<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>",
"curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>\"",
"{ \"is_enabled\": True, \"external_reference\": \"quay.io/redhat/quay\", \"sync_interval\": 5000, \"sync_start_date\": datetime(2020, 0o1, 0o2, 6, 30, 0), \"external_registry_username\": \"fakeUsername\", \"external_registry_password\": \"fakePassword\", \"external_registry_config\": { \"verify_tls\": True, \"unsigned_images\": False, \"proxy\": { \"http_proxy\": \"http://insecure.proxy.corp\", \"https_proxy\": \"https://secure.proxy.corp\", \"no_proxy\": \"mylocalhost\", }, }, }",
"{ \"root_rule\": {\"rule_kind\": \"tag_glob_csv\", \"rule_value\": [\"latest\", \"foo\", \"bar\"]}, }"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3/html-single/red_hat_quay_api_guide/index |
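The curl listing above leaves the access token and registry host as placeholders in every call; a minimal shell sketch of parameterizing one of the documented requests is shown below (the variable names, the example namespace, and the use of jq are illustrative assumptions, not part of the original guide):

# Assumed setup: store the OAuth 2 access token and registry hostname once, then reuse them.
export QUAY_HOST="quay-server.example.com"   # replace with your registry hostname
export TOKEN="<bearer_token>"                # OAuth 2 access token created for your organization

# Same repository-list endpoint as in the listing above, with the placeholders filled from
# variables and the JSON response pretty-printed with jq.
curl -s -X GET -H "Authorization: Bearer $TOKEN" \
  "https://$QUAY_HOST/api/v1/repository?public=true&namespace=<namespace>" | jq .

The same pattern applies to every command in the listing; only the endpoint path and the JSON payload change.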
Chapter 1. Introduction | Chapter 1. Introduction Welcome to the Ceph Object Gateway for Production guide. This guide covers topics for building Ceph Storage clusters and Ceph Object Gateway clusters for production use. 1.1. Audience This guide is for those who intend to deploy a Ceph Object Gateway environment for production. It provides a sequential series of topics for planning, designing and deploying a production Ceph Storage cluster and Ceph Object Gateway cluster with links to general Ceph documentation where appropriate. 1.2. Assumptions This guide assumes the reader has a basic understanding of the Ceph Storage Cluster and the Ceph Object Gateway. Readers with no Ceph experience should consider setting up a small Ceph test environment or using the Ceph Sandbox Environment to get familiar with Ceph concepts before proceeding with this guide. This guide assumes a single-site cluster consisting of a single Ceph Storage cluster and multiple Ceph Object Gateway instances in the same zone. This guide assumes the single-site cluster will expand to a multi-zone and multi-site cluster by repeating the procedures in this guide for each zone group and zone with the naming modifications required for secondary zone groups and zones. 1.3. Scope This guide covers the following topics when setting up a Ceph Storage Cluster and a Ceph Object Gateway for production: Planning a Cluster Considering Hardware Configuring a Cluster Deploying Ceph Developing Storage Strategies Configuring Gateways Additional Use Cases Note This document is intended to complement the hardware, installation, administration and Ceph Object Gateway guides. This guide does not replace the other guides. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/object_gateway_for_production_guide/assembly-introduction-rgw-prod |
Chapter 8. Asynchronous errata updates | Chapter 8. Asynchronous errata updates 8.1. RHBA-2024:4538 OpenShift Data Foundation 4.15.5 bug fixes and security updates OpenShift Data Foundation release 4.15.5 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:4538 advisory. 8.2. RHBA-2024:3806 OpenShift Data Foundation 4.15.3 bug fixes and security updates OpenShift Data Foundation release 4.15.3 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:3806 advisory. 8.3. RHBA-2024:2636 OpenShift Data Foundation 4.15.2 bug fixes and security updates OpenShift Data Foundation release 4.15.2 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:2636 advisory. 8.4. RHBA-2024:1708 OpenShift Data Foundation 4.15.1 bug fixes and security updates OpenShift Data Foundation release 4.15.1 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:1708 advisory. 8.4.1. Documentation updates Added new sections on related to hub recovery in Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide. The section covers how to configure the 4th cluster for hub recovery to failover or relocate the disaster recovery protected workloads using Red Hat Advanced Cluster Management for Kubernetes (RHACM) in case where the active hub is down or unreachable. The hub recovery solution is a Technology Preview feature and is subject to Technology Preview support limitations. For more information, see Hub recovery support for co-situated and neutral site deployments . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/4.15_release_notes/asynchronous_errata_updates |
Chapter 6. Enhancements | Chapter 6. Enhancements Streams for Apache Kafka 2.9 adds a number of enhancements. 6.1. Kafka 3.9.0 enhancements For an overview of the enhancements introduced with Kafka 3.9.0, refer to the Kafka 3.9.0 Release Notes. 6.2. Streams for Apache Kafka 6.2.1. Configuration mechanism for quotas management The Strimzi Quotas plugin moves to GA (General Availability). Use the plugin properties to set throughput and storage limits on brokers in your Kafka cluster configuration. Warning If you have previously used the Strimzi Quotas plugin in releases prior to Streams for Apache Kafka 2.8, update your Kafka cluster configuration to use the latest .spec.kafka.quotas properties to avoid reconciliation issues when upgrading. For more information, see Setting limits on brokers using the Kafka Static Quota plugin . 6.2.2. Change to unmanaged topic reconciliation When finalizers are enabled (default), the Topic Operator no longer restores them on unmanaged KafkaTopic resources if removed. This behavior aligns with paused topics, where finalizers are also not restored. 6.2.3. ContinueReconciliationOnManualRollingUpdateFailure feature gate The technology preview of the ContinueReconciliationOnManualRollingUpdateFailure feature gate moves to beta stage and is enabled by default. If required, ContinueReconciliationOnManualRollingUpdateFailure can be disabled in the feature gates configuration in the Cluster Operator. 6.2.4. Rolling pods once for CA renewal Pods are now rolled only when the cluster CA key is replaced, not when the clients CA key is replaced, which is used solely for trust. Consequently, the restart event reason ClientCaCertKeyReplaced has been removed, and either CaCertRenewed or CaCertHasOldGeneration is now used as the event reason. 6.2.5. Rolling updates for CA certificates resume after interruption Rolling updates for new CA certificate generations now resume from where they left off after an interruption, instead of restarting the process and rolling all pods again. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_openshift/enhancements-str |
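The quotas enhancement above refers to the .spec.kafka.quotas properties without showing them; a hedged sketch of applying such limits from the command line follows (the cluster name my-cluster and the property names producerByteRate, consumerByteRate, and minAvailableRatioPerVolume are assumptions drawn from the upstream Strimzi quotas plugin and should be verified against the Streams for Apache Kafka configuration reference before use):

# Illustrative only: JSON merge patch that adds throughput and storage limits to an existing
# Kafka custom resource. All names and values below are assumptions, not taken from the release notes.
oc patch kafka my-cluster --type merge -p '{
  "spec": {
    "kafka": {
      "quotas": {
        "type": "strimzi",
        "producerByteRate": 1048576,
        "consumerByteRate": 1048576,
        "minAvailableRatioPerVolume": 0.1
      }
    }
  }
}'

Depending on which property changes, the Cluster Operator may roll the brokers to apply the new quota configuration.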
Deploying Red Hat Satellite on Amazon Web Services | Deploying Red Hat Satellite on Amazon Web Services Red Hat Satellite 6.16 Deploy Satellite Server and Capsule on Amazon Web Services Red Hat Satellite Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/deploying_red_hat_satellite_on_amazon_web_services/index |
Chapter 31. Working with Kernel Modules | Chapter 31. Working with Kernel Modules The Linux kernel is modular, which means it can extend its capabilities through the use of dynamically-loaded kernel modules . A kernel module can provide: a device driver which adds support for new hardware; or, support for a file system such as btrfs or NFS . Like the kernel itself, modules can take parameters that customize their behavior, though the default parameters work well in most cases. User-space tools can list the modules currently loaded into a running kernel; query all available modules for available parameters and module-specific information; and load or unload (remove) modules dynamically into or from a running kernel. Many of these utilities, which are provided by the module-init-tools package, take module dependencies into account when performing operations so that manual dependency-tracking is rarely necessary. On modern systems, kernel modules are automatically loaded by various mechanisms when the conditions call for it. However, there are occasions when it is necessary to load and/or unload modules manually, such as when a module provides optional functionality, one module should be preferred over another although either could provide basic functionality, or when a module is misbehaving, among other situations. This chapter explains how to: use the user-space module-init-tools package to display, query, load and unload kernel modules and their dependencies; set module parameters both dynamically on the command line and permanently so that you can customize the behavior of your kernel modules; and, load modules at boot time. Note In order to use the kernel module utilities described in this chapter, first ensure the module-init-tools package is installed on your system by running, as root: For more information on installing packages with Yum, see Section 8.2.4, "Installing Packages" . 31.1. Listing Currently-Loaded Modules You can list all kernel modules that are currently loaded into the kernel by running the lsmod command: Each row of lsmod output specifies: the name of a kernel module currently loaded in memory; the amount of memory it uses; and, the sum total of processes that are using the module and other modules which depend on it, followed by a list of the names of those modules, if there are any. Using this list, you can first unload all the modules depending the module you want to unload. For more information, see Section 31.4, "Unloading a Module" . Finally, note that lsmod output is less verbose and considerably easier to read than the content of the /proc/modules pseudo-file. | [
"~]# yum install module-init-tools",
"~]USD lsmod Module Size Used by xfs 803635 1 exportfs 3424 1 xfs vfat 8216 1 fat 43410 1 vfat tun 13014 2 fuse 54749 2 ip6table_filter 2743 0 ip6_tables 16558 1 ip6table_filter ebtable_nat 1895 0 ebtables 15186 1 ebtable_nat ipt_MASQUERADE 2208 6 iptable_nat 5420 1 nf_nat 19059 2 ipt_MASQUERADE,iptable_nat rfcomm 65122 4 ipv6 267017 33 sco 16204 2 bridge 45753 0 stp 1887 1 bridge llc 4557 2 bridge,stp bnep 15121 2 l2cap 45185 16 rfcomm,bnep cpufreq_ondemand 8420 2 acpi_cpufreq 7493 1 freq_table 3851 2 cpufreq_ondemand,acpi_cpufreq usb_storage 44536 1 sha256_generic 10023 2 aes_x86_64 7654 5 aes_generic 27012 1 aes_x86_64 cbc 2793 1 dm_crypt 10930 1 kvm_intel 40311 0 kvm 253162 1 kvm_intel [output truncated]"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-Working_with_Kernel_Modules |
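The chapter above focuses on listing loaded modules; a short companion sketch of querying, loading, and unloading a module with the same module-init-tools utilities follows (the e1000e module name is only an example and is not taken from the chapter):

~]# modinfo e1000e        # show the module's description, available parameters, and dependencies
~]# modprobe e1000e       # load the module; modprobe resolves dependencies automatically
~]$ lsmod | grep e1000e   # confirm the module is loaded and see what, if anything, uses it
~]# modprobe -r e1000e    # unload the module once no other module or process depends on it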
Deploying installer-provisioned clusters on bare metal | Deploying installer-provisioned clusters on bare metal OpenShift Container Platform 4.15 Deploying installer-provisioned OpenShift Container Platform clusters on bare metal Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/deploying_installer-provisioned_clusters_on_bare_metal/index |
Chapter 4. Installing a cluster with RHEL KVM on IBM Z and IBM LinuxONE | Chapter 4. Installing a cluster with RHEL KVM on IBM Z and IBM LinuxONE In OpenShift Container Platform version 4.15, you can install a cluster on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. You provisioned a RHEL Kernel Virtual Machine (KVM) system that is hosted on the logical partition (LPAR) and based on RHEL 8.6 or later. See Red Hat Enterprise Linux 8 and 9 Life Cycle . 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Machine requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. One or more KVM host machines based on RHEL 8.6 or later. Each RHEL KVM host machine must have libvirt installed and running. The virtual machines are provisioned under each RHEL KVM host machine. 4.3.1. Required machines The smallest OpenShift Container Platform clusters require the following hosts: Table 4.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. 
Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To improve high availability of your cluster, distribute the control plane machines over different RHEL instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. See Red Hat Enterprise Linux technology capabilities and limits . 4.3.2. Network connectivity requirements The OpenShift Container Platform installer creates the Ignition files, which are necessary for all the Red Hat Enterprise Linux CoreOS (RHCOS) virtual machines. The automated installation of OpenShift Container Platform is performed by the bootstrap machine. It starts the installation of OpenShift Container Platform on each node, starts the Kubernetes cluster, and then finishes. During this bootstrap, the virtual machine must have an established network connection either through a Dynamic Host Configuration Protocol (DHCP) server or static IP address. 4.3.3. IBM Z network connectivity requirements To install on IBM Z(R) under RHEL KVM, you need: A RHEL KVM host configured with an OSA or RoCE network adapter. Either a RHEL KVM host that is configured to use bridged networking in libvirt or MacVTap to connect the network to the guests. See Types of virtual network connections . 4.3.4. Host machine resource requirements The RHEL KVM host in your environment must meet the following requirements to host the virtual machines that you plan for the OpenShift Container Platform environment. See Getting started with virtualization . You can install OpenShift Container Platform version 4.15 on the following IBM(R) hardware: IBM(R) z16 (all models), IBM(R) z15 (all models), IBM(R) z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II 4.3.5. Minimum IBM Z system environment Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements One LPAR running on RHEL 8.6 or later with KVM, which is managed by libvirt On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine 4.3.6. 
Minimum resource requirements Each cluster virtual machine must meet the following minimum requirements:

Virtual Machine    Operating System    vCPU [1]    Virtual RAM    Storage    IOPS
Bootstrap          RHCOS               4           16 GB          100 GB     N/A
Control plane      RHCOS               4           16 GB          100 GB     N/A
Compute            RHCOS               2           8 GB           100 GB     N/A

[1] One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs.

4.3.7. Preferred IBM Z system environment Hardware requirements Three LPARs that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Operating system requirements For high availability, two or three LPARs running on RHEL 8.6 or later with KVM, which are managed by libvirt. On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, distributed across the RHEL KVM host machines. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the RHEL KVM host machines. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using cpu_shares . Do the same for infrastructure nodes, if they exist. See schedinfo in IBM(R) Documentation. 4.3.8. Preferred resource requirements The preferred requirements for each cluster virtual machine are:

Virtual Machine    Operating System    vCPU    Virtual RAM    Storage
Bootstrap          RHCOS               4       16 GB          120 GB
Control plane      RHCOS               8       16 GB          120 GB
Compute            RHCOS               6       8 GB           120 GB

4.3.9. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources Recommended host practices for IBM Z(R) & IBM(R) LinuxONE environments 4.3.10. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines.
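The following is a minimal sketch of how such reservations are commonly expressed with the ISC DHCP server on a RHEL helper host. The file path, the MAC addresses, and the choice of ISC dhcpd itself are assumptions for illustration; the IP addresses and hostnames follow the ocp4.example.com examples used later in this chapter, and only one control plane and one compute entry are shown:

$ sudo tee -a /etc/dhcp/dhcpd.conf <<'EOF'
# Reservations for the OpenShift cluster machines (example MAC addresses)
subnet 192.168.1.0 netmask 255.255.255.0 {
  option domain-name-servers 192.168.1.5;   # DNS server handed to the nodes
  option ntp-servers 192.168.1.5;           # optional: lets chrony on RHCOS sync time

  host control-plane0 {
    hardware ethernet 52:54:00:00:00:01;
    fixed-address 192.168.1.97;
    option host-name "control-plane0.ocp4.example.com";
  }
  host compute0 {
    hardware ethernet 52:54:00:00:00:04;
    fixed-address 192.168.1.11;
    option host-name "compute0.ocp4.example.com";
  }
}
EOF
$ sudo systemctl restart dhcpd

Repeat the host stanzas for the bootstrap machine, the remaining control plane machines, and the remaining compute machines, matching each MAC address to the network interface of the corresponding guest.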
Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 4.3.10.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 4.3.10.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Note The RHEL KVM host must be configured to use bridged networking in libvirt or MacVTap to connect the network to the virtual machines. The virtual machines must have access to the network, which is attached to the RHEL KVM host. Virtual Networks, for example network address translation (NAT), within KVM are not a supported configuration. Table 4.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 4.3. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 4.4. 
Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 4.3.11. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 4.5. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. 
These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 4.3.11.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 4.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 4.2. 
Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 4.3.12. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 4.6. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. 
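Once the API server is running during bootstrap, you can also confirm from the load balancer host that the endpoint responds before you rely on the automated health check. This is a minimal sketch; the hostname is taken from the examples used later in this chapter:

# Query the readiness endpoint of one control plane member; -k skips certificate verification
$ curl -k https://control-plane0.ocp4.example.com:6443/readyz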
Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 4.7. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 4.3.12.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 4.3. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 4.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. 
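If you host the API and application Ingress load balancer on a RHEL machine, as in the preceding HAProxy sample, part of that preparation is validating and activating the load balancer itself. The following sketch shows one way to do this; the configuration path is the HAProxy default, and the use of firewalld on the helper host is an assumption:

# Check the HAProxy configuration syntax before starting the service
$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# Allow HAProxy to bind to non-standard ports such as 6443 and 22623 when SELinux is enforcing
$ sudo setsebool -P haproxy_connect_any=1
# Open the front-end ports if the host runs firewalld
$ sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=22623/tcp --add-port=443/tcp --add-port=80/tcp
$ sudo firewall-cmd --reload
# Start HAProxy and confirm that it is listening on the expected ports
$ sudo systemctl enable --now haproxy
$ sudo netstat -nltupe | grep -E ':(6443|22623|443|80)\b'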
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Choose to perform either a fast track installation of Red Hat Enterprise Linux CoreOS (RHCOS) or a full installation of Red Hat Enterprise Linux CoreOS (RHCOS). For the full installation, you must set up an HTTP or HTTPS server to provide Ignition files and install images to the cluster nodes. For the fast track installation an HTTP or HTTPS server is not required, however, a DHCP server is required. See sections "Fast-track installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines" and "Full installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines". Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. 
From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 4.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. 
Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 4.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
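For example, on such a FIPS-enabled installation host you might generate an ECDSA or RSA key instead. This is a minimal sketch, and the file names are examples:

# ECDSA key; the path is an example
$ ssh-keygen -t ecdsa -b 521 -N '' -f ~/.ssh/id_ecdsa
# Alternatively, an RSA key
$ ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa_ocp

The rest of the procedure is unchanged; substitute the key file name that you generated.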
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux (RHEL) 8, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. 
You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . 
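For example, assuming a hypothetical working directory of ~/ocp4 and an installation directory of ~/ocp4/install_dir, you might copy the customized file into place and keep a spare copy outside the installation directory, because the installation program consumes the copy inside it:

$ mkdir -p ~/ocp4/install_dir
# Copy the customized configuration into the installation directory
$ cp ~/ocp4/install-config.yaml ~/ocp4/install_dir/install-config.yaml
# Keep a backup outside the installation directory for reuse
$ cp ~/ocp4/install-config.yaml ~/ocp4/install-config.yaml.bak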
Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 4.9.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. 
To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 4.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. 
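As a quick sanity check after such a cluster is installed, you can confirm from a host with cluster-admin access that the control plane nodes accept workloads. This is a minimal sketch and is not part of the documented procedure:

# In a three-node cluster, the control plane nodes should also report the worker role
$ oc get nodes
# mastersSchedulable must be true for workloads to land on the control plane nodes
$ oc get schedulers.config.openshift.io cluster -o jsonpath='{.spec.mastersSchedulable}{"\n"}'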
Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 4.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 4.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 4.8. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. 
The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 4.9. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 4.10. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 4.11. 
ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 4.12. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 4.13. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 4.14. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. 
By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 4.15. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 4.16. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 4.17. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 4.18. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 4.11. 
Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture-specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: $ ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: $ ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: . ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign 4.12. 
Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) as Red Hat Enterprise Linux (RHEL) guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. You can perform a fast-track installation of RHCOS that uses a prepackaged QEMU copy-on-write (QCOW2) disk image. Alternatively, you can perform a full installation on a new QCOW2 disk image. To add further security to your system, you can optionally install RHCOS using IBM(R) Secure Execution before proceeding to the fast-track installation. 4.12.1. Installing RHCOS using IBM Secure Execution Before you install RHCOS using IBM(R) Secure Execution, you must prepare the underlying infrastructure. Prerequisites IBM(R) z15 or later, or IBM(R) LinuxONE III or later. Red Hat Enterprise Linux (RHEL) 8 or later. You have a bootstrap Ignition file. The file is not protected, enabling others to view and edit it. You have verified that the boot image has not been altered after installation. You must run all your nodes as IBM(R) Secure Execution guests. Procedure Prepare your RHEL KVM host to support IBM(R) Secure Execution. By default, KVM hosts do not support guests in IBM(R) Secure Execution mode. To support guests in IBM(R) Secure Execution mode, KVM hosts must boot in LPAR mode with the kernel parameter specification prot_virt=1 . To enable prot_virt=1 on RHEL 8, follow these steps: Navigate to /boot/loader/entries/ to modify your bootloader configuration file *.conf . Add the kernel command line parameter prot_virt=1 . Run the zipl command and reboot your system. KVM hosts that successfully start with support for IBM(R) Secure Execution for Linux issue the following kernel message: prot_virt: Reserving <amount>MB as ultravisor base storage. To verify that the KVM host now supports IBM(R) Secure Execution, run the following command: # cat /sys/firmware/uv/prot_virt_host Example output 1 The value of this attribute is 1 for Linux instances that detect their environment as consistent with that of a secure host. For other instances, the value is 0. Add your host keys to the KVM guest via Ignition. During the first boot, RHCOS looks for your host keys to re-encrypt itself with them. RHCOS searches for files starting with ibm-z-hostkey- in the /etc/se-hostkeys directory. All host keys, for each machine the cluster is running on, must be loaded into the directory by the administrator. After first boot, you cannot run the VM on any other machines. Note You need to prepare your Ignition file on a safe system. For example, another IBM(R) Secure Execution guest. 
For example: { "ignition": { "version": "3.0.0" }, "storage": { "files": [ { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 }, { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 } ] } } ``` Note You can add as many host keys as required if you want your node to be able to run on multiple IBM Z(R) machines. To generate the Base64 encoded string, run the following command: base64 <your-hostkey>.crt Compared to guests not running IBM(R) Secure Execution, the first boot of the machine is longer because the entire image is encrypted with a randomly generated LUKS passphrase before the Ignition phase. Add Ignition protection To protect the secrets that are stored in the Ignition config file from being read or even modified, you must encrypt the Ignition config file. Note To achieve the desired security, Ignition logging and local login are disabled by default when running IBM(R) Secure Execution. Fetch the public GPG key for the secex-qemu.qcow2 image and encrypt the Ignition config with the key by running the following command: gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign Follow the fast-track installation of RHCOS to install nodes by using the IBM(R) Secure Execution QCOW image. Note Before you start the VM, replace serial=ignition with serial=ignition_crypted , and add the launchSecurity parameter. Verification When you have completed the fast-track installation of RHCOS and Ignition runs at the first boot, verify if decryption is successful. If the decryption is successful, you can expect an output similar to the following example: Example output [ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup... [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor. If the decryption fails, you can expect an output similar to the following example: Example output Starting coreos-ignition-s...reOS Ignition User Config Setup... [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key Additional resources Introducing IBM(R) Secure Execution for Linux Linux as an IBM(R) Secure Execution host or guest Setting up IBM(R) Secure Execution on IBM Z 4.12.2. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. 
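Note that the Butane files created in the following procedure, such as master-storage.bu, must be transpiled into MachineConfig manifests before the cluster can consume them. As a rough sketch, assuming the standard butane command-line interface and that you add the resulting manifest to the installation directory before generating the Ignition configs:
# convert the Butane configuration into a MachineConfig manifest (file names are examples)
$ butane master-storage.bu -o master-storage.yaml
# place the manifest where the installation program picks it up
$ cp master-storage.yaml <installation_directory>/openshift/
This is one common way to apply such configurations at installation time; adjust the file names and target directory to match your environment.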
Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . rd.neednet=1 \ console=ttysclp0 \ ignition.firstboot ignition.platform.id=metal \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 1 coreos.inst.ignition_url=http://<http_server>/master.ign \ 2 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 \ zfcp.allow_lun_scan=0 \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 1 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 2 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 4.12.3. Fast-track installation by using a prepackaged QCOW2 disk image Complete the following steps to create the machines in a fast-track installation of Red Hat Enterprise Linux CoreOS (RHCOS), importing a prepackaged Red Hat Enterprise Linux CoreOS (RHCOS) QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. A DHCP server that provides IP addresses. Procedure Obtain the RHEL QEMU copy-on-write (QCOW2) disk image file from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. 
You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. Download the QCOW2 disk image and Ignition files to a common directory on the RHEL KVM host. For example: /var/lib/libvirt/images Note The Ignition files are generated by the OpenShift Container Platform installer. Create a new disk image with the QCOW2 disk image backing file for each KVM guest node. $ qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size} Create the new KVM guest nodes using the Ignition file and the new disk image. $ virt-install --noautoconsole \ --connect qemu:///system \ --name {vm_name} \ --memory {memory} \ --vcpus {vcpus} \ --disk {disk} \ --launchSecurity type="s390-pv" \ 1 --import \ --network network={network},mac={mac} \ --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 2 1 If IBM(R) Secure Execution is enabled, add the launchSecurity type="s390-pv" parameter. 2 If IBM(R) Secure Execution is enabled, replace serial=ignition with serial=ignition_crypted . 4.12.4. Full installation on a new QCOW2 disk image Complete the following steps to create the machines in a full installation on a new QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. An HTTP or HTTPS server is set up. Procedure Obtain the RHCOS kernel, initramfs, and rootfs files from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Move the downloaded RHCOS live kernel, initramfs, and rootfs as well as the Ignition files to an HTTP or HTTPS server before you launch virt-install . Note The Ignition files are generated by the OpenShift Container Platform installer. Create the new KVM guest nodes using the RHCOS kernel, initramfs, and Ignition files, the new disk image, and adjusted parm line arguments. For --location , specify the location of the kernel/initrd on the HTTP or HTTPS server. For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 
USD virt-install \ --connect qemu:///system \ --name {vm_name} \ --vcpus {vcpus} \ --memory {memory_mb} \ --disk {vm_name}.qcow2,size={image_size| default(10,true)} \ --network network={virt_network_parm} \ --boot hd \ --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} \ --extra-args "rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vm_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}" \ --noautoconsole \ --wait 4.12.5. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 4.12.5.1. Networking options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking on your RHCOS nodes for ISO installations. The examples describe how to use the ip= and nameserver= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= and nameserver= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. 
ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 4.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... 
INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 4.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Configure the Operators that are not available. 4.16.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 4.16.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. 
Check the registry configuration: $ oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: $ oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: $ oc edit configs.imageregistry/cluster Then, change the managementState line from Removed to Managed . 4.16.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 4.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: $ watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Alternatively, the following command notifies you when the cluster is available. It also retrieves and displays credentials: $ ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: $ oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: $ oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 4.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service How to generate SOSREPORT within OpenShift4 nodes without SSH . 4.19. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"prot_virt: Reserving <amount>MB as ultravisor base storage.",
"cat /sys/firmware/uv/prot_virt_host",
"1",
"{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```",
"base64 <your-hostkey>.crt",
"gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign",
"[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.",
"Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key",
"variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 1 coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000",
"qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}",
"virt-install --noautoconsole --connect qemu:///system --name {vm_name} --memory {memory} --vcpus {vcpus} --disk {disk} --launchSecurity type=\"s390-pv\" \\ 1 --import --network network={network},mac={mac} --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 2",
"virt-install --connect qemu:///system --name {vm_name} --vcpus {vcpus} --memory {memory_mb} --disk {vm_name}.qcow2,size={image_size| default(10,true)} --network network={virt_network_parm} --boot hd --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} --extra-args \"rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vm_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}\" --noautoconsole --wait",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_z_and_ibm_linuxone/installing-ibm-z-kvm |
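A small supplement to the HAProxy configuration at the top of this command listing: before reloading the load balancer, it is worth running the built-in syntax check. The path below assumes the stock location of the configuration file:

haproxy -c -f /etc/haproxy/haproxy.cfg

If the check passes (recent versions print "Configuration file is valid"; the exact wording varies by release), the service can then be reloaded with systemctl reload haproxy.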
Chapter 5. Managing Red Hat Subscriptions | Chapter 5. Managing Red Hat Subscriptions Red Hat Satellite can import content from the Red Hat Content Delivery Network (CDN). Satellite requires a Red Hat subscription manifest to find, access, and download content from the corresponding repositories. You must have a Red Hat subscription manifest containing a subscription allocation for each organization on Satellite Server. All subscription information is available in your Red Hat Customer Portal account. Before you can complete the tasks in this chapter, you must create a Red Hat subscription manifest in the Customer Portal. Note that the entitlement-based subscription model is deprecated and will be removed in a future release. Red Hat recommends that you use the access-based subscription model of Simple Content Access instead. To create, manage, and export a Red Hat subscription manifest in the Customer Portal, see Using Manifests in the Using Red Hat Subscription Management guide. Use this chapter to import a Red Hat subscription manifest and manage the manifest within the Satellite web UI. Subscription Allocations and Organizations You can manage more than one organization if you have more than one subscription allocation. Satellite requires a single allocation for each organization configured in Satellite Server. The advantage of this is that each organization maintains separate subscriptions so that you can support multiple organizations, each with their own Red Hat accounts. Future-Dated subscriptions You can use future-dated subscriptions in a subscription allocation. When you add future-dated subscriptions to content hosts before the expiry date of the existing subscriptions, you can have uninterrupted access to repositories. Manually attach the future-dated subscriptions to your content hosts before the current subscriptions expire. Do not rely on the auto-attach method because this method is designed for a different purpose and might not work. For more information, see Section 5.6, "Attaching Red Hat Subscriptions to Content Hosts" . 5.1. Importing a Red Hat Subscription Manifest into Satellite Server Use the following procedure to import a Red Hat subscription manifest into Satellite Server. Prerequisites You must have a Red Hat subscription manifest file exported from the Customer Portal. For more information, see Creating and Managing Manifests in Using Red Hat Subscription Management . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions and click Manage Manifest . In the Manage Manifest window, click Browse . Navigate to the location that contains the Red Hat subscription manifest file, then click Open . If the Manage Manifest window does not close automatically, click Close to return to the Subscriptions window. CLI procedure Copy the Red Hat subscription manifest file from your client to Satellite Server: Log in to Satellite Server as the root user and import the Red Hat subscription manifest file: You can now enable repositories and import Red Hat content. For more information, see Importing Content in the Content Management guide. 5.2. Locating a Red Hat Subscription When you import a Red Hat subscription manifest into Satellite Server, the subscriptions from your manifest are listed in the Subscriptions window. If you have a high volume of subscriptions, you can filter the results to find a specific subscription. 
Prerequisite You must have a Red Hat subscription manifest file imported to Satellite Server. For more information, see Section 5.1, "Importing a Red Hat Subscription Manifest into Satellite Server" . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions . In the Subscriptions window, click the Search field to view the list of search criteria for building your search query. Select search criteria to display further options. When you have built your search query, click the search icon. For example, if you place your cursor in the Search field and select expires , then press the space bar, another list appears with the options of placing a > , < , or = character. If you select > and press the space bar, another list of automatic options appears. You can also enter your own criteria. 5.3. Adding Red Hat Subscriptions to Subscription Allocations Use the following procedure to add Red Hat subscriptions to a subscription allocation in the Satellite web UI. Prerequisite You must have a Red Hat subscription manifest file imported to Satellite Server. For more information, see Section 5.1, "Importing a Red Hat Subscription Manifest into Satellite Server" . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions . In the Subscriptions window, click Add Subscriptions . On the row of each subscription you want to add, enter the quantity in the Quantity to Allocate column. Click Submit 5.4. Removing Red Hat Subscriptions from Subscription Allocations Use the following procedure to remove Red Hat subscriptions from a subscription allocation in the Satellite web UI. Note Manifests must not be deleted. If you delete the manifest from the Red Hat Customer Portal or in the Satellite web UI, all of the entitlements for all of your content hosts will be removed. Prerequisite You must have a Red Hat subscription manifest file imported to Satellite Server. For more information, see Section 5.1, "Importing a Red Hat Subscription Manifest into Satellite Server" . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions . On the row of each subscription you want to remove, select the corresponding checkbox. Click Delete , and then confirm deletion. 5.5. Updating and Refreshing Red Hat Subscription Manifests Every time that you change a subscription allocation, you must refresh the manifest to reflect these changes. For example, you must refresh the manifest if you take any of the following actions: Renewing a subscription Adjusting subscription quantities Purchasing additional subscriptions You can refresh the manifest directly in the Satellite web UI. Alternatively, you can import an updated manifest that contains the changes. Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions . In the Subscriptions window, click Manage Manifest . In the Manage Manifest window, click Refresh . 5.6. Attaching Red Hat Subscriptions to Content Hosts Using activation keys is the main method to attach subscriptions to content hosts during the provisioning process. However, an activation key cannot update an existing host. 
If you need to attach new or additional subscriptions, such as future-dated subscriptions, to one host, use the following procedure. For more information about updating multiple hosts, see Section 5.7, "Updating Red Hat Subscriptions on Multiple Hosts" . For more information about activation keys, see Chapter 10, Managing Activation Keys . Satellite Subscriptions In Satellite, you must maintain a Red Hat Enterprise Linux Satellite subscription, formerly known as Red Hat Enterprise Linux Smart Management, for every Red Hat Enterprise Linux host that you want to manage. However, you are not required to attach Satellite subscriptions to each content host. Satellite subscriptions cannot attach automatically to content hosts in Satellite because they are not associated with any product certificates. Adding a Satellite subscription to a content host does not provide any content or repository access. If you want, you can add a Satellite subscription to a manifest for your own recording or tracking purposes. Prerequisite You must have a Red Hat subscription manifest file imported to Satellite Server. Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Hosts > Content Hosts . On the row of each content host whose subscription you want to change, select the corresponding checkbox. From the Select Action list, select Manage Subscriptions . Optionally, enter a key and value in the Search field to filter the subscriptions displayed. Select the checkbox to the left of the subscriptions that you want to add or remove and click Add Selected or Remove Selected as required. Click Done to save the changes. CLI procedure Connect to Satellite Server as the root user, and then list the available subscriptions: Attach a subscription to the host: 5.7. Updating Red Hat Subscriptions on Multiple Hosts Use this procedure for post-installation changes to multiple content hosts at the same time. Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Hosts > Content Hosts . On the row of each content host whose subscription you want to change, select the corresponding checkbox. From the Select Action list, select Manage Subscriptions . Optionally, enter a key and value in the Search field to filter the subscriptions displayed. Select the checkbox to the left of the subscriptions to be added or removed and click Add Selected or Remove Selected as required. Click Done to save the changes. | [
"scp ~/ manifest_file .zip root@ satellite.example.com :~/.",
"hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \"",
"hammer subscription list --organization-id 1",
"hammer host subscription attach --host host_name --subscription-id subscription_id"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/Managing_Red_Hat_Subscriptions_content-management |
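A CLI note on Section 5.5, "Updating and Refreshing Red Hat Subscription Manifests": in addition to the web UI steps above, hammer provides a refresh subcommand, so the following sketch (the organization name is a placeholder in the same style as the upload example) may be useful in scripted workflows:

hammer subscription refresh-manifest --organization " My_Organization "

As with the manifest upload, run the command on Satellite Server and then verify in Content > Subscriptions that the subscription quantities reflect the refreshed allocation.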
Chapter 11. Configuring the audit log policy | Chapter 11. Configuring the audit log policy You can control the amount of information that is logged to the API server audit logs by choosing the audit log policy profile to use. 11.1. About audit log policy profiles Audit log profiles define how to log requests that come to the OpenShift API server, Kubernetes API server, OpenShift OAuth API server, and OpenShift OAuth server. OpenShift Container Platform provides the following predefined audit policy profiles: Profile Description Default Logs only metadata for read and write requests; does not log request bodies except for OAuth access token requests. This is the default policy. WriteRequestBodies In addition to logging metadata for all requests, logs request bodies for every write request to the API servers ( create , update , patch , delete , deletecollection ). This profile has more resource overhead than the Default profile. [1] AllRequestBodies In addition to logging metadata for all requests, logs request bodies for every read and write request to the API servers ( get , list , create , update , patch ). This profile has the most resource overhead. [1] None No requests are logged; even OAuth access token requests and OAuth authorize token requests are not logged. Custom rules are ignored when this profile is set. Warning It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. Sensitive resources, such as Secret , Route , and OAuthClient objects, are only ever logged at the metadata level. OpenShift OAuth server events are only ever logged at the metadata level. By default, OpenShift Container Platform uses the Default audit log profile. You can use another audit policy profile that also logs request bodies, but be aware of the increased resource usage (CPU, memory, and I/O). 11.2. Configuring the audit log policy You can configure the audit log policy to use when logging requests that come to the API servers. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver cluster Update the spec.audit.profile field: apiVersion: config.openshift.io/v1 kind: APIServer metadata: ... spec: audit: profile: WriteRequestBodies 1 1 Set to Default , WriteRequestBodies , AllRequestBodies , or None . The default profile is Default . Warning It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. Save the file to apply the changes. Verification Verify that a new revision of the Kubernetes API server pods is rolled out. It can take several minutes for all nodes to update to the new revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. 
The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 11.3. Configuring the audit log policy with custom rules You can configure an audit log policy that defines custom rules. You can specify multiple groups and define which profile to use for that group. These custom rules take precedence over the top-level profile field. The custom rules are evaluated from top to bottom, and the first that matches is applied. Important Custom rules are ignored if the top-level profile field is set to None . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver cluster Add the spec.audit.customRules field: apiVersion: config.openshift.io/v1 kind: APIServer metadata: ... spec: audit: customRules: 1 - group: system:authenticated:oauth profile: WriteRequestBodies - group: system:authenticated profile: AllRequestBodies profile: Default 2 1 Add one or more groups and specify the profile to use for that group. These custom rules take precedence over the top-level profile field. The custom rules are evaluated from top to bottom, and the first that matches is applied. 2 Set to Default , WriteRequestBodies , or AllRequestBodies . If you do not set this top-level profile field, it defaults to the Default profile. Warning Do not set the top-level profile field to None if you want to use custom rules. Custom rules are ignored if the top-level profile field is set to None . Save the file to apply the changes. Verification Verify that a new revision of the Kubernetes API server pods is rolled out. It can take several minutes for all nodes to update to the new revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 11.4. Disabling audit logging You can disable audit logging for OpenShift Container Platform. When you disable audit logging, even OAuth access token requests and OAuth authorize token requests are not logged. Warning It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. Prerequisites You have access to the cluster as a user with the cluster-admin role. 
Procedure Edit the APIServer resource: USD oc edit apiserver cluster Set the spec.audit.profile field to None : apiVersion: config.openshift.io/v1 kind: APIServer metadata: ... spec: audit: profile: None Note You can also disable audit logging only for specific groups by specifying custom rules in the spec.audit.customRules field. Save the file to apply the changes. Verification Verify that a new revision of the Kubernetes API server pods is rolled out. It can take several minutes for all nodes to update to the new revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 | [
"oc edit apiserver cluster",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: WriteRequestBodies 1",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 12 1",
"oc edit apiserver cluster",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: customRules: 1 - group: system:authenticated:oauth profile: WriteRequestBodies - group: system:authenticated profile: AllRequestBodies profile: Default 2",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 12 1",
"oc edit apiserver cluster",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: None",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 12 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_and_compliance/audit-log-policy-config |
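As a quick read-only complement to the procedures above, the currently active top-level audit profile can be printed with a JSONPath query; this is a convenience check rather than part of the documented steps:

oc get apiserver cluster -o jsonpath='{.spec.audit.profile}{"\n"}'

Any custom rules configured for specific groups appear under .spec.audit.customRules in the same APIServer resource.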
Chapter 1. Introduction to fapolicyd | Chapter 1. Introduction to fapolicyd The fapolicyd software framework controls the execution of applications based on a user-defined policy. This is one of the most efficient ways to prevent running untrusted and possibly malicious applications on the system. For more information, refer to Blocking and allowing applications by using fapolicyd in the Security hardening guide for RHEL 9. Note The procedures described below put all detected SAP HANA executables into fapolicyd trust files, which contain all names, sizes, and checksums of trusted files. SAP HANA binaries and shell scripts can only be executed if they are contained in the fapolicyd trust files. So, if you execute SAP HANA binaries or shell scripts that are not contained in the fapolicyd trust files, undesirable effects, including corruption or loss of data, could happen. You must carefully test all the steps and do proper verification on a non-production system first. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_fapolicyd_to_allow_only_sap_hana_executables/asmb_intro_configuring-fapolicyd |
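To illustrate the trust-file mechanism referred to above with a single entry (the SAP HANA path is a placeholder and not taken from the referenced procedures), an executable can be added to a dedicated trust file and the trust database reloaded:

fapolicyd-cli --file add /hana/shared/RH1/exe/linuxx86_64/hdb/hdbsql --trust-file sap-hana
fapolicyd-cli --update

Each entry stores the file path, size, and SHA-256 hash, which is why the trust files must be refreshed whenever the SAP HANA binaries are patched or updated.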
Chapter 34. Adding rows and defining rules in guided decision tables | Chapter 34. Adding rows and defining rules in guided decision tables After you have created your columns in the guided decision table, you can add rows and define rules within the guided decision tables designer. Prerequisites Columns for the guided decision table have been added as described in Chapter 29, Adding columns to guided decision tables . Procedure In the guided decision tables designer, click Insert Append row or one of the Insert row options. (You can also click Insert column to open the column wizard and define a new column.) Figure 34.1. Add Rows Double-click each cell and enter data. For cells with specified values, select from the cell drop-down options. Figure 34.2. Enter input data in each cell After you define all rows of data in the guided decision table, click Validate in the upper-right toolbar of the guided decision tables designer to validate the table. If the table validation fails, address any problems described in the error message, review all components in the table, and try again to validate the table until the table passes. Note Although guided decision tables have real-time verification and validation, you should still manually validate the completed decision table to ensure optimal results. Click Save in the table designer to save your changes. After you define your guided decision table contents, in the upper-right corner of the guided decision tables designer, you can use the search bar if needed to search for text that appears in your guided decision table. The search feature is especially helpful in complex guided decision tables with many values: Figure 34.3. Search guided decision table contents | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/guided-decision-tables-rows-create-proc |
Preface | Preface To get started with Fuse, you need to download and install the files for your JBoss EAP container. The information and instructions here guide you in installing, developing, and building your first Fuse application. Chapter 1, Getting started with Fuse on JBoss EAP Chapter 2, Setting up Maven locally | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/getting_started_with_fuse_on_jboss_eap/pr01 |
Chapter 1. Preparing for bare metal cluster installation | Chapter 1. Preparing for bare metal cluster installation 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You have read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Planning a bare metal cluster for OpenShift Virtualization If you will use OpenShift Virtualization, it is important to be aware of several requirements before you install your bare metal cluster. If you want to use live migration features, you must have multiple worker nodes at the time of cluster installation . This is because live migration requires the cluster-level high availability (HA) flag to be set to true. The HA flag is set when a cluster is installed and cannot be changed afterwards. If there are fewer than two worker nodes defined when you install your cluster, the HA flag is set to false for the life of the cluster. Note You can install OpenShift Virtualization on a single-node cluster, but single-node OpenShift does not support high availability. Live migration requires shared storage. Storage for OpenShift Virtualization must support and use the ReadWriteMany (RWX) access mode. If you plan to use Single Root I/O Virtualization (SR-IOV), ensure that your network interface controllers (NICs) are supported by OpenShift Container Platform. Additional resources Preparing your cluster for OpenShift Virtualization Getting started with OpenShift Virtualization About Single Root I/O Virtualization (SR-IOV) hardware networks Connecting a virtual machine to an SR-IOV network 1.3. Choosing a method to install OpenShift Container Platform on bare metal The OpenShift Container Platform installation program offers four methods for deploying a cluster: Interactive : You can deploy a cluster with the web-based Assisted Installer . This is the recommended approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform, it provides smart defaults, and it performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the agent-based installer for air-gapped or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the agent-based installer first. Configuration is done with a command-line interface. This approach is ideal for air-gapped or restricted networks. Automated : You can deploy a cluster on infrastructure that the installation program provisions and that the cluster maintains. The installer uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters with both connected or air-gapped or restricted networks. Full control : You can deploy a cluster on infrastructure that you prepare and maintain , which provides maximum customizability. You can deploy clusters with both connected or air-gapped or restricted networks. The clusters have the following characteristics: Highly available infrastructure with no single points of failure is available by default. Administrators maintain control over what updates are applied and when. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.3.1. 
Installing a cluster on installer-provisioned infrastructure You can install a cluster on bare metal infrastructure that is provisioned by the OpenShift Container Platform installation program, by using the following method: Installing an installer-provisioned cluster on bare metal : You can install OpenShift Container Platform on bare metal by using installer provisioning. 1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on bare metal infrastructure that you provision, by using one of the following methods: Installing a user-provisioned cluster on bare metal : You can install OpenShift Container Platform on bare metal infrastructure that you provision. For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. Installing a user-provisioned bare metal cluster with network customizations : You can install a bare metal cluster on user-provisioned infrastructure with network-customizations. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. Most of the network customizations must be applied at the installation stage. Installing a user-provisioned bare metal cluster on a restricted network : You can install a user-provisioned bare metal cluster on a restricted or disconnected network by using a mirror registry. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_bare_metal/preparing-to-install-on-bare-metal |
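To make the ReadWriteMany requirement from the OpenShift Virtualization planning notes above concrete, a persistent volume claim intended for a live-migratable virtual machine disk would request the RWX access mode; the claim name and storage class below are placeholders for whatever RWX-capable storage the cluster provides:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-rwx
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 30Gi
  storageClassName: <rwx_capable_storage_class>

If the chosen storage class cannot satisfy ReadWriteMany, the claim remains unbound and live migration of the virtual machines that use it is not possible.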
A.10. turbostat | A.10. turbostat The turbostat tool provides detailed information about the amount of time that the system spends in different states. Turbostat is provided by the kernel-tools package. By default, turbostat prints a summary of counter results for the entire system, followed by counter results every 5 seconds, under the following headings: pkg The processor package number. core The processor core number. CPU The Linux CPU (logical processor) number. %c0 The percentage of the interval for which the CPU retired instructions. GHz When this number is higher than the value in TSC, the CPU is in turbo mode. TSC The average clock speed over the course of the entire interval. %c1, %c3, and %c6 The percentage of the interval for which the processor was in the c1, c3, or c6 state, respectively. %pc3 or %pc6 The percentage of the interval for which the processor was in the pc3 or pc6 state, respectively. Specify a different period between counter results with the -i option, for example, run turbostat -i 10 to print results every 10 seconds instead. Note Upcoming Intel processors may add additional c-states. As of Red Hat Enterprise Linux 7.0, turbostat provides support for the c7, c8, c9, and c10 states. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-turbostat |
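Besides periodic sampling, turbostat can also wrap a single workload: when given a command, it forks that command and prints one set of counters covering the command's runtime. The dd invocation below is only an arbitrary CPU-bound example:

turbostat dd if=/dev/zero of=/dev/null bs=1M count=50000

Run as root, this makes it easy to compare how much time a particular job keeps the processors out of the deeper c-states.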
Preface | Preface Providing feedback on Red Hat build of Apache Camel documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create ticket Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/tooling_guide_for_red_hat_build_of_apache_camel/pr01 |
Chapter 1. Overview | Chapter 1. Overview 1.1. Major changes in RHEL 9.3 Installer and image creation Key highlights for image builder: Enhancement to the AWS EC2 AMD or Intel 64-bit architecture AMI image to support UEFI boot, in addition to the legacy BIOS boot. For more information, see New features - Installer and image creation . 1.1.1. Bootloader New default behavior of grub2-mkconfig with BLS With this release, the grub2-mkconfig command no longer overwrites the kernel command line in Boot Loader Specification (BLS) snippets with GRUB_CMDLINE_LINUX by default. Each kernel in the boot loader menu takes its kernel command line from its BLS snippet. This new default behavior is caused by the GRUB_ENABLE_BLSCFG=true option. For details, see New features in Bootloader . RHEL for Edge Key highlights for RHEL for Edge: Support added to the following image types: minimal-raw edge-vsphere edge-ami New FIDO Device Onboarding Servers container images available rhel9/fdo-manufacturing-server rhel9/fdo-owner-onboarding-server rhel9/fdo-rendezvous-server rhel9/fdo-serviceinfo-api-server For more information, see New features - RHEL for Edge . Security Key security-related highlights: Keylime was rebased to version 7.3.0. The keylime RHEL System Role is available. With this role, you can more easily configure the Keylime verifier and Keylime registrar. OpenSSH was migrated further from the less secure SHA-1 message digest for cryptographic purposes, and instead applies the more secure SHA-2 in additional scenarios. The pcsc-lite-ccid USB Chip/Smart Card Interface Device(CCID)) and Integrated Circuit Card Device (ICCD) driver was rebased to version 1.5.2. RHEL 9.3 introduces further improvements to support the Extended Master Secret (EMS) extension (RFC 7627) required by the FIPS-140-3 standard for all TLS 1.2 connections. SEtools , the collection of graphical tools, command-line tools, and libraries for SELinux policy analysis, was rebased to version 4.4.3. OpenSCAP was rebased to version 1.3.8. SCAP Security Guide was rebased to version 0.1.69, most notably: ANSSI profiles were updated to version 2.0. Three new SCAP profiles were added for RHEL 9 aligned with the CCN-STIC-610A22 Guide. See New features - Security for more information. Dynamic programming languages, web and database servers Later versions of the following Application Streams are now available: Redis 7 Node.js 20 In addition, the Apache HTTP Server has been updated to version 2.4.57. See New features - Dynamic programming languages, web and database servers for more information. Compilers and development tools Updated system toolchain The following system toolchain component has been updated in RHEL 9.3: GCC 11.4.1 Updated performance tools and debuggers The following performance tools and debuggers have been updated in RHEL 9.3: Valgrind 3.21 SystemTap 4.9 elfutils 0.189 Updated performance monitoring tools The following performance monitoring tools have been updated in RHEL 9.3: PCP 6.0.5 Grafana 9.2.10 Updated compiler toolsets The following compiler toolsets have been updated in RHEL 9.3: GCC Toolset 13 (new) LLVM Toolset 16.0.6 Rust Toolset 1.71.1 Go Toolset 1.20.10 For detailed changes, see New features - Compilers and development tools . Java implementations in RHEL 9 The RHEL 9 AppStream repository includes: The java-21-openjdk packages, which provide the OpenJDK 21 Java Runtime Environment and the OpenJDK 21 Java Software Development Kit. An OpenJDK 21.0.1 security release is also available to install. 
It is recommended that you install the OpenJDK 21.0.1 update to acquire the latest security fixes. The java-17-openjdk packages, which provide the OpenJDK 17 Java Runtime Environment and the OpenJDK 17 Java Software Development Kit. The java-11-openjdk packages, which provide the OpenJDK 11 Java Runtime Environment and the OpenJDK 11 Java Software Development Kit. The java-1.8.0-openjdk packages, which provide the OpenJDK 8 Java Runtime Environment and the OpenJDK 8 Java Software Development Kit. The Red Hat build of OpenJDK packages share a single set of binaries between its portable Linux releases and RHEL 9.3 and later releases. With this update, there is a change in the process of rebuilding the OpenJDK packages on RHEL from the source RPM. For more information about the new rebuilding process, see the README.md file which is available in the SRPM package of the Red Hat build of OpenJDK and is also installed by the java-*-openjdk-headless packages under the /usr/share/doc tree. For more information, see OpenJDK documentation . 1.2. In-place upgrade In-place upgrade from RHEL 8 to RHEL 9 The supported in-place upgrade paths currently are: From RHEL 8.6 to RHEL 9.0, RHEL 8.8 to RHEL 9.2, and RHEL 8.9 to RHEL 9.3 on the following architectures: 64-bit Intel 64-bit AMD 64-bit ARM IBM POWER 9 (little endian) IBM Z architectures, excluding z13 From RHEL 8.6 to RHEL 9.0 and RHEL 8.8 to RHEL 9.2 on systems with SAP HANA For more information, see Supported in-place upgrade paths for Red Hat Enterprise Linux . For instructions on performing an in-place upgrade, see Upgrading from RHEL 8 to RHEL 9 . If you are upgrading to RHEL 9.2 with SAP HANA, ensure that the system is certified for SAP before the upgrade. For instructions on performing an in-place upgrade on systems with SAP environments, see How to in-place upgrade SAP environments from RHEL 8 to RHEL 9 . Notable enhancements include: Requirements on disk space have been significantly reduced on systems with XFS filesystems formatted with ftype=0 . Disk images created during the upgrade process for upgrade purposes now have dynamic sizes. The LEAPP_OVL_SIZE environment variable is not needed anymore. Issues with the calculation of the required free space on existing disk partitions have been fixed. The missing free disk space is now correctly detected before the required reboot of the system, and the report correctly displays file systems that do not have enough free space to proceed with the upgrade RPM transaction. Third-party drivers can now be managed during the in-place upgrade process using custom leapp actors. An overview of the pre-upgrade and upgrade reports is now printed in the terminal. Upgrades of RHEL Real Time and RHEL Real Time for Network Functions Virtualization (NFV) in Red Hat OpenStack Platform are now supported. In-place upgrade from RHEL 7 to RHEL 9 It is not possible to perform an in-place upgrade directly from RHEL 7 to RHEL 9. However, you can perform an in-place upgrade from RHEL 7 to RHEL 8 and then perform a second in-place upgrade to RHEL 9. For more information, see Upgrading from RHEL 7 to RHEL 8 . 1.3. Red Hat Customer Portal Labs Red Hat Customer Portal Labs is a set of tools in a section of the Customer Portal available at https://access.redhat.com/labs/ . The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and quickly deploy and configure complex applications. 
Some of the most popular applications are: Registration Assistant Kickstart Generator Red Hat Product Certificates Red Hat CVE Checker Kernel Oops Analyzer Red Hat Code Browser VNC Configurator Red Hat OpenShift Container Platform Update Graph Red Hat Satellite Upgrade Helper JVM Options Configuration Tool Load Balancer Configuration Tool Red Hat OpenShift Data Foundation Supportability and Interoperability Checker Ansible Automation Platform Upgrade Assistant Ceph Placement Groups (PGs) per Pool Calculator Red Hat Out of Memory Analyzer 1.4. Additional resources Capabilities and limits of Red Hat Enterprise Linux 9 as compared to other versions of the system are available in the Knowledgebase article Red Hat Enterprise Linux technology capabilities and limits . Information regarding the Red Hat Enterprise Linux life cycle is provided in the Red Hat Enterprise Linux Life Cycle document. The Package manifest document provides a package listing for RHEL 9, including licenses and application compatibility levels. Application compatibility levels are explained in the Red Hat Enterprise Linux 9: Application Compatibility Guide document. Major differences between RHEL 8 and RHEL 9 , including removed functionality, are documented in Considerations in adopting RHEL 9 . Instructions on how to perform an in-place upgrade from RHEL 8 to RHEL 9 are provided by the document Upgrading from RHEL 8 to RHEL 9 . The Red Hat Insights service, which enables you to proactively identify, examine, and resolve known technical issues, is available with all RHEL subscriptions. For instructions on how to install the Red Hat Insights client and register your system to the service, see the Red Hat Insights Get Started page. Note Public release notes include links to access the original tracking tickets, but private release notes are not viewable so do not include links. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.3_release_notes/overview |
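A practical note on the bootloader change summarized in section 1.1.1: because grub2-mkconfig now leaves the kernel command line in the BLS snippets alone, persistent kernel arguments are typically managed with grubby instead; the console argument below is only an illustrative value:

grubby --update-kernel=ALL --args="console=ttyS0,115200"

grubby edits the per-kernel BLS entries directly, so arguments added this way survive subsequent grub2-mkconfig runs under the new GRUB_ENABLE_BLSCFG=true default.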
12.5. Samba Configuration | 12.5. Samba Configuration The Samba configuration file smb.conf is located at /etc/samba/smb.conf in this example. It contains the following parameters: This example exports a share with name csmb located at /mnt/gfs2/share . This is different from the GFS2 shared filesystem at /mnt/ctdb/.ctdb.lock that we specified as the CTDB_RECOVERY_LOCK parameter in the CTDB configuration file at /etc/sysconfig/ctdb . In this example, we will create the share directory in /mnt/gfs2 when we mount it for the first time. The clustering = yes entry instructs Samba to use CTDB. The netbios name = csmb-server entry explicitly sets all the nodes to have a common NetBIOS name. The ea support parameter is required if you plan to use extended attributes. The smb.conf configuration file must be identical on all of the cluster nodes. Samba also offers registry-based configuration using the net conf command to automatically keep configuration in sync between cluster members without having to manually copy configuration files among the cluster nodes. For information on the net conf command, see the net (8) man page. | [
"[global] guest ok = yes clustering = yes netbios name = csmb-server [csmb] comment = Clustered Samba public = yes path = /mnt/gfs2/share writeable = yes ea support = yes"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-samba-configuration-ca |
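Before copying the smb.conf shown above to every cluster node, the file can be validated with Samba's standard syntax checker; this is a general-purpose check rather than a step from the clustered Samba procedure:

testparm /etc/samba/smb.conf

testparm flags unknown parameters and prints the effective service definitions, which makes it easy to confirm that clustering = yes and the csmb share definition were loaded as intended. For registry-based configuration, net conf list prints the settings currently shared across the cluster.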
Chapter 2. Configuring a GCP project | Chapter 2. Configuring a GCP project Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 2.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 2.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 2.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 2.2. Optional API services API service Console service name Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 2.3. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. 
If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 2.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 2.3. GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Compute Global 11 1 Forwarding rules Compute Global 2 0 In-use global IP addresses Compute Global 4 1 Health checks Compute Global 3 0 Images Compute Global 1 0 Networks Compute Global 2 0 Static IP addresses Compute Region 4 1 Routers Compute Global 1 0 Routes Compute Global 2 0 Subnetworks Compute Global 2 0 Target pools Compute Global 3 0 CPUs Compute Region 28 4 Persistent disk SSD (GB) Compute Region 896 128 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 2.5. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. 
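For the key-based option just mentioned, a gcloud sketch follows; the service account name and project ID are placeholders, and the output file name is arbitrary:

gcloud iam service-accounts keys create osServiceAccount.json --iam-account=<service_account_name>@<project_id>.iam.gserviceaccount.com

Keep the resulting JSON file private and supply it to the installation program when it asks for GCP credentials.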
See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. 2.5.1. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create a service account with the following permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin Role Administrator Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using the Cloud Credential Operator in passthrough mode Compute Load Balancer Admin The following roles are applied to the service accounts that the control plane and compute machines use: Table 2.4. GCP service account roles Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin 2.5.2. Required GCP permissions for installer-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the installer-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Example 2.1. Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 2.2. Required permissions for creating load balancer resources compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use Example 2.3. 
Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list Example 2.4. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 2.5. Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.disks.setLabels compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 2.6. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 2.7. Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly Example 2.8. Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 2.9. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 2.10. Required IAM permissions for installation iam.roles.create iam.roles.get iam.roles.update Example 2.11. Required permissions when authenticating without a service account key iam.serviceAccounts.signBlob Example 2.12. Optional Images permissions for installation compute.images.list Example 2.13. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 2.14. Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 2.15. Required permissions for deleting load balancer resources compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list Example 2.16. Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 2.17. 
Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 2.18. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 2.19. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 2.20. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list Example 2.21. Required Images permissions for deletion compute.images.list 2.5.3. Required GCP permissions for shared VPC installations When you are installing a cluster to a shared VPC , you must configure the service account for both the host project and the service project. If you are not installing to a shared VPC, you can skip this section. You must apply the minimum roles required for a standard installation as listed above, to the service project. Important You can use granular permissions for a Cloud Credential Operator that operates in either manual or mint credentials mode. You cannot use granular permissions in passthrough credentials mode. Ensure that the host project applies one of the following configurations to the service account: Example 2.22. Required permissions for creating firewalls in the host project projects/<host-project>/roles/dns.networks.bindPrivateDNSZone roles/compute.networkAdmin roles/compute.securityAdmin Example 2.23. Required minimal permissions projects/<host-project>/roles/dns.networks.bindPrivateDNSZone roles/compute.networkUser 2.6. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 2.7. Next steps Install an OpenShift Container Platform cluster on GCP. 
You can install a customized cluster or quickly install a cluster with default options. | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_gcp/installing-gcp-account |
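The console-based steps in sections 2.2, 2.3, and 2.5 above can also be scripted. The chapter itself does not show CLI commands, so the following is only a sketch using the gcloud CLI; the project ID openshift-demo, the zone name clusters-zone, the domain clusters.openshiftcorp.com, and the service account name openshift-installer are illustrative placeholders, not values from the text above.

# Enable the required API services from Table 2.1
$ gcloud config set project openshift-demo
$ gcloud services enable compute.googleapis.com cloudresourcemanager.googleapis.com \
    dns.googleapis.com iamcredentials.googleapis.com iam.googleapis.com serviceusage.googleapis.com

# Create the public hosted zone and read back its authoritative name servers
$ gcloud dns managed-zones create clusters-zone \
    --dns-name=clusters.openshiftcorp.com. \
    --description="Public zone for OpenShift clusters" --visibility=public
$ gcloud dns managed-zones describe clusters-zone --format="value(nameServers)"

# Create the installer service account and a JSON key for it
$ gcloud iam service-accounts create openshift-installer --display-name="OpenShift installer"
$ gcloud projects add-iam-policy-binding openshift-demo \
    --member="serviceAccount:openshift-installer@openshift-demo.iam.gserviceaccount.com" \
    --role="roles/owner"
$ gcloud iam service-accounts keys create gcp-key.json \
    --iam-account=openshift-installer@openshift-demo.iam.gserviceaccount.com

If your security policy rules out the Owner role, grant the individual roles listed in section 2.5.1 instead of roles/owner.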
5.302. slapi-nis | 5.302. slapi-nis 5.302.1. RHBA-2012:0821 - slapi-nis bug fix and enhancement update Updated slapi-nis packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The slapi-nis packages contain the NIS server plug-in and the Schema Compatibility plug-in for use with the 389 directory server. The slapi-nis packages have been upgraded to upstream version 0.40, which provides a number of bug fixes and enhancements over the version. (BZ# 789152 ) Bug Fixes BZ# 784119 Prior to this update, the schema compatibility plug-in could, under certain circumstances, leak memory when computing values for inclusion in the constructed entries even if the relevant values were not changed. As a consequence, the performance could decrease rapidly and all available memory was consumed. This update modifies the underlying code so that the memory leaks no longer occur. BZ# 800625 Prior to this update, the directory server could terminate unexpectedly when processing a distinguished name if the relative distinguished name of a compatibility entry contained an escaped special character. This update modifies the plug-in so that special characters are now escaped when generating relative distinguished name values. BZ# 809559 Prior to this update, padding values passed to %link were read as literal values. As a consequence, the values could not use the "%ifeq" expression. This update modifies the underlying code to treat the padding values as expressions using the "%ifeq" expression. Enhancement BZ# 730434 Prior to this update, the plug-ins used the platform-neutral Netscape Portable Runtime (NSPR) read-write locking APIs to manage some of their internal data. This update modifies slapi-nis to use the locking functionality provided by the directory server itself. All users of slapi-nis are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/slapi-nis |
Chapter 3. Considerations | Chapter 3. Considerations This chapter describes the advantages, limitations, and available options for various Red Hat Virtualization components. 3.1. Host Types Use the host type that best suits your environment. You can also use both types of host in the same cluster if required. All managed hosts within a cluster must have the same CPU type. Intel and AMD CPUs cannot co-exist within the same cluster. For information about supported maximums and limits, such as the maximum number of hosts that the Red Hat Virtualization Manager can support, see Supported Limits for Red Hat Virtualization . 3.1.1. Red Hat Virtualization Hosts Red Hat Virtualization Hosts (RHVH) have the following advantages over Red Hat Enterprise Linux hosts: RHVH is included in the subscription for Red Hat Virtualization. Red Hat Enterprise Linux hosts may require additional subscriptions. RHVH is deployed as a single image. This results in a streamlined update process; the entire image is updated as a whole, as opposed to packages being updated individually. Only the packages and services needed to host virtual machines or manage the host itself are included. This streamlines operations and reduces the overall attack vector; unnecessary packages and services are not deployed and, therefore, cannot be exploited. The Cockpit web interface is available by default and includes extensions specific to Red Hat Virtualization, including virtual machine monitoring tools and a GUI installer for the self-hosted engine. Cockpit is supported on Red Hat Enterprise Linux hosts, but must be manually installed. 3.1.2. Red Hat Enterprise Linux hosts Red Hat Enterprise Linux hosts have the following advantages over Red Hat Virtualization Hosts: Red Hat Enterprise Linux hosts are highly customizable, so may be preferable if, for example, your hosts require a specific file system layout. Red Hat Enterprise Linux hosts are better suited for frequent updates, especially if additional packages are installed. Individual packages can be updated, rather than a whole image. 3.2. Storage Types Each data center must have at least one data storage domain. An ISO storage domain per data center is also recommended. Export storage domains are deprecated, but can still be created if necessary. A storage domain can be made of either block devices (iSCSI or Fibre Channel) or a file system. By default, GlusterFS domains and local storage domains support 4K block size. 4K block size can provide better performance, especially when using large files, and it is also necessary when you use tools that require 4K compatibility, such as VDO. Note GlusterFS Storage is deprecated, and will no longer be supported in future releases. Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. The storage types described in the following sections are supported for use as data storage domains. ISO and export storage domains only support file-based storage types. The ISO domain supports local storage when used in a local storage data center. See: Storage in the Administration Guide . Red Hat Enterprise Linux Storage Administration Guide 3.2.1. NFS NFS versions 3 and 4 are supported by Red Hat Virtualization 4. Production workloads require an enterprise-grade NFS server, unless NFS is only being used as an ISO storage domain. 
When enterprise NFS is deployed over 10GbE, segregated with VLANs, and individual services are configured to use specific ports, it is both fast and secure. As NFS exports are grown to accommodate more storage needs, Red Hat Virtualization recognizes the larger data store immediately. No additional configuration is necessary on the hosts or from within Red Hat Virtualization. This provides NFS a slight edge over block storage from a scale and operational perspective. See: Network File System (NFS) in the Red Hat Enterprise Linux Storage Administration Guide . Preparing and Adding NFS Storage in the Administration Guide . 3.2.2. iSCSI Production workloads require an enterprise-grade iSCSI server. When enterprise iSCSI is deployed over 10GbE, segregated with VLANs, and utilizes CHAP authentication, it is both fast and secure. iSCSI can also use multipathing to improve high availability. Red Hat Virtualization supports 1500 logical volumes per block-based storage domain. No more than 300 LUNs are permitted. See: Online Storage Management in the Red Hat Enterprise Linux Storage Administration Guide . Adding iSCSI Storage in the Administration Guide . 3.2.3. Fibre Channel Fibre Channel is both fast and secure, and should be taken advantage of if it is already in use in the target data center. It also has the advantage of low CPU overhead as compared to iSCSI and NFS. Fibre Channel can also use multipathing to improve high availability. Red Hat Virtualization supports 1500 logical volumes per block-based storage domain. No more than 300 LUNs are permitted. See: Online Storage Management in the Red Hat Enterprise Linux Storage Administration Guide . Adding FCP Storage in the Administration Guide . 3.2.4. Fibre Channel over Ethernet To use Fibre Channel over Ethernet (FCoE) in Red Hat Virtualization, you must enable the fcoe key on the Manager, and install the vdsm-hook-fcoe package on the hosts. Red Hat Virtualization supports 1500 logical volumes per block-based storage domain. No more than 300 LUNs are permitted. See: Online Storage Management in the Red Hat Enterprise Linux Storage Administration Guide . How to Set Up Red Hat Virtualization Manager to Use FCoE in the Administration Guide . 3.2.5. Red Hat Hyperconverged Infrastructure Red Hat Hyperconverged Infrastructure (RHHI) combines Red Hat Virtualization and Red Hat Gluster Storage on the same infrastructure, instead of connecting Red Hat Virtualization to a remote Red Hat Gluster Storage server. This compact option reduces operational expenses and overhead. See: Deploying Red Hat Hyperconverged Infrastructure for Virtualization Deploying Red Hat Hyperconverged Infrastructure for Virtualization On A Single Node Automating RHHI for Virtualization Deployment 3.2.6. POSIX-Compliant FS Other POSIX-compliant file systems can be used as storage domains in Red Hat Virtualization, as long as they are clustered file systems, such as Red Hat Global File System 2 (GFS2), and support sparse files and direct I/O. The Common Internet File System (CIFS), for example, does not support direct I/O, making it incompatible with Red Hat Virtualization. See: Red Hat Enterprise Linux Global File System 2 Adding POSIX Compliant File System Storage in the Administration Guide . 3.2.7. Local Storage Local storage is set up on an individual host, using the host's own resources. When you set up a host to use local storage, it is automatically added to a new data center and cluster that no other hosts can be added to. 
Virtual machines created in a single-host cluster cannot be migrated, fenced, or scheduled. For Red Hat Virtualization Hosts, local storage should always be defined on a file system that is separate from / (root). Use a separate logical volume or disk. See: Preparing and Adding Local Storage in the Administration Guide . 3.3. Networking Considerations Familiarity with network concepts and their use is highly recommended when planning and setting up networking in a Red Hat Virtualization environment. Read your network hardware vendor's guides for more information on managing networking. Logical networks may be supported using physical devices such as NICs, or logical devices such as network bonds. Bonding improves high availability, and provides increased fault tolerance, because all network interface cards in the bond must fail for the bond itself to fail. Bonding modes 1, 2, 3, and 4 support both virtual machine and non-virtual machine network types. Modes 0, 5, and 6 only support non-virtual machine networks. Red Hat Virtualization uses Mode 4 by default. It is not necessary to have one device for each logical network, as multiple logical networks can share a single device by using Virtual LAN (VLAN) tagging to isolate network traffic. To make use of this feature, VLAN tagging must also be supported at the switch level. The limits that apply to the number of logical networks that you may define in a Red Hat Virtualization environment are: The number of logical networks attached to a host is limited to the number of available network devices combined with the maximum number of Virtual LANs (VLANs), which is 4096. The number of networks that can be attached to a host in a single operation is currently limited to 50. The number of logical networks in a cluster is limited to the number of logical networks that can be attached to a host as networking must be the same for all hosts in a cluster. The number of logical networks in a data center is limited only by the number of clusters it contains in combination with the number of logical networks permitted per cluster. Important Take additional care when modifying the properties of the Management network ( ovirtmgmt ). Incorrect changes to the properties of the ovirtmgmt network may cause hosts to become unreachable. Important If you plan to use Red Hat Virtualization to provide services for other environments, remember that the services will stop if the Red Hat Virtualization environment stops operating. Red Hat Virtualization is fully integrated with Cisco Application Centric Infrastructure (ACI), which provides comprehensive network management capabilities, thus mitigating the need to manually configure the Red Hat Virtualization networking infrastructure. The integration is performed by configuring Red Hat Virtualization on Cisco's Application Policy Infrastructure Controller (APIC) version 3.1(1) and later, according to the Cisco's documentation . 3.4. Directory Server Support During installation, Red Hat Virtualization Manager creates a default admin user in a default internal domain. This account is intended for use when initially configuring the environment, and for troubleshooting. You can create additional users on the internal domain using ovirt-aaa-jdbc-tool . User accounts created on local domains are known as local users. See Administering User Tasks From the Command Line in the Administration Guide . You can also attach an external directory server to your Red Hat Virtualization environment and use it as an external domain. 
User accounts created on external domains are known as directory users. Attachment of more than one directory server to the Manager is also supported. The following directory servers are supported for use with Red Hat Virtualization. For more detailed information on installing and configuring a supported directory server, see the vendor's documentation. Microsoft Active Directory Red Hat Enterprise Linux Identity Management Red Hat Directory Server OpenLDAP IBM Security (Tivoli) Directory Server Important A user with permissions to read all users and groups must be created in the directory server specifically for use as the Red Hat Virtualization administrative user. Do not use the administrative user for the directory server as the Red Hat Virtualization administrative user. See: Users and Roles in the Administration Guide . 3.5. Infrastructure Considerations 3.5.1. Local or Remote Hosting The following components can be hosted on either the Manager or a remote machine. Keeping all components on the Manager machine is easier and requires less maintenance, so is preferable when performance is not an issue. Moving components to a remote machine requires more maintenance, but can improve the performance of both the Manager and Data Warehouse. Data Warehouse database and service To host Data Warehouse on the Manager, select Yes when prompted by engine-setup . To host Data Warehouse on a remote machine, select No when prompted by engine-setup , and see Installing and Configuring Data Warehouse on a Separate Machine in Installing Red Hat Virtualization as a standalone Manager with remote databases . To migrate Data Warehouse post-installation, see Migrating Data Warehouse to a Separate Machine in the Data Warehouse Guide . You can also host the Data Warehouse service and the Data Warehouse database separately from one another. Manager database To host the Manager database on the Manager, select Local when prompted by engine-setup . To host the Manager database on a remote machine, see Preparing a Remote PostgreSQL Database in Installing Red Hat Virtualization as a standalone Manager with remote databases before running engine-setup on the Manager. To migrate the Manager database post-installation, see Migrating the Engine Database to a Remote Server Database in the Administration Guide . Websocket proxy To host the websocket proxy on the Manager, select Yes when prompted by engine-setup . Important Self-hosted engine environments use an appliance to install and configure the Manager virtual machine, so Data Warehouse, the Manager database, and the websocket proxy can only be made external post-installation. 3.5.2. Remote Hosting Only The following components must be hosted on a remote machine: DNS Due to the extensive use of DNS in a Red Hat Virtualization environment, running the environment's DNS service as a virtual machine hosted in the environment is not supported. Storage With the exception of local storage , the storage service must not be on the same machine as the Manager or any host. Identity Management IdM ( ipa-server ) is incompatible with the mod_ssl package, which is required by the Manager. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/planning_and_prerequisites_guide/considerations |
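Section 3.4 above points to ovirt-aaa-jdbc-tool for managing local users on the internal domain without showing its invocation; the lines below are a sketch of typical usage run as root on the Manager machine, and the user name backup_admin and the attribute values are illustrative, not taken from the text above.

# ovirt-aaa-jdbc-tool user add backup_admin --attribute=firstName=Backup --attribute=lastName=Admin
# ovirt-aaa-jdbc-tool user password-reset backup_admin
# ovirt-aaa-jdbc-tool user show backup_admin

The password-reset subcommand prompts for the new password. Directory users from an attached external domain are managed in the directory service itself, as described above; the tool only manages the internal domain.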
Chapter 15. Backup and restore | Chapter 15. Backup and restore 15.1. Installing and configuring OADP As a cluster administrator, you install the OpenShift API for Data Protection (OADP) by installing the OADP Operator. The Operator installs Velero 1.12 . You create a default Secret for your backup storage provider and then you install the Data Protection Application. 15.1.1. Installing the OADP Operator You install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.12 by using Operator Lifecycle Manager (OLM). The OADP Operator installs Velero 1.12 . Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the OADP Operator . Select the OADP Operator and click Install . Click Install to install the Operator in the openshift-adp project. Click Operators Installed Operators to verify the installation. 15.1.2. About backup and snapshot locations and their secrets You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR). Backup locations You specify AWS S3-compatible object storage as a backup location, such as Multicloud Object Gateway; Ceph RADOS Gateway, also known as Ceph Object Gateway; or MinIO. Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage. Snapshot locations If you use your cloud provider's native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location. If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver. If you use Restic, you do not need to specify a snapshot location because Restic backs up the file system on object storage. Secrets If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret . If the backup and snapshot locations use different credentials, you create two secret objects: Custom Secret for the backup location, which you specify in the DataProtectionApplication CR. Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR. Important The Data Protection Application requires a default Secret . Otherwise, the installation will fail. If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. 15.1.2.1. Creating a default Secret You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location. Note The DataProtectionApplication custom resource (CR) requires a default Secret . Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used. If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file. Prerequisites Your object storage and cloud storage, if any, must use the same credentials. You must configure object storage for Velero. You must create a credentials-velero file for the object storage in the appropriate format. 
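The prerequisites above call for a credentials-velero file in the appropriate format for the object storage, but the format is not shown at this point; the snippet below is a sketch for an AWS S3-compatible provider, and the key values are standard placeholder examples rather than real credentials.

[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Other providers use their own layout, so check the provider-specific OADP instructions before creating the file; the file is then passed to the oc create secret command in the procedure that follows.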
Procedure Create a Secret with the default name: USD oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application. 15.1.3. Configuring the Data Protection Application You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates. 15.1.3.1. Setting Velero CPU and memory resource allocations You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: "1" memory: 1024Mi requests: cpu: 200m memory: 256Mi 1 Specify the node selector to be supplied to Velero podSpec. 2 The resourceAllocations listed are for average usage. 15.1.3.2. Enabling self-signed CA certificates You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error. Prerequisites You must have the OpenShift API for Data Protection (OADP) Operator installed. Procedure Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: ... backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: "false" 2 ... 1 Specify the Base64-encoded CA certificate string. 2 The insecureSkipTLSVerify configuration can be set to either "true" or "false" . If set to "true" , SSL/TLS security is disabled. If set to "false" , SSL/TLS security is enabled. 15.1.4. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with the default name, cloud-credentials , for the snapshot location. This Secret is not referenced in the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. 
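As the note above says, a default Secret built from an empty credentials-velero file satisfies the installation requirement when no backup or snapshot location credentials are supplied during installation; a short sketch using the same oc invocation shown earlier in this section:

$ touch credentials-velero
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero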
Note Velero creates a secret named velero-repo-credentials in the OADP namespace, which contains a default backup repository password. You can update the secret with your own password encoded as base64 before you run your first backup targeted to the backup repository. The value of the key to update is Data[repository-password] . After you create your DPA, the first time that you run a backup targeted to the backup repository, Velero creates a backup repository whose secret is velero-repo-credentials , which contains either the default password or the one you replaced it with. If you update the secret password after the first backup, the new password will not match the password in velero-repo-credentials , and therefore, Velero will not be able to connect with the older backups. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - kubevirt 1 - gcp 2 - csi 3 - openshift 4 resourceTimeout: 10m 5 restic: enable: true 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: provider: gcp 8 default: true credential: key: cloud name: <default_secret> 9 objectStorage: bucket: <bucket_name> 10 prefix: <prefix> 11 1 The kubevirt plugin is mandatory for OpenShift Virtualization. 2 Specify the plugin for the backup provider, for example, gcp , if it exists. 3 The csi plugin is mandatory for backing up PVs with CSI snapshots. The csi plugin uses the Velero CSI beta snapshot APIs . You do not need to configure a snapshot location. 4 The openshift plugin is mandatory. 5 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 6 Set this value to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that Restic pods run on each working node. In OADP version 1.2 and later, you can configure Restic for backups by adding spec.defaultVolumesToFsBackup: true to the Backup CR. In OADP version 1.1, add spec.defaultVolumesToRestic: true to the Backup CR. 7 Specify on which nodes Restic is available. By default, Restic runs on all nodes. 8 Specify the backup provider. 9 Specify the correct default name for the Secret , for example, cloud-credentials-gcp , if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a Secret name, the default name is used. 10 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 11 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verify the installation by viewing the OADP resources: USD oc get all -n openshift-adp Example output 15.1.4.1. Enabling CSI in the DataProtectionApplication CR You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots. Prerequisites The cloud provider must support CSI snapshots. 
Procedure Edit the DataProtectionApplication CR, as in the following example: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication ... spec: configuration: velero: defaultPlugins: - openshift - csi 1 1 Add the csi default plugin. 15.1.5. Uninstalling OADP You uninstall the OpenShift API for Data Protection (OADP) by deleting the OADP Operator. See Deleting Operators from a cluster for details. 15.2. Backing up and restoring virtual machines Important OADP for OpenShift Virtualization is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You back up and restore virtual machines by using the OpenShift API for Data Protection (OADP) . Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Install the OADP Operator according to the instructions for your storage provider. Install the Data Protection Application with the kubevirt and openshift plugins . Back up virtual machines by creating a Backup custom resource (CR) . Restore the Backup CR by creating a Restore CR . 15.2.1. Additional resources OADP features and plugins Troubleshooting 15.3. Backing up virtual machines Important OADP for OpenShift Virtualization is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You back up virtual machines (VMs) by creating an OpenShift API for Data Protection (OADP) Backup custom resource (CR) . The Backup CR performs the following actions: Backs up OpenShift Virtualization resources by creating an archive file on S3-compatible object storage, such as Multicloud Object Gateway , Noobaa, or Minio. Backs up VM disks by using one of the following options: Container Storage Interface (CSI) snapshots on CSI-enabled cloud storage, such as Ceph RBD or Ceph FS. Backing up applications with File System Backup: Kopia or Restic on object storage. Note OADP provides backup hooks to freeze the VM file system before the backup operation and unfreeze it when the backup is complete. The kubevirt-controller creates the virt-launcher pods with annotations that enable Velero to run the virt-freezer binary before and after the backup operation. The freeze and unfreeze APIs are subresources of the VM snapshot API. See About virtual machine snapshots for details. You can add hooks to the Backup CR to run commands on specific VMs before or after the backup operation. You schedule a backup by creating a Schedule CR instead of a Backup CR. 15.3.1. Creating a Backup CR To back up Kubernetes resources, internal images, and persistent volumes (PVs), create a Backup custom resource (CR). 
Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. Backup location prerequisites: You must have S3 object storage configured for Velero. You must have a backup location configured in the DataProtectionApplication CR. Snapshot location prerequisites: Your cloud provider must have a native snapshot API or support Container Storage Interface (CSI) snapshots. For CSI snapshots, you must create a VolumeSnapshotClass CR to register the CSI driver. You must have a volume location configured in the DataProtectionApplication CR. Procedure Retrieve the backupStorageLocations CRs by entering the following command: USD oc get backupStorageLocations -n openshift-adp Example output NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m Create a Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s labelSelector: 5 matchLabels: app=<label_1> app=<label_2> app=<label_3> orLabelSelectors: 6 - matchLabels: app=<label_1> app=<label_2> app=<label_3> 1 Specify an array of namespaces to back up. 2 Optional: Specify an array of resources to include in the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified. If unspecified, all resources are included. 3 Optional: Specify an array of resources to exclude from the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified. 4 Specify the name of the backupStorageLocations CR. 5 Map of {key,value} pairs of backup resources that have all of the specified labels. 6 Map of {key,value} pairs of backup resources that have one or more of the specified labels. Verify that the status of the Backup CR is Completed : USD oc get backup -n openshift-adp <backup> -o jsonpath='{.status.phase}' 15.3.1.1. Backing up persistent volumes with CSI snapshots You back up persistent volumes with Container Storage Interface (CSI) snapshots by editing the VolumeSnapshotClass custom resource (CR) of the cloud storage before you create the Backup CR. Prerequisites The cloud provider must support CSI snapshots. You must enable CSI in the DataProtectionApplication CR. Procedure Add the metadata.labels.velero.io/csi-volumesnapshot-class: "true" key-value pair to the VolumeSnapshotClass CR: apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: "true" driver: <csi_driver> deletionPolicy: Retain You can now create a Backup CR. 15.3.1.2. Backing up applications with Restic You back up Kubernetes resources, internal images, and persistent volumes with Restic by editing the Backup custom resource (CR). You do not need to specify a snapshot location in the DataProtectionApplication CR. Important Restic does not support backing up hostPath volumes. For more information, see additional Restic limitations . Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. You must not disable the default Restic installation by setting spec.configuration.restic.enable to false in the DataProtectionApplication CR. The DataProtectionApplication CR must be in a Ready state. 
Procedure Edit the Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToFsBackup: true 1 ... 1 In OADP version 1.2 and later, add the defaultVolumesToFsBackup: true setting within the spec block. In OADP version 1.1, add defaultVolumesToRestic: true . 15.3.1.3. Creating backup hooks You create backup hooks to run commands in a container in a pod by editing the Backup custom resource (CR). Pre hooks run before the pod is backed up. Post hooks run after the backup. Procedure Add a hook to the spec.hooks block of the Backup CR, as in the following example: apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: [] - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11 ... 1 Optional: You can specify namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces. 2 Optional: You can specify namespaces to which the hook does not apply. 3 Currently, pods are the only supported resource that hooks can apply to. 4 Optional: You can specify resources to which the hook does not apply. 5 Optional: This hook only applies to objects matching the label. If this value is not specified, the hook applies to all namespaces. 6 Array of hooks to run before the backup. 7 Optional: If the container is not specified, the command runs in the first container in the pod. 8 This is the entrypoint for the init container being added. 9 Allowed values for error handling are Fail and Continue . The default is Fail . 10 Optional: How long to wait for the commands to run. The default is 30s . 11 This block defines an array of hooks to run after the backup, with the same parameters as the pre-backup hooks. 15.3.2. Additional resources Overview of CSI volume snapshots 15.4. Restoring virtual machines You restore an OpenShift API for Data Protection (OADP) Backup custom resource (CR) by creating a Restore CR . You can add hooks to the Restore CR to run commands in init containers, before the application container starts, or in the application container itself. 15.4.1. Creating a Restore CR You restore a Backup custom resource (CR) by creating a Restore CR. Prerequisites You must install the OpenShift API for Data Protection (OADP) Operator. The DataProtectionApplication CR must be in a Ready state. You must have a Velero Backup CR. The persistent volume (PV) capacity must match the requested size at backup time. Adjust the requested size if needed. Procedure Create a Restore CR, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3 1 Name of the Backup CR. 2 Optional: Specify an array of resources to include in the restore process. Resources might be shortcuts (for example, po for pods ) or fully-qualified. If unspecified, all resources are included. 
3 Optional: The restorePVs parameter can be set to false to turn off restore of PersistentVolumes from VolumeSnapshot of Container Storage Interface (CSI) snapshots or from native snapshots when VolumeSnapshotLocation is configured. Verify that the status of the Restore CR is Completed by entering the following command: USD oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}' Verify that the backup resources have been restored by entering the following command: USD oc get all -n <namespace> 1 1 Namespace that you backed up. If you use Restic to restore DeploymentConfig objects or if you use post-restore hooks, run the dc-restic-post-restore.sh cleanup script by entering the following command: USD bash dc-restic-post-restore.sh <restore-name> Note During the restore process, the OADP Velero plug-ins scale down the DeploymentConfig objects and restore the pods as standalone pods. This is done to prevent the cluster from deleting the restored DeploymentConfig pods immediately on restore and to allow Restic and post-restore hooks to complete their actions on the restored pods. The cleanup script shown below removes these disconnected pods and scales any DeploymentConfig objects back up to the appropriate number of replicas. Example 15.1. dc-restic-post-restore.sh cleanup script #!/bin/bash set -e # if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD="sha256sum" else CHECKSUM_CMD="shasum -a 256" fi label_name () { if [ "USD{#1}" -le "63" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo "USD{1:0:57}USD{sha:0:6}" } OADP_NAMESPACE=USD{OADP_NAMESPACE:=openshift-adp} if [[ USD# -ne 1 ]]; then echo "usage: USD{BASH_SOURCE} restore-name" exit 1 fi echo using OADP Namespace USDOADP_NAMESPACE echo restore: USD1 label=USD(label_name USD1) echo label: USDlabel echo Deleting disconnected restore pods oc delete pods -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{","}{.metadata.name}{","}{.metadata.annotations.oadp\.openshift\.io/original-replicas}{","}{.metadata.annotations.oadp\.openshift\.io/original-paused}{"\n"}') do IFS=',' read -ra dc_arr <<< "USDdc" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done 15.4.1.1. Creating restore hooks You create restore hooks to run commands in a container in a pod by editing the Restore custom resource (CR). You can create two types of restore hooks: An init hook adds an init container to a pod to perform setup tasks before the application container starts. If you restore a Restic backup, the restic-wait init container is added before the restore hook init container. An exec hook runs commands or scripts in a container of a restored pod. 
Procedure Add a hook to the spec.hooks block of the Restore CR, as in the following example: apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - "psql < /backup/backup.sql" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9 1 Optional: Array of namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces. 2 Currently, pods are the only supported resource that hooks can apply to. 3 Optional: This hook only applies to objects matching the label selector. 4 Optional: Timeout specifies the maximum length of time Velero waits for initContainers to complete. 5 Optional: If the container is not specified, the command runs in the first container in the pod. 6 This is the entrypoint for the init container being added. 7 Optional: How long to wait for a container to become ready. This should be long enough for the container to start and for any preceding hooks in the same container to complete. If not set, the restore process waits indefinitely. 8 Optional: How long to wait for the commands to run. The default is 30s . 9 Allowed values for error handling are Fail and Continue : Continue : Only command failures are logged. Fail : No more restore hooks run in any container in any pod. The status of the Restore CR will be PartiallyFailed . | [
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - kubevirt 1 - gcp 2 - csi 3 - openshift 4 resourceTimeout: 10m 5 restic: enable: true 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: provider: gcp 8 default: true credential: key: cloud name: <default_secret> 9 objectStorage: bucket: <bucket_name> 10 prefix: <prefix> 11",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"oc get backupStorageLocations -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s labelSelector: 5 matchLabels: app=<label_1> app=<label_2> app=<label_3> orLabelSelectors: 6 - matchLabels: app=<label_1> app=<label_2> app=<label_3>",
"oc get backup -n openshift-adp <backup> -o jsonpath='{.status.phase}'",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: \"true\" driver: <csi_driver> deletionPolicy: Retain",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToFsBackup: true 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: [] - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3",
"oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}'",
"oc get all -n <namespace> 1",
"bash dc-restic-post-restore.sh <restore-name>",
"#!/bin/bash set -e if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD=\"sha256sum\" else CHECKSUM_CMD=\"shasum -a 256\" fi label_name () { if [ \"USD{#1}\" -le \"63\" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo \"USD{1:0:57}USD{sha:0:6}\" } OADP_NAMESPACE=USD{OADP_NAMESPACE:=openshift-adp} if [[ USD# -ne 1 ]]; then echo \"usage: USD{BASH_SOURCE} restore-name\" exit 1 fi echo using OADP Namespace USDOADP_NAMESPACE echo restore: USD1 label=USD(label_name USD1) echo label: USDlabel echo Deleting disconnected restore pods delete pods -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{\",\"}{.metadata.name}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-replicas}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-paused}{\"\\n\"}') do IFS=',' read -ra dc_arr <<< \"USDdc\" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - \"psql < /backup/backup.sql\" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/virtualization/backup-and-restore |
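A minimal sketch of how the phase query from the backup commands above can be polled until Velero reports a terminal state. The backup name my-backup is a placeholder; substitute the name used in your Backup CR, and adjust the namespace if OADP is installed elsewhere.

while true; do
  phase=$(oc get backup -n openshift-adp my-backup -o jsonpath='{.status.phase}')
  echo "backup phase: ${phase:-<none>}"
  case "${phase}" in
    Completed|PartiallyFailed|Failed|FailedValidation) break ;;   # terminal Velero phases
  esac
  sleep 10
done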
2. Deployment | 2. Deployment Upstart In Red Hat Enterprise Linux 6, init from the sysvinit package has been replaced with Upstart , an event-based init system. This system handles the starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running. For more information on Upstart itself, refer to the init(8) man page. Processes are known to Upstart as jobs and are defined by files in the /etc/init directory. Upstart is very well documented via man pages. Command overview is in init(8) and job syntax is described in init(5) . Upstart provides the following behavioral changes in Red Hat Enterprise Linux 6: The /etc/inittab file is deprecated, and is now used only for setting up the default runlevel via the initdefault line. Other configuration is done via upstart jobs in the /etc/init directory. The number of active tty consoles is now set by the ACTIVE_CONSOLES variable in /etc/sysconfig/init , which is read by the /etc/init/start-ttys.conf job. The default value is ACTIVE_CONSOLES=/dev/tty[1-6] , which starts a getty on tty1 through tty6. A serial getty is still automatically configured if the serial console is the primary system console. In prior releases, this was done by kudzu , which would edit /etc/inittab . In Red Hat Enterprise Linux 6, configuration of the primary serial console is handled by /etc/init/serial.conf . To configure a getty running on a non-default serial console, you must now write an Upstart job instead of editing /etc/inittab . For example, if a getty on ttyS1 is desired, the following job file ( /etc/init/serial-ttyS1.conf ) would work: As in prior releases, you should still make sure that ttyS1 is in /etc/securetty if you wish to allow root logins on this getty. There are some features from prior releases that are not supported in the move to Upstart. Among these are: Custom runlevels 7, 8 and 9. These custom runlevels can no longer be used. Using /etc/shutdown.allow for defining who can shut the machine down. System z Performance Some of the default tunables in Red Hat Enterprise Linux 6 are currently not optimally configured for System z workloads. Under most circumstances, System z machines will perform better using the following recommendations. Dirty Ratio It is recommended that the dirty ratio be set to 40 (Red Hat Enterprise Linux 6 default 20) Changing this tunable tells the system to not spend as much process time too early to write out dirty pages. Add the following line to /etc/sysctl.conf to set this tunable: Scheduler To increase the average time a process runs continuously and also improve the cache utilization and server style workload throughput at minor latency cost it is recommended to set the following higher values in /etc/sysctl.conf. Additionally, deactivating the Fair-Sleepers feature improves performance on a System z machine. To achieve this, set the following value in /etc/sysctl.conf False positive hung task reports It is recommended to prevent false positive hung task reports (which are rare, but might occur under very heavy overcommitment ratios). This feature can be used, but to improve performance, deactivate it by default by setting the following parameter in /etc/sysctl.conf: irqbalance service on the POWER architecture On POWER architecture, the irqbalance service is recommended for automatic device Interrupt Request (IRQ) distribution across system CPUs to ensure optimal I/O performance. 
The irqbalance service is normally installed and configured to run during Red Hat Enterprise Linux 6 installation. However, under some circumstances, the irqbalance service is not installed by default. To confirm that the irqbalance service is running, execute the following command as root: If the service is running, command will return a message similar to: However, if the message lists the service as stopped , execute the following commands as root to start the irqbalance service: If the output of the service irqbalance status command lists irqbalance as an unrecognized service , use yum to install the irqbalance package, and then start the service. Note The system does not need to be restarted after starting the irqbalance service Setting the console log level Use of the LOGLEVEL parameter in /etc/sysconfig/init to set the console loglevel is no longer supported. To set the console loglevel in Red Hat Enterprise Linux 6, pass loglevel=<number> ' as a boot time parameter. Upgrading from pre-release versions Upgrading to Red Hat Enterprise Linux 6 from Red Hat Enterprise Linux 5 or from pre-release versions of Red Hat Enterprise Linux 6 is not supported. If an upgrade of this type is attempted issues may be encountered including upgrading Java/OpenJDK packages. To work around this, manually remove the old packages and reinstall. 2.1. Known Issues When a system is configured to require smart card authentication, and there is no smartcard currently plugged into the system, then users might see the debug message: This message can be safely ignored. Red Hat Enterprise Linux 6 Beta features Dovecot version 2.0. The configuration files used by Dovecot 2.0 are significantly different from those found in dovecot 1.0.x, the version shipped in releases of Red Hat Enterprise Linux. Specifically, /etc/dovecot.conf has been split into /etc/dovecot/dovecot.conf and /etc/dovecot/conf.d/*.conf Under some circumstances, the readahead service may cause the auditd service to stop. To work around this potential issue, disable the readahead collector by adding the following lines to the /etc/sysconfig/readahead configuration file: Alternatively, the readahead package can be removed entirely. An error exists in the communication process between the samba daemon and the Common Unix Printing System (CUPS) scheduler. Consequently, the first time a print job is submitted to a Red Hat Enterprise Linux 6 system via Server Message Block (SMB), a timeout will occur. To work around this issue, use the following command to create a CUPS certificate before the first print job is submitted: Under some circumstances, using the rhn_register command to register a system with the Red Hat Network (RHN) might fail. When this issue is encountered, the rhn_register command will return an error similar to: To work around this issue, set the following environment variable, then run the rhn_register command again: If a user has a .bashrc which outputs to stderr, the user will be unable to sftp into their account. From the user's point of view, the sftp session is immediately terminated after authentication. 2.1.1. Architecture Specific Known Issues 2.1.1.1. System z The minimum hardware requirement to run Red Hat Enterprise Linux Beta is IBM System z9 (or better). The system may not IPL (i.e. boot) on earlier System Z hardware (e.g. z900 or z990) 2.1.1.2. 
IBM POWER (64-bit) When network booting an IBM POWER5 series system, you may encounter an error such as: If the path that locates the kernel and ramdisk is greater than 63 characters long, it will overflow a firmware buffer and the firmware will drop into the debugger. POWER6 and POWER7 firmware includes a correction for this problem. Note that IBM POWER5 series is not a supported system. On some machines yaboot may not boot, returning the error message: To work around this issue, change real-base from to c00000 . Real-base can be obtained from OpenFirmware prompt with the printenv command and set with setenv command. Remote installs on IBM BladeCenter JS22 servers may encounter the following error message: To work around this issue, specify the following GUI parameters: Some HP Proliant servers may report incorrect CPU frequency values in /proc/cpuinfo or /sys/device/system/cpu/*/cpufreq. This is due to the firmware manipulating the CPU frequency without providing any notification to the operating system. To avoid this ensure that the "HP Power Regulator" option in the BIOS is set to "OS Control". An alternative available on more recent systems is to set "Collaborative Power Control" to "Enabled". filecap crashes with a segmentation fault when run directly on an empty file. For example: To work around this, run filecap on the directory that contains the empty file, and search the results for the required information. For example: A change in the package that the sos tool uses to determine the installed version of Red Hat Enterprise Linux will cause the tool to incorrectly identify the major release version. This adversely impacts a small number of non-default sos plugins and may cause incomplete information to be captured from the system when these plugins are enabled. The affected plugins are: general (only when using the non-default all_logs option) cluster (diagnostics may not be run) Users affected by this problem should retrieve any missing data manually from systems. | [
"This service maintains a getty on /dev/ttyS1. start on stopped rc RUNLEVEL=[2345] stop on starting runlevel [016] respawn exec /sbin/agetty /dev/ttyS1 115200 vt100-nav",
"vm.dirty_ratio = 40",
"kernel.sched_min_granularity_ns = 10000000 kernel.sched_wakeup_granularity_ns = 15000000 kernel.sched_tunable_scaling = 0 kernel.sched_latency_ns = 80000000",
"kernel.sched_features = 15834234",
"kernel.hung_task_timeout_secs = 0",
"service irqbalance status",
"irqbalance (pid 1234) is running",
"service irqbalance start chkconfig --level 345 irqbalance on",
"install irqbalance service irqbalance start",
"ERROR: pam_pkcs11.c:334: no suitable token available'",
"READAHEAD_COLLECT=\"no\" READAHEAD_COLLECT_ON_RPM=\"no\"",
"lpstat -E -s",
"rhn_register Segmentation fault (core dumped) or rhn_register ***MEMORY-ERROR***: rhn_register[11525]: GSlice: assertion failed: sinfo->n_allocated > 0 Aborted (core dumped)",
"G_SLICE=always-malloc",
"DEFAULT CATCH!, exception-handler=fff00300",
"Cannot load ramdisk.image.gz: Claim failed for initrd memory at 02000000 rc=ffffffff",
"No video available. Your server may be in an unsupported resolution/refresh rate.",
"video=SVIDEO-1:d radeon.svideo=0",
"filecap /path/to/empty_file Segmentation fault (core dumped)",
"filecap /path/to/ | grep empty_file"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/deployment |
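A consolidated sketch, run as root, of applying the System z performance tunables recommended in the deployment notes above. The values are the ones listed in that section; whether they suit a given workload is a judgment call, so treat this as an illustration rather than a required configuration.

cat >> /etc/sysctl.conf <<'EOF'
vm.dirty_ratio = 40
kernel.sched_min_granularity_ns = 10000000
kernel.sched_wakeup_granularity_ns = 15000000
kernel.sched_tunable_scaling = 0
kernel.sched_latency_ns = 80000000
kernel.sched_features = 15834234
kernel.hung_task_timeout_secs = 0
EOF
sysctl -p                  # load the new values without a reboot
sysctl vm.dirty_ratio      # spot-check a single setting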
Chapter 4. New features | Chapter 4. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 8.4. 4.1. Installer and image creation Anaconda replaces the original boot device NVRAM variable list with new values Previously, booting from NVRAM could lead to boot system failure due to the entries with the incorrect values in the boot device list. With this update the problem is fixed, but the list of devices is cleared when updating the boot device NVRAM variable. (BZ#1854307) Graphical installation of KVM virtual machines on IBM Z is now available When using the KVM hypervisor on IBM Z hardware, you can now use the graphical installation when creating virtual machines (VMs). Now, when a user executes the installation in KVM, and QEMU provides a virtio-gpu driver, the installer automatically starts the graphical console. The user can switch to text or VNC mode by appending the inst.text or inst.vnc boot parameters in the VM's kernel command line. (BZ#1609325) Warnings for deprecated kernel boot arguments Anaconda boot arguments without the inst. prefix (for example, ks , stage2 , repo and so on) are deprecated starting RHEL7. These arguments will be removed in the major RHEL release. With this release, appropriate warning messages are displayed when the boot arguments are used without the inst prefix. The warning messages are displayed in dracut when booting the installation and also when the installation program is started on a terminal. Following is a sample warning message that is displayed on a terminal: Deprecated boot argument %s must be used with the inst. prefix. Please use inst.%s instead. Anaconda boot arguments without inst. prefix have been deprecated and will be removed in a future major release. Following is a sample warning message that is displayed in dracut : USD1 has been deprecated. All usage of Anaconda boot arguments without the inst. prefix have been deprecated and will be removed in a future major release. Please use USD2 instead. ( BZ#1897657 ) 4.2. RHEL for Edge Support to specify the kernel name as customization for RHEL for Edge image types When creating OSTree commits for RHEL for Edge images, only one kernel package can be installed at a time, otherwise the commit creation fails in rpm-ostree . This prevents RHEL for Edge from adding alternative kernels, in particular, the real-time kernel ( kernel-rt ). With this enhancement, when creating a blueprint for RHEL for Edge image using the CLI, you can define the name of the kernel to be used in an image, by setting the customizations.kernel.name key. If you do not specify any kernel name, the image include the default kernel package. ( BZ#1960043 ) 4.3. Software management New fill_sack_from_repos_in_cache function is now supported in DNF API With this update, the new DNF API fill_sack_from_repos_in_cache function has been introduced which allows to load repositories only from the cached solv , solvx files, and the repomd.xml file. As a result, if the user manages dnf cache, it is possible to save resources without having duplicate information ( xml and solv ), and without processing xml into solv . ( BZ#1865803 ) createrepo_c now automatically adds modular metadata to repositories Previously, running the createrepo_c command on RHEL8 packages to create a new repository did not include modular repodata in this repository. Consequently, it caused various problems with repositories. 
With this update, createrepo_c : scans for modular metadata merges the found module YAML files into a single modular document modules.yaml automatically adds this document to the repository. As a result, adding modular metadata to repositories is now automatic and no longer has to be done as a separate step using the modifyrepo_c command. ( BZ#1795936 ) The ability to mirror a transaction between systems within DNF is now supported With this update, the user can store and replay a transaction within DNF. To store a transaction from DNF history into a JSON file, run the dnf history store command. To replay the transaction later on the same machine, or on a different one, run the dnf history replay command. Comps groups operations storing and replaying is supported. Module operations are not yet supported, and consequently, are not stored or replayed. ( BZ#1807446 ) createrepo_c rebased to version 0.16.2 The createrepo_c packages have been rebased to version 0.16.2 which provides the following notable changes over the version: Added module metadata support for createrepo_c . Fixed various memory leaks (BZ#1894361) The protect_running_kernel configuration option is now available. With this update, the protect_running_kernel configuration option for the dnf and microdnf commands has been introduced. This option controls whether the package corresponding to the running version of the kernel is protected from removal. As a result, the user can now disable protection of the running kernel. ( BZ#1698145 ) 4.4. Shells and command-line tools OpenIPMI rebased to version 2.0.29 The OpenIPMI packages have been upgraded to version 2.0.29. Notable changes over the version include: Fixed memory leak, variable binding, and missing error messages. Added support for IPMB . Added support for registration of individual group extension in the lanserv . (BZ#1796588) freeipmi rebased to version 1.6.6 The freeipmi packages have been upgraded to version 1.6.6. Notable changes over the version include: Fixed memory leaks and typos in the source code. Implemented workarounds for the following known issues: unexpected completion code. Dell Poweredge FC830. out of order packets with lan/rmcpplus ipmb . Added support for new Dell, Intel, and Gigabyte devices. Added support for the interpretation of system information and events. (BZ#1861627) opal-prd rebased to version 6.6.3 The opal-prd package has been rebased to version 6.6.3. Notable changes include: Added an offline worker process handle page for opal-prd daemon. Fixed the bug for opal-gard on POWER9P so that the system can identify the chip targets for gard records. Fixed false negatives in wait_for_all_occ_init() of occ command. Fixed OCAPI_MEM BAR values in hw/phys-map . Fixed warnings for Inconsistent MSAREA in hdata/memory.c . For sensors in occ: Fixed sensor values zero bug. Fixed the GPU detection code. Skipped sysdump retrieval in MPIPL boot. Fixed IPMI double-free in the Mihawk platform. Updated non-MPIPL scenario in fsp/dump . For hw/phb4: Verified AER support before initialising AER regs. Enabled error reporting. Added new smp-cable-connector VPD keyword in hdata . (BZ#1844427) opencryptoki rebased to version 3.15.1 The opencryptoki packages have been rebased to version 3.15.1. Notable changes include: Fixed segfault in C_SetPin . Fixed usage of EVP_CipherUpdate and EVP_CipherFinal . Added utility to migrate the token repository to FIPS compliant encryption. For pkcstok_migrate tool: Fixed NVTOK.DAT conversion on Little Endian platforms. 
Fixed private and public token object conversion on Little Endian platforms. Fixed storing of public token objects in the new data format. Fixed the parameter checking mechanism in dh_pkcs_derive . Corrected soft token model name. Replaced deprecated OpenSSL interfaces in mech_ec.c file and in ICA , TPM , and Soft tokens. Replaced deprecated OpenSSL AES/3DES interfaces in sw_crypt.c file. Added support for ECC mechanism in Soft token. Added IBM specific SHA3 HMAC and SHA512/224/256 HMAC mechanisms in the Soft token. Added support for key wrapping with CKM_RSA_PKCS in CCA. For EP11 crypto stack: Fixed ep11_get_keytype to recognize CKM_DES2_KEY_GEN . Fixed error trace in token_specific_rng . Enabled specific FW version and API in HSM simulation. Fixed Endian bug in X9.63 KDF . Added an error message for handling p11sak remove-key command . Fixed compiling issues with C++. Fixed the problem with C_Get/SetOperationState and digest contexts. Fixed pkcscca migration fails with usr/sb2 . (BZ#1847433) powerpc-utils rebased to version 1.3.8 The powerpc-utils packages have been rebased to version 1.3.8. Notable changes include: Commands that do not depend on Perl are now moved to the core subpackage. Added support for Linux Hybrid Network Virtualization. Updated safe bootlist. Added vcpustat utility. Added support for cpu-hotplug in lparstat command. Added switch to print Scaled metrics in lparstat command. Added helper function to calculate the delta, scaled timebase, and to derive PURR/SPURR values. For ofpathname utility: Improved the speed for l2of_scsi() . Fixed the udevadm location. Added partition to support l2od_ide() and l2of_scsi() . Added support for the plug ID of a SCSI/SATA host. Fixed the segfault condition on the unsupported connector type. Added tools to support migration of SR_IOV to a hybrid virtual network. Fixed the format-overflow warnings. Fixed the bash command substitution warning using the lsdevinfo utility. Fixed boot-time bonding interface cleanup. (BZ#1853297) New kernel cmdline option now generates network device name The net_id built-in from systemd-udevd service gains a new kernel cmdline option net.naming-scheme=SCHEME_VERSION . Based on the value of the SCHEME_VERSION , a user can select a version of the algorithm that will generate the network device name. For example, to use the features of net_id built-in in RHEL 8.4, set the value of the SCHEME_VERSION to rhel-8.4 . Similarly, you can set the value of the SCHEME_VERSION to any other minor release that includes the required change or fix. (BZ#1827462) 4.5. Infrastructure services Difference in default postfix-3.5.8 behavior For better RHEL-8 backward compatibility, the behavior of the postfix-3.5.8 update differs from the default upstream postfix-3.5.8 behavior. For the default upstream postfix-3.5.8 behavior, run the following commands: # postconf info_log_address_format=external # postconf smtpd_discard_ehlo_keywords= # postconf rhel_ipv6_normalize=yes For details, see the /usr/share/doc/postfix/README-RedHat.txt file. If the incompatible functionalities are not used or RHEL-8 backward compatibility is the priority, no steps are necessary. ( BZ#1688389 ) BIND rebased to version 9.11.26 The bind packages have been updated to version 9.11.26. Notable changes include: Changed the default EDNS buffer size from 4096 to 1232 bytes. This change will prevent the loss of fragmented packets in some networks. Increased the default value of max-recursion-queries from 75 to 100. Related to CVE-2020-8616. 
Fixed the problem of reused dead nodes in lib/dns/rbtdb.c file in named . Fixed the crashing problem in the named service when cleaning the reused dead nodes in the lib/dns/rbtdb.c file. Fixed the problem of configured multiple forwarders sometimes occurring in the named service. Fixed the problem of the named service of assigning incorrect signed zones with no DS record at the parent as bogus. Fixed the missing DNS cookie response over UDP . ( BZ#1882040 ) unbound configuration now provides enhanced logging output With this enhancement, the following three options have been added to the unbound configuration: log-servfail enables log lines that explain the reason for the SERVFAIL error code to clients. log-local-actions enables logging of all local zone actions. log-tag-queryreply enables tagging of log queries and log replies in the log file. ( BZ#1850460 ) Multiple vulnerabilities fixed with ghostscript-9.27 The ghostscript-9.27 release contains security fixes for the following vulnerabilities: CVE-2020-14373 CVE-2020-16287 CVE-2020-16288 CVE-2020-16289 CVE-2020-16290 CVE-2020-16291 CVE-2020-16292 CVE-2020-16293 CVE-2020-16294 CVE-2020-16295 CVE-2020-16296 CVE-2020-16297 CVE-2020-16298 CVE-2020-16299 CVE-2020-16300 CVE-2020-16301 CVE-2020-16302 CVE-2020-16303 CVE-2020-16304 CVE-2020-16305 CVE-2020-16306 CVE-2020-16307 CVE-2020-16308 CVE-2020-16309 CVE-2020-16310 CVE-2020-17538 ( BZ#1874523 ) Tuned rebased to version 2.15-1. Notable changes include: Added service plugin for Linux services control. Improved scheduler plugin. ( BZ#1874052 ) DNSTAP now records incoming detailed queries. DNSTAP provides an advanced way to monitor and log details of incoming name queries. It also records sent answers from the named service. Classic query logging of the named service has a negative impact on the performance of the named service. As a result, DNSTAP offers a way to perform continuous logging of detailed incoming queries without impacting the performance penalty. The new dnstap-read utility allows you to analyze the queries running on a different system. ( BZ#1854148 ) SpamAssassin rebased to version 3.4.4 The SpamAssassin package has been upgraded to version 3.4.4. Notable changes include: OLEVBMacro plugin has been added. New functions check_rbl_ns , check_rbl_rcvd , check_hashbl_bodyre , and check_hashbl_uris have been added. ( BZ#1822388 ) Key algorithm can be changed using the OMAPI shell With this enhancement, users can now change the key algorithm. The key algorithm that was hardcoded as HMAC-MD5 is not considered secure anymore. As a result, users can use the omshell command to change the key algorithm. ( BZ#1883999 ) Sendmail now supports TLSFallbacktoClear configuration With this enhancement, if the outgoing TLS connection fails, the sendmail client will fall back to the plaintext. This overcomes the TLS compatibility problems with the other parties. Red Hat ships sendmail with the TLSFallbacktoClear option disabled by default. ( BZ#1868041 ) tcpdump now allows viewing RDMA capable devices This enhancement enables support for capturing RDMA traffic with tcpdump . It allows users to capture and analyze offloaded RDMA traffic with the tcpdump tool. As a result, users can use tcpdump to view RDMA capable devices, capture RoCE and VMA traffic, and analyze its content. (BZ#1743650) 4.6. Security libreswan rebased to 4.3 The libreswan packages have been upgraded to version 4.3. 
Notable changes over the version include: IKE and ESP over TCP support (RFC 8229) IKEv2 Labeled IPsec support IKEv2 leftikeport/rightikeport support Experimental support for Intermediate Exchange Extended Redirect support for loadbalancing Default IKE lifetime changed from 1 h to 8 h for increased interoperability :RSA sections in the ipsec.secrets file are no longer required Fixed Windows 10 rekeying Fixed sending certificate for ECDSA authentication Fixes for MOBIKE and NAT-T ( BZ#1891128 ) IPsec VPN now supports TCP transport This update of the libreswan packages adds support for IPsec-based VPNs over TCP encapsulation as described in RFC 8229. The addition helps establish IPsec VPNs on networks that prevent traffic using Encapsulating Security Payload (ESP) and UDP. As a result, administrators can configure VPN servers and clients to use TCP either as a fallback or as the main VPN transport protocol. (BZ#1372050) Libreswan now supports IKEv2 for Labeled IPsec The Libreswan Internet Key Exchange (IKE) implementation now includes Internet Key Exchange version 2 (IKEv2) support of Security Labels for IPsec. With this update, systems that use security labels with IKEv1 can be upgraded to IKEv2. (BZ#1025061) libpwquality rebased to 1.4.4 The libpwquality package has been rebased to version 1.4.4. This release includes multiple bug fixes and translation updates. Most notably, the following setting options have been added to the pwquality.conf file: retry enforce_for_root local_users_only ( BZ#1537240 ) p11-kit rebased to 0.23.19 The p11-kit packages have been upgraded from version 0.23.14 to version 0.23.19. The new version fixes several bugs and provides various enhancements, notably: Fixed CVE-2020-29361, CVE-2020-29362, CVE-2020-29363 security issues. p11-kit now supports building through the meson build system. (BZ#1887853) pyOpenSSL rebased to 19.0.0 The pyOpenSSL packages have been rebased to upstream version 19.0.0. This version provides bug fixes and enhancements, most notably: Improved TLS 1.3 support with openssl version 1.1.1. No longer raising an error when trying to add a duplicate certificate with X509Store.add_cert Improved handling of X509 certificates containing NUL bytes in components (BZ#1629914) SCAP Security Guide rebased to 0.1.54 The scap-security-guide packages have been rebased to upstream version 0.1.54, which provides several bug fixes and improvements. Most notably: The Operating System Protection Profile (OSPP) has been updated in accordance with the Protection Profile for General Purpose Operating Systems for Red Hat Enterprise Linux 8.4. The ANSSI family of profiles based on the ANSSI BP-028 recommendations from the French National Security Agency (ANSSI), has been introduced. The content contains profiles implementing rules of the Minimum, Intermediary and Enhanced hardening levels. The Security Technical Implementation Guide ( STIG ) security profile has been updated, and it implements rules from the recently-released version V1R1. ( BZ#1889344 ) OpenSCAP rebased to 1.3.4 The OpenSCAP packages have been rebased to upstream version 1.3.4. Notable fixes and enhancements include: Fixed certain memory issues that were causing systems with large amounts of files to run out of memory. OpenSCAP now treats GPFS as a remote file system. Proper handling of OVALs with circular dependencies between definitions. Improved yamlfilecontent : updated yaml-filter , extended the schema and probe to be able to work with a set of values in maps. 
Fixed numerous warnings (GCC and Clang). Numerous memory management fixes. Numerous memory leak fixes. Platform elements in XCCDF files are now properly resolved in accordance with the XCCDF specification. Improved compatibility with uClibc. Local and remote file system detection methods improved. Fixed dpkginfo probe to use pkgCacheFile instead of manually opening the cache. OpenSCAP scan report is now a valid HTML5 document. Fixed unwanted recursion in the file probe. ( BZ#1887794 ) The RHEL 8 STIG security profile updated to version V1R1 With the release of the RHBA-2021:1886 advisory, the DISA STIG for Red Hat Enterprise Linux 8 profile in the SCAP Security Guide has been updated to align with the latest version V1R1 . The profile is now also more stable and better aligns with the RHEL 8 STIG (Security Technical Implementation Guide) manual benchmark provided by the Defense Information Systems Agency (DISA). This first iteration brings approximately 60% of coverage with regards to the STIG. You should use only the current version of this profile because the draft profile is no longer valid. Warning Automatic remediation might render the system non-functional. Run the remediation in a test environment first. ( BZ#1918742 ) New DISA STIG profile compatible with Server with GUI installations A new profile, DISA STIG with GUI , has been added to the SCAP Security Guide with the release of the RHBA-2021:4098 advisory. This profile is derived from the DISA STIG profile and is compatible with RHEL installations that selected the Server with GUI package group. The previously existing stig profile was not compatible with Server with GUI because DISA STIG demands uninstalling any Graphical User Interface. However, this can be overridden if properly documented by a Security Officer during evaluation. As a result, the new profile helps when installing a RHEL system as a Server with GUI aligned with the DISA STIG profile. ( BZ#2005431 ) Profiles for ANSSI-BP-028 Minimal, Intermediary and Enhanced levels are now available in SCAP Security Guide With the new profiles, you can harden the system to the recommendations from the French National Security Agency (ANSSI) for GNU/Linux Systems at the Minimal, Intermediary and Enhanced hardening levels. As a result, you can configure and automate compliance of your RHEL 8 systems according to your required ANSSI hardening level by using the ANSSI Ansible Playbooks and the ANSSI SCAP profiles. ( BZ#1778188 ) scap-workbench can now scan remote systems using sudo privileges The scap-workbench GUI tool now supports scanning remote systems using passwordless sudo access. This feature reduces the security risk imposed by supplying root's credentials. Be cautious when using scap-workbench with passwordless sudo access and the remediate option. Red Hat recommends dedicating a well-secured user account just for the OpenSCAP scanner. ( BZ#1877522 ) rhel8-tang container image is now available With this release, the rhel8/rhel8-tang container image is available in the registry.redhat.io catalog. The container image provides Tang-server decryption capabilities for Clevis clients that run either in OpenShift Container Platform (OCP) clusters or in separate virtual machines. (BZ#1913310) Clevis rebased to version 15 The clevis packages have been rebased to upstream version 15. 
This version provides many bug fixes and enhancements over the version, most notably: Clevis now produces a generic initramfs and no longer automatically adds the rd.neednet=1 parameter to the kernel command line. Clevis now properly handles incorrect configurations that use the sss pin, and the clevis encrypt sss sub-command returns outputs that indicate the error cause. ( BZ#1887836 ) Clevis no longer automatically adds rd.neednet=1 Clevis now correctly produces a generic initrd (initial ramdisk) without host-specific configuration options by default. As a result, Clevis no longer automatically adds the rd.neednet=1 parameter to the kernel command line. If your configuration uses the functionality, you can either enter the dracut command with the --hostonly-cmdline argument or create the clevis.conf file in the /etc/dracut.conf.d and add the hostonly_cmdline=yes option to the file. A Tang binding must be present during the initrd build process. ( BZ#1853651 ) New package: rsyslog-udpspoof The rsyslog-udpspoof subpackage has been added back to RHEL 8. This module is similar to the regular UDP forwarder, but permits relaying syslog between different network segments while maintaining the source IP in the syslog packets. ( BZ#1869874 ) fapolicyd rebased to 1.0.2 The fapolicyd packages have been rebased to upstream version 1.0.2. This version provides many bug fixes and enhancements over the version, most notably: Added the integrity configuration option for enabling integrity checks through: Comparing file sizes Comparing SHA-256 hashes Integrity Measurement Architecture (IMA) subsystem The fapolicyd RPM plugin now registers any system update that is handled by either the YUM package manager or the RPM Package Manager. Rules now can contain GID in subjects. You can now include rule numbers in debug and syslog messages. ( BZ#1887451 ) New RPM plugin notifies fapolicyd about changes during RPM transactions This update of the rpm packages introduces a new RPM plugin that integrates the fapolicyd framework with the RPM database. The plugin notifies fapolicyd about installed and changed files during an RPM transaction. As a result, fapolicyd now supports integrity checking. Note that the RPM plugin replaces the YUM plugin because its functionality is not limited to YUM transactions but covers also changes by RPM. ( BZ#1923167 ) 4.7. Networking The PTP capabilities output format of the ethtool utility has changed Starting with RHEL 8.4, the ethtool utility uses the netlink interface instead of the ioctl() system call to communicate with the kernel. Consequently, when you use the ethtool -T <network_controller> command, the format of Precision Time Protocol (PTP) values changes. Previously, with the ioctl() interface, ethtool translated the capability bit names by using an ethtool -internal string table and, the ethtool -T <network_controller> command displayed, for example: With the netlink interface, ethtool receives the strings from the kernel. These strings do not include the internal SOF_TIMESTAMPING_* names. Therefore, ethtool -T <network_controller> now displays, for example: If you use the PTP capabilities output of ethtool in scripts or applications, update them accordingly. 
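The sample outputs referred to in the PTP note above are not reproduced in this extract, so the following is only an illustrative sketch of the two formats; the interface name is a placeholder and the capability list depends on the NIC and driver.

ethtool -T enp1s0
# ioctl-based format (RHEL 8.3 and earlier), capability names carry the internal identifiers:
#         hardware-transmit     (SOF_TIMESTAMPING_TX_HARDWARE)
#         software-transmit     (SOF_TIMESTAMPING_TX_SOFTWARE)
#         hardware-receive      (SOF_TIMESTAMPING_RX_HARDWARE)
# netlink-based format (RHEL 8.4 and later), the same capabilities without the SOF_TIMESTAMPING_* names:
#         hardware-transmit
#         software-transmit
#         hardware-receive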
(JIRA:RHELDOCS-18188) XDP is conditionally supported Red Hat supports the eXpress Data Path (XDP) feature only if all of the following conditions apply: You load the XDP program on an AMD or Intel 64-bit architecture You use the libxdp library to load the program into the kernel The XDP program does not use the XDP hardware offloading In RHEL 8.4, XDP_TX and XDP_REDIRECT return codes are now supported in XDP programs. For details about unsupported XDP features, see XDP features that are available as Technology Preview ( BZ#1952421 ) NetworkManager rebased to version 1.30.0 The NetworkManager packages have been upgraded to upstream version 1.30.0, which provides a number of enhancements and bug fixes over the version: The ipv4.dhcp-reject-servers connection property has been added to define from which DHCP server IDs NetworkManager should reject lease offers. The ipv4.dhcp-vendor-class-identifier connection property has been added to send a custom Vendor Class Identifier DHCP option value. The active_slave bond option has been deprecated. Instead, set the primary option in the controller connection. The nm-initrd-generator utility now supports MAC addresses to indicate interfaces. The nm-initrd-generator utility generator now supports creating InfiniBand connections. The timeout of the NetworkManager-wait-online service has been increased to 60 seconds. The ipv4.dhcp-client-id=ipv6-duid connection property has been added to be compliant to RFC4361 . Additional ethtool offload features have been added. Support for the WPA3 Enterprise Suite-B 192-bit mode has been added. Support for virtual Ethernet ( veth ) devices has been added. For further information about notable changes, read the upstream release notes: NetworkManager 1.30.0 NetworkManager 1.28.0 ( BZ#1878783 ) The iproute2 utility introduces traffic control actions to add MPLS headers before Ethernet header With this enhancement, the iproute2 utility offers three new traffic control ( tc ) actions: mac_push - The act_mpls module provides this action to add MPLS labels before the original Ethernet header. push_eth - The act_vlan module provides this action to build an Ethernet header at the beginning of the packet. pop_eth - The act_vlan module provides this action to drop the outer Ethernet header. These tc actions help in implementing layer 2 virtual private network (L2VPN) by adding multiprotocol label switching (MPLS) labels before Ethernet headers. You can use these actions while adding tc filters to the network interfaces. Red Hat provides these actions as unsupported Technology Preview, because MPLS itself is a Technology Preview feature. For more information about these actions and their parameters, refer to the tc-mpls(8) and tc-vlan(8) man pages. (BZ#1861261) The nmstate API is now fully supported Nmstate, which was previously a Technology Preview, is a network API for hosts and fully supported in RHEL 8.4. The nmstate packages provide a library and the nmstatectl command-line utility to manage host network settings in a declarative manner. The networking state is described by a predefined schema. Reporting of the current state and changes to the desired state both conform to the schema. For further details, see the /usr/share/doc/nmstate/README.md file and the sections about nmstatectl in the Configuring and managing networking documentation. 
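As a brief illustration of the declarative workflow described in the nmstate note above, the sketch below captures the current state and applies a small desired-state file with nmstatectl. The interface name eth1 is a placeholder and the YAML covers only a fragment of the schema.

nmstatectl show > current-state.yml          # capture the current network state
cat > desired-state.yml <<'EOF'
interfaces:
- name: eth1
  type: ethernet
  state: up
  ipv4:
    enabled: true
    dhcp: true
EOF
nmstatectl apply desired-state.yml           # apply the desired state declaratively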
(BZ#1674456) New package: rshim The rhsim package provides the Mellanox BlueField rshim user-space driver, which enables accessing the rshim resources on the BlueField SmartNIC target from the external host machine. The current version of the rshim user-space driver implements device files for boot image push and virtual console access. In addition, it creates a virtual network interface to connect to the BlueField target and provides a way to access internal rshim registers. Note that in order for the virtual console or virtual network interface to be operational, the target must be running a tmfifo driver. (BZ#1744737) iptraf-ng rebased to 1.2.1 The iptraf-ng packages have been rebased to upstream version 1.2.1, which provides several bug fixes and improvements. Most notably: The iptraf-ng application no longer causes 100% CPU usage when showing the detailed statistics of a deleted interface. The unsafe handling arguments of printf() functions have been fixed. Partial support for IP over InfiniBand (IPoIB) interface has been added. Because the kernel does not provide the source address on the interface, you cannot use this feature in the LAN station monitor mode. Packet capturing abstraction has been added to allow iptraf-ng to capture packets at multi-gigabit speed. You can now scroll using the Home , End , Page up , and Page down keyboard keys. The application now shows the dropped packet count. ( BZ#1906097 ) 4.8. Kernel Kernel version in RHEL 8.4 Red Hat Enterprise Linux 8.4 is distributed with the kernel version 4.18.0-305. See also Important Changes to External Kernel Parameters and Device Drivers . ( BZ#1839151 ) Extended Berkeley Packet Filter for RHEL 8.4 The Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions. The virtual machine executes a special assembly-like code. The eBPF bytecode first loads to the kernel, followed by its verification, code translation to the native machine code with just-in-time compilation, and then the virtual machine executes the code. Red Hat ships numerous components that utilize the eBPF virtual machine. Each component is in a different development phase, and thus not all components are currently fully supported. In RHEL 8.4, the following eBPF components are supported: The BPF Compiler Collection (BCC) tools package, which provides tools for I/O analysis, networking, and monitoring of Linux operating systems using eBPF . The BCC library which allows the development of tools similar to those provided in the BCC tools package. The eBPF for Traffic Control (tc) feature, which enables programmable packet processing inside the kernel network data path. The eXpress Data Path (XDP) feature, which provides access to received packets before the kernel networking stack processes them, is supported under specific conditions. The libbpf package, which is crucial for bpf related applications like bpftrace and bpf/xdp development. The xdp-tools package, which contains userspace support utilities for the XDP feature, is now supported on the AMD and Intel 64-bit architectures. This includes the libxdp library, the xdp-loader utility for loading XDP programs, the xdp-filter example program for packet filtering, and the xdpdump utility for capturing packets from a network interface with XDP enabled. 
Note that all other eBPF components are available as Technology Preview, unless a specific component is indicated as supported. The following notable eBPF components are currently available as Technology Preview: The bpftrace tracing language The AF_XDP socket for connecting the eXpress Data Path (XDP) path to user space For more information regarding the Technology Preview components, see Technology Previews . ( BZ#1780124 ) New package: kmod-redhat-oracleasm This update adds the new kmod-redhat-oracleasm package, which provides the kernel module part of the ASMLib utility. Oracle Automated Storage Management (ASM) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. (BZ#1827015) The xmon program changes to support Secure Boot and kernel_lock resilience against attacks If the Secure Boot mechanism is disabled, you can set the xmon program into read-write mode ( xmon=rw ) on the kernel command-line. However, if you specify xmon=rw and boot into Secure Boot mode, the kernel_lockdown feature overrides xmon=rw and changes it to read-only mode. The additional behavior of xmon depending on Secure Boot enablement is listed below: Secure Boot is on: xmon=ro (default) A stack trace is printed Memory read works Memory write is blocked Secure Boot is off: Possibility to set xmon=rw A stack trace is always printed Memory read always works Memory write is permitted only if xmon=rw These changes to xmon behavior aim to support the Secure Boot and kernel_lock resilience against attackers with root permissions. For information how to configure kernel command-line parameters, see Configuring kernel command-line parameters on the Customer Portal. (BZ#1952161) Cornelis Omni-Path Architecture (OPA) Host Software Omni-Path Architecture (OPA) host software is fully supported in Red Hat Enterprise Linux 8.4. OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment. For instructions on installing Omni-Path Architecture, see: Cornelis Omni-Path Fabric Software Release Notes file. ( BZ#1960412 ) SLAB cache merging disabled by default The CONFIG_SLAB_MERGE_DEFAULT kernel configuration option has been disabled, and now SLAB caches are not merged by default. This change aims to enhance the allocator's reliability and traceability of cache usage. If the slab-cache merging behavior was desirable, the user can re-enable it by adding the slub_merge parameter to the kernel command-line. For more information on how to set the kernel command-line parameters, see the Configuring kernel command-line parameters on Customer Portal. (BZ#1871214) The ima-evm-utils package rebased to version 1.3.2 The ima-evm-utils package has been upgraded to version 1.3.2, which provides multiple bug fixes and enhancements. Notable changes include: Added support for handling the Trusted Platform Module (TPM2) multi-banks feature Extended the boot aggregate value to Platform Configuration Registers (PCRs) 8 and 9 Preloaded OpenSSL engine through a CLI parameter Added support for Intel Task State Segment (TSS2) PCR reading Added support for the original Integrity Measurement Architecture (IMA) template Both the libimaevm.so.0 and libimaevm.so.2 libraries are part of ima-evm-utils . Users of libimaevm.so.0 will not be affected, when their more recent applications use libimaevm.so.2 . 
(BZ#1868683) Levelling IMA and EVM features across supported CPU architectures All CPU architectures, except ARM, have a similar level of feature support for Integrity Measurement Architecture (IMA) and Extended Verification Module (EVM) technologies. The enabled functionalities are different for each CPU architecture. The following are the most significant changes for each supported CPU architecture: IBM Z: IMA appraise and trusted keyring enablement. AMD64 and Intel 64: specific architecture policy in secure boot state. IBM Power System (little-endian): specific architecture policy in secure and trusted boot state. SHA-256 as default hash algorithm for all supported architectures. For all architectures, the measurement template has changed to IMA-SIG The template includes the signature bits when present. Its format is d-ng|n-ng|sig . The goal of this update is to decrease the level of feature difference in IMA and EVM, so that userspace applications can behave equally across all supported CPU architectures. (BZ#1869758) Proactive compaction is now included in RHEL 8 as disabled-by-default With ongoing workload activity, system memory becomes fragmented. The fragmentation can result in capacity and performance problems. In some cases, program errors are also possible. Thereby, the kernel relies on a reactive mechanism called memory compaction. The original design of the mechanism is conservative, and the compaction activity is initiated on demand of allocation request. However, reactive behavior tends to increase the allocation latency if the system memory is already heavily fragmented. Proactive compaction improves the design by regularly initiating memory compaction work before a request for allocation is made. This enhancement increases the chances that memory allocation requests find the physically contiguous blocks of memory without the need of memory compaction producing those on-demand. As a result, latency for specific memory allocation requests is lowered. Warning Proactive compaction can result in increased compaction activity. This might have serious, system-wide impact, because memory pages that belong to different processes are moved and remapped. Therefore, enabling proactive compaction requires utmost care to avoid latency spikes in applications. (BZ#1848427) EDAC support has been added in RHEL 8 With this update, RHEL 8 supports the Error Detection and Correction (EDAC) kernel module set in 8th and 9th generation Intel Core Processors (CoffeeLake). The EDAC kernel module mainly handles Error Code Correction (ECC) memory and detect and report PCI bus parity errors. (BZ#1847567) A new package: kpatch-dnf The kpatch-dnf package provides a DNF plugin, which makes it possible to subscribe a RHEL system to kernel live patch updates. The subscription will affect all kernels currently installed on the system, including kernels that will be installed in the future. For more details about kpatch-dnf , see the dnf-kpatch(8) manual page or the Managing, monitoring, and updating the kernel documentation. (BZ#1798711) A new cgroups controller implementation for slab memory A new implementation of slab memory controller for the control groups technology is now available in RHEL 8. Currently, a single memory slab can contain objects owned by different memory control group . The slab memory controller brings improvement in slab utilization (up to 45%) and enables to shift the memory accounting from the page level to the object level. 
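To see the ima-sig template entries (d-ng|n-ng|sig) mentioned above on a running system, the measurement list can be read from securityfs. This is only a quick inspection sketch; it assumes securityfs is mounted at /sys/kernel/security and that an IMA measurement policy is active.

mount | grep securityfs                                      # confirm securityfs is available
head /sys/kernel/security/ima/ascii_runtime_measurements     # first entries of the IMA measurement list
cat /sys/kernel/security/ima/policy                          # currently loaded IMA policy, where readable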
Also, this change eliminates each set of duplicated per-CPU and per-node slab caches for each memory control group and establishes one common set of per-CPU and per-node slab caches for all memory control groups . As a result, you can achieve a significant drop in the total kernel memory footprint and observe positive effects on memory fragmentation. Note that the new and more precise memory accounting requires more CPU time. However, the difference seems to be negligible in practice. (BZ#1877019) Time namespace has been added in RHEL 8 The time namespace enables the system monotonic and boot-time clocks to work with per-namespace offsets on AMD64, Intel 64, and the 64-bit ARM architectures. This feature is suited for changing the date and time inside Linux containers and for in-container adjustments of clocks after restoration from a checkpoint. As a result, users can now independently set time for each individual container. (BZ#1548297) New feature: Free memory page returning With this update, the RHEL 8 host kernel is able to return memory pages that are not used by its virtual machines (VMs) back to the hypervisor. This improves the stability and resource efficiency of the host. Note that for memory page returning to work, it must be configured in the VM, and the VM must also use the virtio_balloon device. (BZ#1839055) Supports changing the sorting order in perf top With this update, perf top can now sort samples by arbitrary event column in case multiple events in a group are sampled, instead of sorting by the first column. As a result, pressing a number key sorts the table by the matching data column. Note The column numbering starts from 0 . Using the --group-sort-idx command line option, it is possible to sort by the column number. (BZ#1851933) The kabi_whitelist package has been renamed to kabi_stablelist In accordance with Red Hat commitment to replacing problematic language, we renamed the kabi_whitelist package to kabi_stablelist in the RHEL 8.4 release. (BZ#1867910, BZ#1886901 ) bpf rebased to version 5.9 The bpf kernel technology in RHEL 8 has been brought up-to-date with its upstream counterpart from the kernel v5.9. The update provides multiple bug fixes and enhancements. Notable changes include: Added Berkeley Packet Filter (BPF) iterator for map elements and to iterate all BPF programs for efficient in-kernel inspection. Programs in the same control group (cgroup) can share the cgroup local storage map. BPF programs can run on socket lookup. The SO_KEEPALIVE and related options are available to the bpf_setsockopt() helper. Note that some BPF programs may need changes to their source code. (BZ#1874005) The bcc package rebased to version 0.16.0 The bcc package has been upgraded to version 0.16.0, which provides multiple bug fixes and enhancements. Notable changes include: Added utilities klockstat and funcinterval Fixes in various parts of the tcpconnect manual page Fix to make the tcptracer tool output show SPORT and DPORT columns for IPv6 addresses Fix broken dependencies (BZ#1879411) bpftrace rebased to version 0.11.0 The bpftrace package has been upgraded to version 0.11.0, which provides multiple bug fixes and enhancements. 
Notable changes include: Added utilities threadsnoop , tcpsynbl , tcplife , swapin , setuids , and naptime Fixed failures to run of the tcpdrop.bt and syncsnoop.bt tools Fixed a failure to load the Berkeley Packet Filter (BPF) program on IBM Z architectures Fixed a symbol lookup error (BZ#1879413) libbpf rebased to version 0.2.0.1 The libbpf package has been upgraded to version 0.2.0.1, which provides multiple bug fixes and enhancements. Notable changes include: Added support for accessing Berkeley Packet Filter (BPF) map fields in the bpf_map struct from programs that have BPF Type Format (BTF) struct access Added BPF ring buffer Added bpf iterator infrastructure Improved bpf_link observability ( BZ#1919345 ) perf now supports adding or removing tracepoints from a running collector without having to stop or restart perf Previously, to add or remove tracepoints from an instance of perf record , the perf process had to be stopped. As a consequence, performance data that occurred during the time the process was stopped was not collected and, therefore, lost. With this update, you can dynamically enable and disable tracepoints being collected by perf record via the control pipe interface without having to stop the perf record process. (BZ#1844111) The perf tool now supports recording and displaying absolute timestamps for trace data With this update, perf script can now record and display trace data with absolute timestamps. Note: To display trace data with absolute timestamps, the data must be recorded with the clock ID specified. To record data with absolute timestamps, specify the clock ID: To display trace data recorded with the specified clock ID, execute the following command: (BZ#1811839) dwarves rebased to version 1.19.1 The dwarves package has been upgraded to version 1.19.1, which provides multiple bug fixes and enhancements. Notably, this update introduces a new way of checking functions from the DWARF debug data with related ftrace entries to ensure a subset of ftrace functions is generated. ( BZ#1903566 ) perf now supports circular buffers that use specified events to trigger snapshots With this update, you can create custom circular buffers that write data to a perf.data file when an event you specify is detected. As a result, perf record can run continuously in the system background without generating excess overhead by continuously writing data to a perf.data file, and only recording data you are interested in. To create a custom circular buffer using the perf tool that records event specific snapshots, use the following command: (BZ#1844086) Kernel DRBG and Jitter entropy source are compliant to NIST SP 800-90A and NIST SP 800-90B Kernel Deterministic Random Bit Generator (DRBG) and Jitter entropy source are now compliant to recommendation for random number generation using DRBG (NIST SP 800-90A) and recommendation for the entropy sources used for random bit generation (NIST SP 800-90B) specifications. As a result, applications in FIPS mode can use these sources as FIPS-compliant randomness and noise sources. (BZ#1905088) kdump now supports Virtual Local Area Network tagged team network interface This update adds support to configure Virtual Local Area Network tagged team interface for kdump . As a result, this feature now enables kdump to use a Virtual Local Area Network tagged team interface to dump a vmcore file. 
(BZ#1844941) kernel-rt source tree has been updated to RHEL 8.4 tree The kernel-rt source has been updated to use the latest Red Hat Enterprise Linux kernel source tree. The real-time patch set has also been updated to the latest upstream version, v5.10-rt7. Both of these updates provide a number of bug fixes and enhancements. (BZ#1858099, BZ#1858105) The stalld package is now added to RHEL 8.4 distribution This update adds the stalld package to RHEL 8.4.0. stalld is a daemon that monitors threads on a system running low latency applications. It checks for job threads that have been on a run-queue without being scheduled onto a CPU for a specified threshold. When it detects a stalled thread, stalld temporarily changes the scheduling policy to SCHED_DEADLINE and assigns the thread a slice of CPU time to make forward progress. When the time slice completes or the thread blocks, the thread goes back to its original scheduling policy. (BZ#1875037) Support for CPU hotplug in the hv_24x7 and hv_gpci PMUs With this update, PMU counters correctly react to the hot-plugging of a CPU. As a result, if a hv_gpci event counter is running on a CPU that gets disabled, the counting redirects to another CPU. (BZ#1844416) Metrics for POWERPC hv_24x7 nest events are now available Metrics for POWERPC hv_24x7 nest events are now available for perf . By aggregating multiple events, these metrics provide a better understanding of the values obtained from perf counters and how effectively the CPU is able to process the workload. (BZ#1780258) hwloc rebased to version 2.2.0 The hwloc package has been upgraded to version 2.2.0, which provides the following change: The hwloc functionality can report details on Nonvolatile Memory Express (NVMe) drives including total disk size and sector size. ( BZ#1841354 ) The igc driver is now fully supported The igc Intel 2.5G Ethernet Linux wired LAN driver was introduced in RHEL 8.1 as a Technology Preview. Starting with RHEL 8.4, it is fully supported on all architectures. The ethtool utility also supports igc wired LANs. (BZ#1495358) 4.9. File systems and storage RHEL installation now supports creating a swap partition of size 16 TiB Previously, when installing RHEL, the installer created a swap partition of maximum 128 GB for automatic and manual partitioning. With this update, for automatic partitioning, the installer continues to create a swap partition of maximum 128 GB, but in case of manual partitioning, you can now create a swap partition of 16 TiB. ( BZ#1656485 ) Surprise removal of NVMe devices With this enhancement, you can surprise remove NVMe devices from the Linux operating system without notifying the operating system beforehand. This will enhance the serviceability of NVMe devices because no additional steps are required to prepare the devices for orderly removal, which ensures the availability of servers by eliminating server downtime. Note the following: Surprise removal of NVMe devices requires kernel-4.18.0-193.13.2.el8_2.x86_64 version or later. Additional requirements from the hardware platform or the software running on the platform might be necessary for successful surprise removal of NVMe devices. Surprise removing an NVMe device that is critical to the system operation is not supported. For example, you cannot remove an NVMe device that contains the operating system or a swap partition. 
(BZ#1634655)

Stratis filesystem symlink paths have changed
With this enhancement, Stratis filesystem symlink paths have changed from /stratis/<stratis-pool>/<filesystem-name> to /dev/stratis/<stratis-pool>/<filesystem-name>. Consequently, all existing Stratis symlinks must be migrated to use the new symlink paths. Use the included stratis_migrate_symlinks.sh migration script or reboot your system to update the symlink paths. If you manually changed the systemd unit files or the /etc/fstab file to automatically mount Stratis filesystems, you must update them with the new symlink paths.
Note: If you do not update your configuration with the new Stratis symlink paths, or if you temporarily disable the automatic mounts, the boot process might not complete the next time you reboot or start your system. ( BZ#1798244 )

Stratis now supports binding encrypted pools to a supplementary Clevis encryption policy
With this enhancement, you can now bind encrypted Stratis pools to Network Bound Disk Encryption (NBDE) using a Tang server, or to the Trusted Platform Module (TPM) 2.0. Binding an encrypted Stratis pool to NBDE or TPM 2.0 facilitates automatic unlocking of pools. As a result, you can access your Stratis pools without having to provide the kernel keyring description after each system reboot. Note that binding a Stratis pool to a supplementary Clevis encryption policy does not remove the primary kernel keyring encryption. ( BZ#1868100 )

New mount options to control when DAX is enabled on XFS and ext4 file systems
This update introduces new mount options which, when combined with the FS_XFLAG_DAX inode flag, provide finer-grained control of the Direct Access (DAX) mode for files on XFS and ext4 file systems. In prior releases, DAX was enabled for the entire file system using the dax mount option. Now, the direct access mode can be enabled on a per-file basis. The on-disk flag, FS_XFLAG_DAX, is used to selectively enable or disable DAX for a particular file or directory. The dax mount option dictates whether or not the flag is honored:
-o dax=inode - follow FS_XFLAG_DAX. This is the default when no dax option is specified.
-o dax=never - never enable DAX, ignore FS_XFLAG_DAX.
-o dax=always - always enable DAX, ignore FS_XFLAG_DAX.
-o dax - a legacy option which is an alias for "dax=always". This option may be removed in the future, so "-o dax=always" is preferred.
You can set the FS_XFLAG_DAX flag by using the xfs_io utility's chattr command; an example appears after the file systems entries below. (BZ#1838876, BZ#1838344)

SMB Direct is now supported
With this update, the SMB client now supports SMB Direct. (BZ#1887940)

New API for mounting filesystems has been added
With this update, a new API for mounting filesystems based on an internal kernel structure called a filesystem context ( struct fs_context ) has been added into RHEL 8.4, allowing greater flexibility in communication of mount parameters between userspace, the VFS, and the file system. Along with this, the following system calls are available for operating on the file system context: fsopen() - creates a blank filesystem configuration context within the kernel for the filesystem named in the fsname parameter, adds it into creation mode, and attaches it to a file descriptor, which it then returns. fsmount() - takes the file descriptor returned by fsopen() and creates a mount object for the file system root specified there. fsconfig() - supplies parameters to and issues commands against a file system configuration context as set up by the fsopen(2) or fspick(2) system calls.
fspick() - creates a new file system configuration context within the kernel and attaches a pre-existing superblock to it so that it can be reconfigured. move_mount() - moves a mount from one location to another; it can also be used to attach an unattached mount created by fsmount() or open_tree() with the OPEN_TREE_CLONE system call. open_tree() - picks the mount object specified by the pathname and attaches it to a new file descriptor or clones it and attaches the clone to the file descriptor. Note that the old API based on the mount() system call is still supported. For additional information, see the Documentation/filesystems/mount_api.txt file in the kernel source tree. (BZ#1622041) Discrepancy in vfat file system mtime no longer occurs With this update, the discrepancy in the vfat file system mtime between in-memory and on-disk write times is no longer present. This discrepancy was caused by a difference between in-memory and on-disk mtime metadata, which no longer occurs. (BZ#1533270) RHEL 8.4 now supports close_range() system call With this update, the close_range() system call was backported to RHEL 8.4. This system call closes all file descriptors in a given range effectively, preventing timing problems which are present when closing a wide range of file descriptors sequentially if applications configure very large limits. (BZ#1900674) Support for user extended attributes through the NFSv4.2 protocol has been added This update adds NFSV4.2 client-side and server-side support for user extended attributes (RFC 8276) and newly includes the following protocol extensions: New operations: - GETXATTR - get an extended attribute of a file - SETXATTR - set an extended attribute of a file - LISTXATTR - list extended attributes of a file - REMOVEXATTR - remove an extended attribute of a file New error codes: - NFS4ERR-NOXATTR - xattr does not exist - NFS4ERR_XATTR2BIG - xattr value is too big New attribute: - xattr_support - per-fs read-only attribute determines whether xattrs are supported. When set to True , the object's file system supports extended attributes. (BZ#1888214) 4.10. High availability and clusters Noncritical resources in colocation constraints are now supported With this enhancement, you can configure a colocation constraint such that if the dependent resource of the constraint reaches its migration threshold for failure, Pacemaker will leave that resource offline and keep the primary resource on its current node rather than attempting to move both resources to another node. To support this behavior, colocation constraints now have an influence option, which can be set to true or false , and resources have a critical meta-attribute, which can also be set to true or false . The value of the critical resource meta option determines the default value of the influence option for all colocation constraints involving the resource as a dependent resource. When the influence colocation constraint option has a value of true Pacemaker will attempt to keep both the primary and dependent resource active. If the dependent resource reaches its migration threshold for failures, both resources will move to another node, if possible. When the influence colocation option has a value of false , Pacemaker will avoid moving the primary resource as a result of the status of the dependent resource. In this case, if the dependent resource reaches its migration threshold for failures, it will stop if the primary resource is active and can remain on its current node. 
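The following examples relate to the file systems and storage entries above (per-file DAX and user extended attributes over NFSv4.2); they are sketches, and the device, mount point, and file names are placeholders.

Mount an XFS file system so that the per-file flag is honored, then set FS_XFLAG_DAX on a single file with the xfs_io chattr command (use 'chattr -x' to clear the flag again):

# mount -o dax=inode /dev/sdb1 /mnt/data
# xfs_io -c 'chattr +x' /mnt/data/dbfile

Set and read a user extended attribute on a file exported over NFSv4.2:

# setfattr -n user.origin -v "scanner-42" /mnt/nfs/report.txt
# getfattr -d -m user /mnt/nfs/report.txt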
By default, the value of the critical resource meta option is set to true , which in turn determines that the default value of the influence option is true . This preserves the behavior where Pacemaker attempted to keep both resources active. ( BZ#1371576 ) New number data type supported by Pacemaker rules PCS now supports a data type of number , which you can use when defining Pacemaker rules in any PCS command that accepts rules. Pacemaker rules implement number as a double-precision floating-point number and integer as a 64-bit integer. (BZ#1869399) Ability to specify a custom clone ID when creating a clone resource or promotable clone resource When you create a clone resource or a promotable clone resource, the clone resource is named resource-id -clone by default. If that ID is already in use, PCS adds the suffix - integer , starting with an integer value of 1 and incrementing by one for each additional clone. You can now override this default by specifying a name for a clone resource ID or promotable clone resource ID with the clone-id option when creating a clone resource with the pcs resource create or the pcs resource clone command. For information on creating clone resources, see Creating cluster resources that are active on multiple nodes . ( BZ#1741056 ) New command to display Corosync configuration You can now print the contents of the corosync.conf file in several output formats with the new pcs cluster config [show] command. By default, the pcs cluster config command uses the text output format, which displays the Corosync configuration in a human-readable form, with the same structure and option names as the pcs cluster setup and pcs cluster config update commands. ( BZ#1667066 ) New command to modify the Corosync configuration of an existing cluster You can now modify the parameters of the corosync.conf file with the new pcs cluster config update command. You can use this command, for example, to increase the totem token to avoid fencing during temporary system unresponsiveness. For information on modifying the corosync.conf file, see Modifying the corosync.conf file with the pcs command . ( BZ#1667061 ) Enabling and disabling Corosync traffic encryption in an existing cluster Previously, you could configure Corosync traffic encryption only when creating a new cluster. With this update: You can change the configuration of the Corosync crypto cipher and hash with the pcs cluster config update command. You can change the Corosync authkey with the pcs cluster authkey corosync command. ( BZ#1457314 ) New crypt resource agent for shared and encrypted GFS2 file systems RHEL HA now supports a new crypt resource agent, which allows you to configure a LUKS encrypted block device that can be used to provide shared and encrypted GFS2 file systems. Using the crypt resource is currently supported only with GFS2 file systems. For information on configuring an encrypted GFS2 file system, see Configuring an encrypted GFS2 file system in a cluster . (BZ#1471182) 4.11. Dynamic programming languages, web and database servers A new module: python39 RHEL 8.4 introduces Python 3.9, provided by the new module python39 and the ubi8/python-39 container image. Notable enhancements compared to Python 3.8 include: The merge ( | ) and update ( |= ) operators have been added to the dict class. Methods to remove prefixes and suffixes have been added to strings. Type hinting generics have been added to certain standard types, such as list and dict . 
The IANA Time Zone Database is now available through the new zoneinfo module. Python 3.9 and packages built for it can be installed in parallel with Python 3.8 and Python 3.6 on the same system. To install packages from the python39 module, use, for example: The python39:3.9 module stream will be enabled automatically. To run the interpreter, use, for example: See Installing and using Python for more information. Note that Red Hat will continue to provide support for Python 3.6 until the end of life of RHEL 8. Similarly to Python 3.8, Python 3.9 will have a shorter life cycle; see Red Hat Enterprise Linux 8 Application Streams Life Cycle . (BZ#1877430) Changes in the default separator for the Python urllib parsing functions To mitigate the Web Cache Poisoning CVE-2021-23336 in the Python urllib library, the default separator for the urllib.parse.parse_qsl and urllib.parse.parse_qs functions is being changed from both ampersand ( & ) and semicolon ( ; ) to only an ampersand. This change has been implemented in Python 3.6 with the release of RHEL 8.4, and will be backported to Python 3.8 and Python 2.7 in the following minor release of RHEL 8. The change of the default separator is potentially backwards incompatible, therefore Red Hat provides a way to configure the behavior in Python packages where the default separator has been changed. In addition, the affected urllib parsing functions issue a warning if they detect that a customer's application has been affected by the change. For more information, see the Mitigation of Web Cache Poisoning in the Python urllib library (CVE-2021-23336) . Python 3.9 is unaffected and already includes the new default separator ( & ), which can be changed only by passing the separator parameter when calling the urllib.parse.parse_qsl and urllib.parse.parse_qs functions in Python code. (BZ#1935686, BZ#1928904 ) A new module stream: swig:4.0 RHEL 8.4 introduces the Simplified Wrapper and Interface Generator (SWIG) version 4.0, available as a new module stream, swig:4.0 . Notable changes over the previously released SWIG 3.0 include: The only supported Python versions are: 2.7 and 3.2 to 3.8. The Python module has been improved: the generated code has been simplified and most optimizations are now enabled by default. Support for Ruby 2.7 has been added. PHP 7 is now the only supported PHP version; support for PHP 5 has been removed. Performance has been significantly improved when running SWIG on large interface files. Support for a command-line options file (also referred to as a response file) has been added. Support for JavaScript Node.js versions 2 to 10 has been added. Support for Octave versions 4.4 to 5.1 has been added. To install the swig:4.0 module stream, use: If you want to upgrade from the swig:3.0 stream, see Switching to a later stream . For information about the length of support for the swig module streams, see the Red Hat Enterprise Linux 8 Application Streams Life Cycle . ( BZ#1853639 ) A new module stream: subversion:1.14 RHEL 8.4 introduces a new module stream, subversion:1.14 . Subversion 1.14 is the most recent Long Term Support (LTS) release. Notable changes since Subversion 1.10 distributed in RHEL 8.0 include: Subversion 1.14 includes Python 3 bindings for automation and integration of Subversion into the customer's build and release infrastructure. A new svnadmin rev-size command enables users to determine the total size of a revision. 
A new svnadmin build-repcache command enables administrators to populate the rep-cache database with missing entries. A new experimental command has been added to provide an overview of the current working copy status. Various improvements to the svn log , svn info , and svn list commands have been implemented. For example, svn list --human-readable now uses human-readable units for file sizes. Significant improvements to svn status for large working copies have been made. Compatibility information: Subversion 1.10 clients and servers interoperate with Subversion 1.14 servers and clients. However, certain features might not be available unless both client and server are upgraded to the latest version. Repositories created under Subversion 1.10 can be successfully loaded in Subversion 1.14 . Subversion 1.14 distributed in RHEL 8 enables users to cache passwords in plain text on the client side. This behaviour is the same as Subversion 1.10 but different from the upstream release of Subversion 1.14 . The experimental Shelving feature has been significantly changed, and it is incompatible with shelves created in Subversion 1.10 . See the upstream documentation for details and upgrade instructions. The interpretation of path-based authentication configurations with both global and repository-specific rules has changed in Subversion 1.14 . See the upstream documentation for details on affected configurations. To install the subversion:1:14 module stream, use: If you want to upgrade from the subversion:1.10 stream, see Switching to a later stream . For information about the length of support for the subversion module streams, see the Red Hat Enterprise Linux 8 Application Streams Life Cycle . ( BZ#1844947 ) A new module stream: redis:6 Redis 6 , an advanced key-value store, is now available as a new module stream, redis:6 . Notable changes over Redis 5 include: Redis now supports SSL on all channels. Redis now supports Access Control List (ACL), which defines user permissions for command calls and key pattern access. Redis now supports a new RESP3 protocol, which returns more semantical replies. Redis can now optionally use threads to handle I/O. Redis now offers server-side support for client-side caching of key values. The Redis active expire cycle has been improved to enable faster eviction of expired keys. Redis 6 is compatible with Redis 5 , with the exception of this backward incompatible change: When a set key does not exist, the SPOP <count> command no longer returns null. In Redis 6 , the command returns an empty set in this scenario, similar to a situation when it is called with a 0 argument. To install the redis:6 module stream, use: If you want to upgrade from the redis:5 stream, see Switching to a later stream . For information about the length of support for the redis module streams, see the Red Hat Enterprise Linux 8 Application Streams Life Cycle . (BZ#1862063) A new module stream: postgresql:13 RHEL 8.4 introduces PostgreSQL 13 , which provides a number of new features and enhancements over version 12. Notable changes include: Performance improvements resulting from de-duplication of B-tree index entries Improved performance for queries that use aggregates or partitioned tables Improved query planning when using extended statistics Parallelized vacuuming of indexes Incremental sorting Note that support for Just-In-Time (JIT) compilation, available in upstream since PostgreSQL 11 , is not provided by the postgresql:13 module stream. See also Using PostgreSQL . 
To install the postgresql:13 stream, use: If you want to upgrade from an earlier postgresql stream within RHEL 8, follow the procedure described in Switching to a later stream and then migrate your PostgreSQL data as described in Migrating to a RHEL 8 version of PostgreSQL . For information about the length of support for the postgresql module streams, see the Red Hat Enterprise Linux 8 Application Streams Life Cycle . (BZ#1855776) A new module stream: mariadb:10.5 MariaDB 10.5 is now available as a new module stream, mariadb:10.5 . Notable enhancements over the previously available version 10.3 include: MariaDB now uses the unix_socket authentication plug-in by default. The plug-in enables users to use operating system credentials when connecting to MariaDB through the local Unix socket file. MariaDB supports a new FLUSH SSL command to reload SSL certificates without a server restart. MariaDB adds mariadb-* named binaries and mysql* symbolic links pointing to the mariadb-* binaires. For example, the mysqladmin , mysqlaccess , and mysqlshow symlinks point to the mariadb-admin , mariadb-access , and mariadb-show binaries, respectively. MariaDB supports a new INET6 data type for storing IPv6 addresses. MariaDB now uses the Perl Compatible Regular Expressions (PCRE) library version 2. The SUPER privilege has been split into several privileges to better align with each user role. As a result, certain statements have changed required privileges. MariaDB adds a new global variable, binlog_row_metadata , as well as system variables and status variables to control the amount of metadata logged. The default value of the eq_range_index_dive_limit variable has been changed from 0 to 200 . A new SHUTDOWN WAIT FOR ALL SLAVES server command and a new mysqladmin shutdown --wait-for-all-slaves option have been added to instruct the server to shut down only after the last binlog event has been sent to all connected replicas. In parallel replication, the slave_parallel_mode variable now defaults to optimistic . The InnoDB storage engine introduces the following changes: InnoDB now supports an instant DROP COLUMN operation and enables users to change the column order. Defaults of the following variables have been changed: innodb_adaptive_hash_index to OFF and innodb_checksum_algorithm to full_crc32 . Several InnoDB variables have been removed or deprecated. MariaDB Galera Cluster has been upgraded to version 4 with the following notable changes: Galera adds a new streaming replication feature, which supports replicating transactions of unlimited size. During an execution of streaming replication, a cluster replicates a transaction in small fragments. Galera now fully supports Global Transaction ID (GTID). The default value for the wsrep_on option in the /etc/my.cnf.d/galera.cnf file has changed from 1 to 0 to prevent end users from starting wsrep replication without configuring required additional options. See also Using MariaDB . To install the mariadb:10.5 stream, use: If you want to upgrade from the mariadb:10.3 module stream, see Upgrading from MariaDB 10.3 to MariaDB 10.5 . For information about the length of support for the mariadb module streams, see the Red Hat Enterprise Linux 8 Application Streams Life Cycle . (BZ#1855781) MariaDB 10.5 provides the PAM plug-in version 2.0 MariaDB 10.5 adds a new version of the Pluggable Authentication Modules (PAM) plug-in. 
The PAM plug-in version 2.0 performs PAM authentication using a separate setuid root helper binary, which enables MariaDB to utilize additional PAM modules. In MariaDB 10.5 , the Pluggable Authentication Modules (PAM) plug-in and its related files have been moved to a new package, mariadb-pam . This package contains both PAM plug-in versions: version 2.0 is the default, and version 1.0 is available as the auth_pam_v1 shared object library. Note that the mariadb-pam package is not installed by default with the MariaDB server. To make the PAM authentication plug-in available in MariaDB 10.5 , install the mariadb-pam package manually. See also known issue PAM plug-in version 1.0 does not work in MariaDB . ( BZ#1936842 ) A new package: mysql-selinux RHEL 8.4 adds a new mysql-selinux package that provides an SELinux module with rules for the MariaDB and MySQL databases. The package is installed by default with the database server. The module's priority is set to 200 . (BZ#1895021) python-PyMySQL rebased to version 0.10.1 The python-PyMySQL package, which provides the pure-Python MySQL client library, has been updated to version 0.10.1. The package is included in the python36 , python38 , and python39 modules. Notable changes include: This update adds support for the ed25519 and caching_sha2_password authentication mechanisms. The default character set in the python38 and python39 modules is utf8mb4 , which aligns with upstream. The python36 module preserves the default latin1 character set to maintain compatibility with earlier versions of this module. In the python36 module, the /usr/lib/python3.6/site-packages/pymysql/tests/ directory is no longer available. ( BZ#1820628 , BZ#1885641 ) A new package: python3-pyodbc This update adds the python3-pyodbc package to RHEL 8. The pyodbc Python module provides access to Open Database Connectivity (ODBC) databases. This module implements the Python DB API 2.0 specification and can be used with third-party ODBC drivers. For example, you can now use the Performance Co-Pilot ( pcp ) to monitor performance of the SQL Server. (BZ#1881490) A new package: micropipenv A new micropipenv package is now available. It provides a lightweight wrapper for the pip package installer to support Pipenv and Poetry lock files. Note that the micropipenv package is distributed in the AppStream repository and is provided under the Compatibility level 4. For more information, see the Red Hat Enterprise Linux 8 Application Compatibility Guide . (BZ#1849096) New packages: py3c-devel and py3c-docs RHEL 8.4 introduces new py3c-devel and py3c-docs packages, which simplify porting C extensions to Python 3. These packages include a detailed guide and a set of macros for easier porting. Note that the py3c-devel and py3c-docs packages are distributed through the unsupported CodeReady Linux Builder (CRB) repository . (BZ#1841060) Enhanced ProxyRemote directive for configuring httpd The ProxyRemote configuration directive in the Apache HTTP Server has been enhanced to optionally take user name and password credentials. These credentials are used for authenticating to the remote proxy using HTTP Basic authentication. This feature has been backported from httpd 2.5 . 
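The yum commands referenced by the module entries in this section were not preserved in this text; the lines below follow the standard RHEL 8 package and module installation syntax, run as root. Note that the Subversion stream name is subversion:1.14.

Install Python 3.9 packages and run the interpreter:

# yum install python39 python39-pip
$ python3.9

Install the other module streams described above:

# yum module install swig:4.0
# yum module install subversion:1.14
# yum module install redis:6
# yum module install postgresql:13
# yum module install mariadb:10.5

Install the optional PAM plug-in package for MariaDB 10.5:

# yum install mariadb-pam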
(BZ#1869576) Non-end-entity certificates can be used with the SSLProxyMachineCertificateFile and SSLProxyMachineCertificatePath httpd directives With this update, you can use non-end-entity (non-leaf) certificates, such as a Certificate Authority (CA) or intermediate certificate, with the SSLProxyMachineCertificateFile and SSLProxyMachineCertificatePath configuration directives in the Apache HTTP Server. The Apache HTTP server now treats such certificates as trusted CAs, as if they were used with the SSLProxyMachineCertificateChainFile directive. Previously, if non-end-entity certificates were used with the SSLProxyMachineCertificateFile and SSLProxyMachineCertificatePath directives, httpd failed to start with a configuration error. (BZ#1883648) A new SecRemoteTimeout directive in the mod_security module Previously, you could not modify the default timeout for retrieving remote rules in the mod_security module for the Apache HTTP Server. With this update, you can set a custom timeout in seconds using the new SecRemoteTimeout configuration directive. When the timeout has been reached, httpd now fails with an error message Timeout was reached . Note that in this scenario, the error message also contains Syntax error even if the configuration file is syntactically valid. The httpd behavior upon timeout depends on the value of the SecRemoteRulesFailAction configuration directive (the default value is Abort ). ( BZ#1824859 ) The mod_fcgid module can now pass up to 1024 environment variables to an FCGI server process With this update, the mod_fcgid module for the Apache HTTP Server can pass up to 1024 environment variables to a FastCGI (FCGI) server process. The limit of 64 environment variables could cause applications running on the FCGI server to malfunction. ( BZ#1876525 ) perl-IO-String is now available in the AppStream repository The perl-IO-String package, which provides the Perl IO::String module, is now distributed through the supported AppStream repository. In releases of RHEL 8, the perl-IO-String package was available in the unsupported CodeReady Linux Builder repository. (BZ#1890998) A new package: quota-devel RHEL 8.4 introduces the quota-devel package, which provides header files for implementing the quota Remote Procedure Call (RPC) service. Note that the quota-devel package is distributed through the unsupported CodeReady Linux Builder (CRB) repository . ( BZ#1868671 ) 4.12. Compilers and development tools The glibc library now supports glibc-hwcaps subdirectories for loading optimized shared library implementations On certain architectures, hardware upgrades sometimes caused glibc to load libraries with baseline optimizations, rather than optimized libraries for the hardware generation. Additionally, when running on AMD CPUs, optimized libraries were not loaded at all. With this enhancement, glibc supports locating optimized library implementations in the glibc-hwcaps subdirectories. The dynamic loader checks for library files in the sub-directories based on the CPU in use and its hardware capabilities. This feature is available on following architectures: IBM Power Systems (little endian), IBM Z, 64-bit AMD and Intel. (BZ#1817513) The glibc dynamic loader now activates selected audit modules at run time Previously, the binutils link editor ld supported the --audit option to select audit modules for activation at run time, but the glibc dynamic loader ignored the request. 
With this update, the glib dynamic loader no longer ignores the request, and loads the indicated audit modules. As a result, it is possible to activate audit modules for specific programs without writing wrapper scripts or using similar mechanisms. ( BZ#1871385 ) glibc now provides improved performance on IBM POWER9 This update introduces new implementations of the functions strlen , strcpy , stpcpy , and rawmemchr for IBM POWER9. As a result, these functions now execute faster on IBM POWER9 hardware which leads to performance gains. ( BZ#1871387 ) Optimized performance of memcpy and memset on IBM Z With this enhancement, the core library implementation for the memcpy and memset APIs were adjusted to accelerate both small (< 64KiB) and larger data copies on IBM Z processors. As a result, applications working with in-memory data now benefit from significantly improved performance across a wide variety of workloads. ( BZ#1871395 ) GCC now supports the ARMv8.1 LSE atomic instructions With this enhancement, the GCC compiler now supports Large System Extensions (LSE), atomic instructions added with the ARMv8.1 specification. These instructions provide better performance in multi-threaded applications than the ARMv8.0 Load-Exclusive and Store-Exclusive instructions. (BZ#1821994) GCC now emits vector alignment hints for certain IBM Z systems This update enables the GCC compiler to emit vector load and store alignment hints for IBM z13 processors. To use this enhancement the assembler must support such hints. As a result, users now benefit from improved performance of certain vector operations. (BZ#1850498) Dyninst rebased to version 10.2.1 The Dyninst binary analysis and modification tool has been updated to version 10.2.1. Notable bug fixes and enhancements include: Support for the elfutils debuginfod client library. Improved parallel binary code analysis. Improved analysis and instrumentation of large binaries. ( BZ#1892001 ) elfutils rebased to version 0.182 The elfutils package has been updated to version 0.182. Notable bug fixes and enhancements include: Recognizes the DW_CFA_AARCH64_negate_ra_state instruction. When Pointer Authentication Code (PAC) is not enabled, you can use DW_CFA_AARCH64_negate_ra_state to unwind code that is compiled for PAC on the 64-bit ARM architecture. elf_update now fixes bad sh_addralign values in sections that have set the SHF_COMPRESSED flag. debuginfod-client now supports kernel ELF images compressed with ZSTD. debuginfod has a more efficient package traversal, tolerating various errors during scanning. The grooming process is more visible and interruptible, and provides more Prometheus metrics. ( BZ#1875318 ) SystemTap rebased to version 4.4 The SystemTap instrumentation tool has been updated to version 4.4, which provides multiple bug fixes and enhancements. Notable changes include: Performance and stability improvements to user-space probing. Users can now access implicit thread local storage variables on these architectures: AMD64, Intel 64, IBM Z, the little-endian variant of IBM Power Systems. Initial support for processing of floating point values. Improved concurrency for scripts using global variables. The locks required to protect concurrent access to global variables have been optimized so that they span the smallest possible critical region. New syntax for defining aliases with both a prologue and an epilogue. New @probewrite predicate. syscall arguments are writable again. 
For further information about notable changes, read the upstream release notes before updating. ( BZ#1875341 )

Valgrind now supports IBM z14 instructions
With this update, the Valgrind tool suite supports instructions for the IBM z14 processor. As a result, you can now use the Valgrind tools to debug programs using the z14 vector instructions and the miscellaneous z14 instruction set. (BZ#1504123)

CMake rebased to version 3.18.2
The CMake build system has been upgraded from version 3.11.4 to version 3.18.2. It is available in RHEL 8.4 as the cmake-3.18.2-8.el8 package. To use CMake on a project that requires the version 3.18.2 or less, use the command cmake_minimum_required(VERSION x.y.z) . For further information on new features and deprecated functionalities, see the CMake Release Notes . ( BZ#1816874 )

libmpc rebased to version 1.1.0
The libmpc package has been rebased to version 1.1.0, which provides several enhancements and bug fixes over the previous version. For details, see GNU MPC 1.1.0 release notes . ( BZ#1835193 )

Updated GCC Toolset 10
GCC Toolset 10 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. Notable changes introduced with RHEL 8.4 include:
The GCC compiler has been updated to the upstream version, which provides multiple bug fixes.
elfutils has been updated to version 0.182.
Dyninst has been updated to version 10.2.1.
SystemTap has been updated to version 4.4.
The following tools and versions are provided by GCC Toolset 10:
GCC 10.2.1
GDB 9.2
Valgrind 3.16.0
SystemTap 4.4
Dyninst 10.2.1
binutils 2.35
elfutils 0.182
dwz 0.12
make 4.2.1
strace 5.7
ltrace 0.7.91
annobin 9.29
To install GCC Toolset 10, run the installation command as root; to run a single tool or an entire shell session with the GCC Toolset 10 versions of the tools, use the scl utility. The commands are shown in the example block after this section. For more information, see Using GCC Toolset .
The GCC Toolset 10 components are available in two container images: rhel8/gcc-toolset-10-toolchain , which includes the GCC compiler, the GDB debugger, and the make automation tool, and rhel8/gcc-toolset-10-perftools , which includes the performance monitoring tools, such as SystemTap and Valgrind. To pull a container image, run the podman command as root, also shown in the example block after this section. Note that only the GCC Toolset 10 container images are now supported. Container images of earlier GCC Toolset versions are deprecated. For details regarding the container images, see Using the GCC Toolset container images . (BZ#1918055)

GCC Toolset 10: GCC now supports bfloat16
In GCC Toolset 10, the GCC compiler now supports the bfloat16 extension through ACLE Intrinsics. This enhancement provides high-performance computing. (BZ#1656139)

GCC Toolset 10: GCC now supports ENQCMD and ENQCMDS instructions on Intel Sapphire Rapids processors
In GCC Toolset 10, the GNU Compiler Collection (GCC) now supports the ENQCMD and ENQCMDS instructions, which you can use to submit work descriptors to devices automatically. To apply this enhancement, run GCC with the -menqcmd option. (BZ#1891998)

GCC Toolset 10: Dyninst rebased to version 10.2.1
In GCC Toolset 10, the Dyninst binary analysis and modification tool has been updated to version 10.2.1. Notable bug fixes and enhancements include: Support for the elfutils debuginfod client library. Improved parallel binary code analysis. Improved analysis and instrumentation of large binaries.
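The commands referenced in the GCC Toolset 10 entry above follow the documented gcc-toolset pattern; the package, collection, and image names come from that entry, and the gcc --version invocation is only an example.

Install GCC Toolset 10:

# yum install gcc-toolset-10

Run a single tool from GCC Toolset 10:

# scl enable gcc-toolset-10 'gcc --version'

Run a shell session in which the GCC Toolset 10 tools override the system versions:

# scl enable gcc-toolset-10 bash

Pull one of the GCC Toolset 10 container images:

# podman pull registry.redhat.io/rhel8/gcc-toolset-10-toolchain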
( BZ#1892007 ) GCC Toolset 10: elfutils rebased to version 0.182 In GCC Toolset 10, the elfutils package has been updated to version 0.182. Notable bug fixes and enhancements include: Recognizes the DW_CFA_AARCH64_negate_ra_state instruction. When Pointer Authentication Code (PAC) is not enabled, you can use DW_CFA_AARCH64_negate_ra_state to unwind code that is compiled for PAC on the 64-bit ARM architecture. elf_update now fixes bad sh_addralign values in sections that have set the SHF_COMPRESSED flag. debuginfod-client now supports kernel ELF images compressed with ZSTD. debuginfod has a more efficient package traversal, tolerating various errors during scanning. The grooming process is more visible and interruptible, and provides more Prometheus metrics. ( BZ#1879758 ) Go Toolset rebased to version 1.15.7 Go Toolset has been upgraded to 1.15.7. Notable enhancements include: Linking is now faster and requires less memory due to the newly implemented object file format and increased concurrency of internal phases. With this enhancement, internal linking is now the default. To disable this setting, use the compiler flag -ldflags=-linkmode=external . Allocating small objects has been improved for high core counts, including worst-case latency. Treating the CommonName field on X.509 certificates as a host name when no Subject Alternative Names are specified is now disabled by default. To enable it, add the value x509ignoreCN=0 to the GODEBUG environment variable. GOPROXY now supports skipping proxies that return errors. Go now includes the new package time/tzdata . It enables you to embed the timezone database into a program even if the timezone database is not available on your local system. For more information on Go Toolset, go to Using Go Toolset . (BZ#1870531) Rust Toolset rebased to version 1.49.0 Rust Toolset has been updated to version 1.49.0. Notable changes include: You can now use the path of a rustdoc page item to link to it in rustdoc. The rust test framework now hides thread output. Output of failed tests still show in the terminal. You can now use [T; N]: TryFrom<Vec<T>> to turn a vector into an array of any length. You can now use slice::select_nth_unstable to perform ordered partitioning. This function is also available with the following variants: slice::select_nth_unstable_by provides a comparator function. slice::select_nth_unstable_by_key provides a key extraction function. You can now use ManuallyDrop as the type of a union field. It is also possible to use impl Drop for Union to add the Drop trait to existing unions. This makes it possible to define unions where certain fields need to be dropped manually. Container images for Rust Toolset have been deprecated and Rust Toolset has been added to the Universal Base Images (UBI) repositories. For further information, see Using Rust Toolset . (BZ#1896712) LLVM Toolset rebased to version 11.0.0 LLVM Toolset has been upgraded to version 11.0.0. Notable changes include: Support for the -fstack-clash-protection command-line option has been added to the AMD and Intel 64-bit architectures, IBM Power Systems, Little Endian, and IBM Z. This new compiler flag protects from stack-clash attacks by automatically checking each stack page. The new compiler flag ffp-exception-behavior={ignore,maytrap,strict} enables the specification of floating-point exception behavior. The default setting is ignore . The new compiler flag ffp-model={precise,strict,fast} allows the simplification of single purpose floating-point options. 
The default setting is precise . The new compiler flag -fno-common is now enabled by default. With this enhancement, code written in C using tentative variable definitions in multiple translation units now triggers multiple-definition linker errors. To disable this setting, use the -fcommon flag. Container images for LLVM Toolset have been deprecated and LLVM Toolset has been added to the Universal Base Images (UBI) repositories. For more information, see Using LLVM Toolset . (BZ#1892716) pcp rebased to version 5.2.5 The pcp package has been upgraded to version 5.2.5. Notable changes include: SQL Server metrics support via a secure connection. eBPF/BCC netproc module with per-process network metrics. pmdaperfevent(1) support for the hv_24x7 core-level and hv_gpci event metrics. New Linux process accounting metrics, Linux ZFS metrics, Linux XFS metric, Linux kernel socket metrics, Linux multipath TCP metrics, Linux memory and ZRAM metrics, and S.M.A.R.T. metric support for NVM Express disks. New pcp-htop(1) utility to visualize the system and process metrics. New pmrepconf(1) utility to generate the pmrep/pcp2xxx configurations. New pmiectl(1) utility for controlling the pmie services. New pmlogctl(1) utility for controlling the pmlogger services. New pmlogpaste(1) utility for writing log string metrics. New pcp-atop(1) utility to process accounting statistics and per-process network statistics reporting. New pmseries(1) utility to query functions, language extensions, and REST API. New pmie(1) rules for detecting OOM kills and socket connection saturation. Bug fixes in the pcp-atopsar(1) , pcp-free(1) , pcp-dstat(1) , pmlogger(1) , and pmchart(1) utilities. REST API and C API support for per-context derived metrics. Improved OpenMetrics metric metadata (units, semantics). Rearranged installed /var file system layouts extensively. ( BZ#1854035 ) Accessing remote hosts through a central pmproxy for the Vector data source in grafana-pcp In some environments, the network policy does not allow connections from the dashboard viewer's browser to the monitored hosts directly. This update makes it possible to customize the hostspec in order to connect to a central pmproxy , which forwards the requests to the individual hosts. ( BZ#1845592 ) grafana rebased to version 7.3.6 The grafana package has been upgraded to version 7.3.6. Notable changes include: New panel editor and new data transformations feature Improved time zone support Default provisioning path now changed from the /usr/share/grafana/conf/provisioning to the /etc/grafana/provisioning directory. You can configure this setting in the /etc/grafana/grafana.ini configuration file. For more information, see What's New in Grafana v7.0 , What's New in Grafana v7.1 , What's New in Grafana v7.2 , and What's New in Grafana v7.3 . ( BZ#1850471 ) grafana-pcp rebased to version 3.0.2 The grafana-pcp package has been upgraded to version 3.0.2. Notable changes include: Redis: Supports creating an alert in Grafana. Using the label_values(metric, label) in a Grafana variable query is deprecated due to performance reasons. The label_values(label) query is still supported. Vector: Supports derived metrics, which allows the usage of arithmetic operators and statistical functions inside a query. For more information, see the pmRegisterDerived(3) man page. Configurable hostspec, where you can access remote Performance Metrics Collector Daemon (PMCDs) through a central pmproxy . Automatically configures the unit of the panel. 
Dashboards: Detects potential performance issues and shows possible solutions with the checklist dashboards, using the Utilization Saturation and Errors (USE) method. New MS SQL server dashboard, eBPF/BCC dashboard, and container overview dashboard with the CGroups v2 . All dashboards are now located in the Dashboards tab in the Datasource settings pages and are not imported automatically. Upgrade notes: Update the Grafana configuration file: Edit the /etc/grafana/grafana.ini Grafana configuration file and make sure that the following option is set: Restart the Grafana server: ( BZ#1854093 ) Active Directory authentication for accessing SQL Server metrics in PCP With this update, a system administrator can configure pmdamssql(1) to connect securely to the SQL Server metrics using Active Directory (AD) authentication. ( BZ#1847808 ) grafana-container rebased to version 7.3.6 The rhel8/grafana container image provides Grafana. Grafana is an open source utility with metrics dashboard, and graphic editor for Graphite, Elasticsearch, OpenTSDB, Prometheus, InfluxDB, and Performance Co-Pilot (PCP). The grafana-container package has been upgraded to version 7.3.6. Notable changes include: The grafana package is now updated to version 7.3.6. The grafana-pcp package is now updated to version 3.0.2. The rebase updates the rhel8/grafana image in the Red Hat Container Registry. To pull this container image, execute the following command: ( BZ#1916154 ) pcp-container rebased to version 5.2.5 The rhel8/pcp container image provides Performance Co-Pilot, which is a system performance analysis toolkit. The pcp-container package has been upgraded to version 5.2.5. Notable changes include: The pcp package is now updated to version 5.2.5. Introduced a new PCP_SERVICES environment variable, which specifies a comma-separated list of PCP services to start inside the container. The rebase updates the rhel8/pcp image in the Red Hat Container Registry. To pull this container image, execute the following command: ( BZ#1916155 ) JDK Mission Control rebased to version 8.0.0 The JDK Mission Control (JMC) profiler for HotSpot JVMs, provided by the jmc:rhel8 module stream, has been upgraded to version 8.0.0. Notable enhancements include: The Treemap viewer has been added to the JOverflow plug-in for visualizing memory usage by classes. The Threads graph has been enhanced with more filtering and zoom options. JDK Mission Control now provides support for opening JDK Flight Recorder recordings compressed with the LZ4 algorithm. New columns have been added to the Memory and TLAB views to help you identify areas of allocation pressure. Graph view has been added to improve visualization of stack traces. The Percentage column has been added to histogram tables. JMC in RHEL 8 requires JDK version 8 or later to run. Target Java applications must run with at least OpenJDK version 8 so that JMC can access JDK Flight Recorder features. The jmc:rhel8 module stream has two profiles: The common profile, which installs the entire JMC application The core profile, which installs only the core Java libraries ( jmc-core ) To install the common profile of the jmc:rhel8 module stream, use: Change the profile name to core to install only the jmc-core package. (BZ#1919283) 4.13. Identity Management Making Identity Management more inclusive Red Hat is committed to using conscious language. 
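The commands referenced by the Grafana, container image, and JDK Mission Control entries above were not carried over; the lines below are the standard forms, run as root where the prompt is #.

Restart the Grafana server after editing /etc/grafana/grafana.ini:

# systemctl restart grafana-server

Pull the rhel8/grafana and rhel8/pcp container images:

# podman pull registry.redhat.io/rhel8/grafana
# podman pull registry.redhat.io/rhel8/pcp

Install the common profile of the jmc:rhel8 module stream (use the core profile name instead to install only jmc-core):

# yum module install jmc:rhel8/common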
In Identity Management, planned terminology replacements include: block list replaces blacklist allow list replaces whitelist secondary replaces slave The word master is going to be replaced with more precise language, depending on the context: IdM server replaces IdM master CA renewal server replaces CA renewal master CRL publisher server replaces CRL master multi-supplier replaces multi-master (JIRA:RHELPLAN-73418) The dsidm utility supports renaming and moving entries With this enhancement, you can use the dsidm utility to rename and move users, groups, POSIX groups, roles, and organizational units (OU) in Directory Server. For further details and examples, see the Renaming Users, Groups, POSIX Groups, and OUs section in the Directory Server Administration Guide. ( BZ#1859218 ) Deleting Sub-CAs in IdM With this enhancement, if you run the ipa ca-del command and have not disabled the Sub-CA, an error indicates the Sub-CA cannot be deleted and it must be disabled. First run the ipa ca-disable command to disable the Sub-CA and then delete it using the ipa ca-del command. Note that you cannot disable or delete the IdM CA. (JIRA:RHELPLAN-63081) IdM now supports new Ansible management role and modules RHEL 8.4 provides Ansible modules for automated management of role-based access control (RBAC) in Identity Management (IdM), an Ansible role for backing up and restoring IdM servers, and an Ansible module for location management: You can use the ipapermission module to create, modify, and delete permissions and permission members in IdM RBAC. You can use the ipaprivilege module to create, modify, and delete privileges and privilege members in IdM RBAC. You can use the iparole module to create, modify, and delete roles and role members in IdM RBAC. You can use the ipadelegation module to delegate permissions over users in IdM RBAC. You can use the ipaselfservice module to create, modify, and delete self-service access rules in IdM. You can use the ipabackup role to create, copy, and remove IdM server backups and restore an IdM server either locally or from the control node. You can use the ipalocation module to ensure the presence or absence of the physical locations of hosts, such as their data center racks. (JIRA:RHELPLAN-72660) IdM in FIPS mode now supports a cross-forest trust with AD With this enhancement, administrators can establish a cross-forest trust between an IdM domain with FIPS mode enabled and an Active Directory (AD) domain. Note that you cannot establish a trust using a shared secret while FIPS mode is enabled in IdM, see FIPS compliance . (JIRA:RHELPLAN-58629) AD users can now log in to IdM with UPN suffixes subordinate to known UPN suffixes Previously, Active Directory (AD) users could not log into Identity Management (IdM) with a Universal Principal Name (UPN) (for example, sub1.ad-example.com ) that is a subdomain of a known UPN suffix (for example, ad-example.com ) because internal Samba processes filtered subdomains as duplicates of any Top Level Names (TLNs). This update validates UPNs by testing if they are subordinate to the known UPN suffixes. As a result, users can now log in using subordinate UPN suffixes in the described scenario. ( BZ#1891056 ) IdM now supports new password policy options With this update, Identity Management (IdM) supports additional libpwquality library options: --maxrepeat Specifies the maximum number of the same character in sequence. --maxsequence Specifies the maximum length of monotonic character sequences ( abcd ). 
--dictcheck Checks if the password is a dictionary word. --usercheck Checks if the password contains the username. If any of the new password policy options are set, then the minimum length of passwords is 6 characters regardless of the value of the --minlength option. The new password policy settings are applied only to new passwords. In a mixed environment with RHEL 7 and RHEL 8 servers, the new password policy settings are enforced only on servers running on RHEL 8.4 and later. If a user is logged in to an IdM client and the IdM client is communicating with an IdM server running on RHEL 8.3 or earlier, then the new password policy requirements set by the system administrator will not be applied. To ensure consistent behavior, upgrade or update all servers to RHEL 8.4 and later. ( BZ#1340463 ) Improved Active Directory site discovery process The SSSD service now discovers Active Directory sites in parallel over connection-less LDAP (CLDAP) to multiple domain controllers to speed up site discovery in situations where some domain controllers are unreachable. Previously, site discovery was performed sequentially and, in situations where domain controllers were unreachable, a timeout eventually occurred and SSSD went offline. ( BZ#1819012 ) The default value of nsslapd-nagle has been turned off to increase the throughput Previously, the nsslapd-nagle parameter in the cn=config entry was enabled by default. As a consequence, Directory Server performed a high number of setsocketopt system calls which slowed down the server. This update changes the default value of nsslapd-nagle to off . As a result, Directory Server performs a lower number of setsocketopt system calls and can handle a higher number of operations per second. (BZ#1996076) Enabling or disabling SSSD domains within the [domain] section of the sssd.conf file With this update, you can now enable or disable an SSSD domain by modifying its respective [domain] section in the sssd.conf file. Previously, if your SSSD configuration contained a standalone domain, you still had to modify the domains option in the [sssd] section of the sssd.conf file. This update allows you to set the enabled= option in the domain configuration to true or false. Setting the enabled option to true enables a domain, even if it is not listed under the domains option in the [sssd] section of the sssd.conf file. Setting the enabled option to false disables a domain, even if it is listed under the domains option in the [sssd] section of the sssd.conf file. If the enabled option is not set, the configuration in the domains option in the [sssd] section of the sssd.conf is used. ( BZ#1884196 ) Added an option to manually control the maximum offline timeout The offline_timeout period determines the time incrementation between attempts by SSSD to go back online. Previously, the maximum possible value for this interval was hardcoded to 3600 seconds, which was adequate for general usage but resulted in issues in fast or slow changing environments. This update adds the offline_timeout_max option to manually control the maximum length of each interval, allowing you more flexibility to track the server behavior in SSSD. Note that you should set this value in correlation to the offline_timeout parameter value. A value of 0 disables the incrementing behavior. 
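Two sketches for the Identity Management entries above. The Sub-CA name webservice-ca and the policy name managers are placeholders, and ipa pwpolicy-mod is the usual command for modifying password policies, although the entry above does not name the exact command.

Disable a Sub-CA and then delete it:

# ipa ca-disable webservice-ca
# ipa ca-del webservice-ca

Set the new password policy options on a group password policy:

# ipa pwpolicy-mod managers --maxrepeat=2 --maxsequence=3 --dictcheck=True --usercheck=True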
( BZ#1884213 ) Support for exclude_users and exclude_groups with scope=all in SSSD session recording configuration Red Hat Enterprise 8.4 now provides new SSSD options for defining session recording for large lists of groups or users: exclude_users A comma-separated list of users to be excluded from recording, only applicable with the scope=all configuration option. exclude_groups A comma-separated list of groups, members of which should be excluded from recording. Only applicable with the scope=all configuration option. For more information, refer to the sssd-session-recording man page. ( BZ#1784459 ) samba rebased to version 4.13.2 The samba packages have been upgraded to upstream version 4.13.2, which provides a number of bug fixes and enhancements over the version: To avoid a security issue that allows unauthenticated users to take over a domain using the netlogon protocol, ensure that your Samba servers use the default value ( yes ) of the server schannel parameter. To verify, use the testparm -v | grep 'server schannel' command. For further details, see CVE-2020-1472 . The Samba "wide links" feature has been converted to a VFS module . Running Samba as a PDC or BDC is deprecated . You can now use Samba on RHEL with FIPS mode enabled. Due to the restrictions of the FIPS mode: You cannot use NT LAN Manager (NTLM) authentication because the RC4 cipher is blocked. By default in FIPS mode, Samba client utilities use Kerberos authentication with AES ciphers. You can use Samba as a domain member only in Active Directory (AD) or Red Hat Identity Management (IdM) environments with Kerberos authentication that uses AES ciphers. Note that Red Hat continues supporting the primary domain controller (PDC) functionality IdM uses in the background. The following parameters for less-secure authentication methods, which are only usable over the server message block version 1 (SMB1) protocol, are now deprecated: client plaintext auth client NTLMv2 auth client lanman auth client use spnego An issue with the GlusterFS write-behind performance translator, when used with Samba, has been fixed to avoid data corruption. The minimum runtime support is now Python 3.6. The deprecated ldap ssl ads parameter has been removed. Samba automatically updates its tdb database files when the smbd , nmbd , or winbind service starts. Back up the database files before starting Samba. Note that Red Hat does not support downgrading tdb database files. For further information about notable changes, read the upstream release notes before updating. ( BZ#1878109 ) New GSSAPI PAM module for passwordless sudo authentication with SSSD With the new pam_sss_gss.so Pluggable Authentication Module (PAM), you can configure the System Security Services Daemon (SSSD) to authenticate users to PAM-aware services with the Generic Security Service Application Programming Interface (GSSAPI). For example, you can use this module for passwordless sudo authentication with a Kerberos ticket. For additional security in an IdM environment, you can configure SSSD to grant access only to users with specific authentication indicators in their tickets, such as users that have authenticated with a smart card or a one-time password. For additional information, see Granting sudo access to an IdM user on an IdM client . ( BZ#1893698 ) Directory Server rebased to version 1.4.3.16 The 389-ds-base packages have been upgraded to upstream version 1.4.3.16, which provides a number of bug fixes and enhancements over the version. 
For a complete list of notable changes, read the upstream release notes before updating: https://www.port389.org/docs/389ds/releases/release-1-4-3-16.html https://www.port389.org/docs/389ds/releases/release-1-4-3-15.html https://www.port389.org/docs/389ds/releases/release-1-4-3-14.html https://www.port389.org/docs/389ds/releases/release-1-4-3-13.html https://www.port389.org/docs/389ds/releases/release-1-4-3-12.html https://www.port389.org/docs/389ds/releases/release-1-4-3-11.html https://www.port389.org/docs/389ds/releases/release-1-4-3-10.html https://www.port389.org/docs/389ds/releases/release-1-4-3-9.html ( BZ#1862529 ) Directory Server now logs the work and operation time in RESULT entries With this update, Directory Server now logs two additional time values in RESULT entries in the /var/log/dirsrv/slapd-<instance_name>/access file: The wtime value indicates how long it took for an operation to move from the work queue to a worker thread. The optime value shows the time the actual operation took to be completed once a worker thread started the operation. The new values provide additional information about how the Directory Server handles load and processes operations. For further details, see the Access Log Reference section in the Red Hat Directory Server Configuration, Command, and File Reference. ( BZ#1850275 ) Directory Server can now reject internal unindexed searches This enhancement adds the nsslapd-require-internalop-index parameter to the cn= <database_name> ,cn=ldbm database,cn=plugins,cn=config entry to reject internal unindexed searches. When a plug-in modifies data, it has a write lock on the database. On large databases, if a plug-in then executes an unindexed search, the plug-in sometimes uses all database locks, which corrupts the database or causes the server to become unresponsive. To avoid this problem, you can now reject internal unindexed searches by enabling the nsslapd-require-internalop-index parameter. ( BZ#1851975 ) 4.14. Desktop You can configure the unresponsive application timeout in GNOME GNOME periodically sends a signal to every application to detect if the application is unresponsive. When GNOME detects an unresponsive application, it displays a dialog over the application window that asks if you want to stop the application or wait. Certain applications cannot respond to the signal in time. As a consequence, GNOME displays the dialog even when the application is working properly. With this update, you can configure the time between the signals. The setting is stored in the org.gnome.mutter.check-alive-timeout GSettings key. To completely disable the unresponsive application detection, set the key to 0. For details on configuring a GSettings key, see Working with GSettings keys on command line . (BZ#1886034) 4.15. Graphics infrastructures Intel Tiger Lake GPUs are now supported This release adds support for the Intel Tiger Lake CPU microarchitecture with integrated graphics. 
This includes Intel UHD Graphics and Intel Xe integrated GPUs found with the following CPU models: Intel Core i7-1160G7 Intel Core i7-1185G7 Intel Core i7-1165G7 Intel Core i7-1165G7 Intel Core i7-1185G7E Intel Core i7-1185GRE Intel Core i7-11375H Intel Core i7-11370H Intel Core i7-1180G7 Intel Core i5-1130G7 Intel Core i5-1135G7 Intel Core i5-1135G7 Intel Core i5-1145G7E Intel Core i5-1145GRE Intel Core i5-11300H Intel Core i5-1145G7 Intel Core i5-1140G7 Intel Core i3-1115G4 Intel Core i3-1115G4 Intel Core i3-1110G4 Intel Core i3-1115GRE Intel Core i3-1115G4E Intel Core i3-1125G4 Intel Core i3-1125G4 Intel Core i3-1120G4 Intel Pentium Gold 7505 Intel Celeron 6305 Intel Celeron 6305E You no longer have to set the i915.alpha_support=1 or i915.force_probe=* kernel option to enable Tiger Lake GPU support. (BZ#1882620) Intel GPUs that use the 11th generation Core microprocessors are now supported This release adds support for the 11th generation Core CPU architecture (formerly known as Rocket Lake ) with Xe gen 12 integrated graphics, which is found in the following CPU models: Intel Core i9-11900KF Intel Core i9-11900K Intel Core i9-11900 Intel Core i9-11900F Intel Core i9-11900T Intel Core i7-11700K Intel Core i7-11700KF Intel Core i7-11700T Intel Core i7-11700 Intel Core i7-11700F Intel Core i5-11500T Intel Core i5-11600 Intel Core i5-11600K Intel Core i5-11600KF Intel Core i5-11500 Intel Core i5-11600T Intel Core i5-11400 Intel Core i5-11400F Intel Core i5-11400T (BZ#1784246, BZ#1784247, BZ#1937558) Nvidia Ampere is now supported This release adds support for the Nvidia Ampere GPUs that use the GA102 or GA104 chipset. That includes the following GPU models: GeForce RTX 3060 Ti GeForce RTX 3070 GeForce RTX 3080 GeForce RTX 3090 RTX A4000 RTX A5000 RTX A6000 Nvidia A40 Note that the nouveau graphics driver does not yet support 3D acceleration with the Nvidia Ampere family. (BZ#1916583) Various updated graphics drivers The following graphics drivers have been updated to the latest upstream version: The Matrox mgag200 driver The Aspeed ast driver (JIRA:RHELPLAN-72994, BZ#1854354, BZ#1854367) 4.16. The web console Software Updates page checks for required restarts With this update, the Software Updates page in the RHEL web console checks if it is sufficient to only restart some services or running processes for updates to become effective after installation. In these cases this avoids having to reboot the machine. (JIRA:RHELPLAN-59941) Graphical performance analysis in the web console With this update the system graphs page has been replaced with a new dedicated page for analyzing the performance of a machine. To view the performance metrics, click View details and history from the Overview page. It shows current metrics and historical events based on the Utilization Saturation, and Errors (USE) method. (JIRA:RHELPLAN-59938) Web console assists with SSH key setup Previously, the web console allowed logging into remote hosts with your initial login password when Reuse my password for remote connections was selected during login. This option has been removed, and instead of that the web console now helps with setting up SSH keys for users that want automatic and password-less login to remote hosts. Check Managing remote systems in the web console for more details. (JIRA:RHELPLAN-59950) 4.17. 
Red Hat Enterprise Linux system roles The RELP secure transport support added to the Logging role configuration Reliable Event Logging Protocol, RELP, is a secure, reliable protocol to forward and receive log messages among rsyslog servers. With this enhancement, administrators can now benefit from the RELP, which is a useful protocol with high demands from rsyslog users, as rsyslog servers are capable of forwarding and receiving log messages over the RELP protocol. ( BZ#1889484 ) SSH Client RHEL system role is now supported Previously, there was no vendor-supported automation tooling to configure RHEL SSH in a consistent and stable manner for servers and clients. With this enhancement, you can use the RHEL system roles to configure SSH clients in a systematic and unified way, independently of the operating system version. ( BZ#1893712 ) An alternative to the traditional RHEL system roles format: Ansible Collection RHEL 8.4 introduces RHEL system roles in the Collection format, available as an option to the traditional RHEL system roles format. This update introduces the concept of a fully qualified collection name (FQCN), that consists of a namespace and the collection name. For example, the Kernel role fully qualified name is: redhat.rhel_system_roles.kernel_settings The combination of a namespace and a collection name guarantees that the objects are unique. The combination of a namespace and a collection name ensures that the objects are shared across the Collections and namespaces without any conflicts. Install the Collection using an RPM package. Ensure that you have the python3-jmespath installed on the host on which you execute the playbook: The RPM package includes the roles in both the legacy Ansible Roles format as well as the new Ansible Collection format. For example, to use the network role, perform the following steps: Legacy format: Collection format: If you are using Automation Hub and want to install the system roles Collection hosted in Automation Hub, enter the following command: Then you can use the roles in the Collection format, as previously described. This requires configuring your system with the ansible-galaxy command to use Automation Hub instead of Ansible Galaxy. See How to configure the ansible-galaxy client to use Automation Hub instead of Ansible Galaxy for more details. ( BZ#1893906 ) Metrics role supports configuration and enablement of metrics collection for SQL server via PCP The metrics RHEL system role now provides the ability to connect SQL Server, mssql with Performance Co-Pilot, pcp . SQL Server is a general purpose relational database from Microsoft. As it runs, SQL Server updates internal statistics about the operations it is performing. These statistics can be accessed using SQL queries but it is important for system and database administrators undertaking performance analysis tasks to be able to record, report, visualize these metrics. With this enhancement, users can use the metrics RHEL system role to automate connecting SQL server, mssql , with Performance Co-Pilot, pcp , which provides recording, reporting, and visualization functionality for mssql metrics. ( BZ#1893908 ) exporting-metric-data-to-elasticsearch functionality available in the Metrics RHEL system role Elasticsearch is a popular, powerful and scalable search engine. 
With this enhancement, by exporting metric values from the Metrics RHEL system role to the Elasticsearch, users are able to access metrics via Elasticsearch interfaces, including via graphical interfaces, REST APIs, between others. As a result, users are able to use these Elasticsearch interfaces to help diagnose performance problems and assist in other performance related tasks like capacity planning, benchmarking and so on. ( BZ#1895188 ) Support for SSHD RHEL system role Previously, there was no vendor-supported automation tooling to configure SSH RHEL system roles in a consistent and stable manner for servers and clients. With this enhancement, you can use the RHEL system roles to configure sshd servers in a systematic and unified way regardless of operating system version. ( BZ#1893696 ) Crypto Policies RHEL system role is now supported With this enhancement, RHEL 8 introduces a new feature for system-wide cryptographic policy management. By using RHEL system roles, you now can consistently and easily configure cryptographic policies on any number of RHEL 8 systems. ( BZ#1893699 ) The Logging RHEL system role now supports rsyslog behavior With this enhancement, rsyslog receives the message from Red Hat Virtualization and forwards the message to the elasticsearch . ( BZ#1889893 ) The networking RHEL system role now supports the ethtool settings With this enhancement, you can use the networking RHEL system role to configure ethtool coalesce settings of a NetworkManager connection. When using the interrupt coalescing procedure, the system collects network packets and generates a single interrupt for multiple packets. As a result, this increases the amount of data sent to the kernel with one hardware interrupt, which reduces the interrupt load, and maximizes the throughput. ( BZ#1893961 ) 4.18. Virtualization IBM Z virtual machines can now run up to 248 CPUs Previously, the number of CPUs that you could use in an IBM Z (s390x) virtual machine (VM), with DIAG318 enabled, was limited to 240. Now, using the Extended-Length SCCB, IBM Z VMs can run up to 248 CPUs. (JIRA:RHELPLAN-44450) HMAT is now supported on RHEL KVM With this update, ACPI Heterogeneous Memory Attribute Table (HMAT) is now supported on RHEL KVM. The ACPI HMAT optimizes memory by providing information about memory attributes, such as memory side cache attributes as well as bandwidth and latency details related to the System Physical Address (SPA) Memory Ranges. (JIRA:RHELPLAN-37817) Virtual machines can now use features of Intel Atom P5000 Processors The Snowridge CPU model name is now available for virtual machines (VMs). On hosts with Intel Atom P5000 processors, using Snowridge as the CPU type in the XML configuration of the VM exposes new features of these processors to the VM. (JIRA:RHELPLAN-37579) virtio-gpu devices now work better on virtual machines with Windows 10 and later This update extends the virtio-win drivers to also provide custom drivers for virtio-gpu devices on selected Windows platforms. As a result, the virtio-gpu devices now have improved performance on virtual machines that use Windows 10 or later as their guest systems. In addition, the devices will also benefit from future enhancements to virtio-win . ( BZ#1861229 ) Virtualization support for 3rd generation AMD EPYC processors With this update, virtualization on RHEL 8 adds support for the 3rd generation AMD EPYC processors, also known as EPYC Milan. 
As a result, virtual machines hosted on RHEL 8 can now use the EPYC-Milan CPU model and utilise new features that the processors provide. (BZ#1790620) 4.19. RHEL in cloud environments Automatic registration for gold images for AWS With this update, gold images of RHEL 8.4 and later for Amazon Web Services and Microsoft Azure can be configured by the user to automatically register to Red Hat Subscription Management (RHSM) and Red Hat Insights. This makes it faster and easier to configure a large number of virtual machines created from a gold image. However, if you require consuming repositories provided by RHSM, ensure that the manage_repos option in /etc/rhsm/rhsm.conf is set to 1 . For more information, please refer to Red Hat KnowledgeBase . ( BZ#1905398 , BZ#1932804 ) cloud-init is now supported on Power Systems Virtual Server in IBM Cloud With this update, the cloud-init utility can be used to configure RHEL 8 virtual machines hosted on IBM Power Systems hosts and running in the IBM Cloud Virtual Server service. ( BZ#1886430 ) 4.20. Supportability sos rebased to version 4.0 The sos package has been upgraded to version 4.0. This major version release includes a number of new features and changes. Major changes include: A new sos binary has replaced the former sosreport binary as the main entry point for the utility. sos report is now used to generate sosreport tarballs. The sosreport binary is maintained as a redirection point and now invokes sos report . The /etc/sos.conf file has been moved to /etc/sos/sos.conf , and its layout has changed as follows: The [general] section has been renamed to [global] , and may be used to specify options that are available to all sos commands and sub-commands. The [tunables] section has been renamed to [plugin_options] . Each sos component, report , collect , and clean , has its own dedicated section. For example, sos report loads options from global and from report . sos is now a Python3-only utility. Python2 is no longer supported in any capacity. sos collect sos collect formally brings the sos-collector utility into the main sos project, and is used to collect sosreports from multiple nodes simultaneously. The sos-collector binary is maintained as a redirection point and invokes sos collect . The standalone sos-collector project will no longer be independently developed. Enhancements for sos collect include: sos collect is now supported on all distributions that sos report supports, that is any distribution with a Policy defined. The --insecure-sudo option has been renamed to --nopasswd-sudo . The --threads option, used to connect simultaneously to the number of nodes, has been renamed to --jobs sos clean sos clean formally brings the functionality of the soscleaner utility into the main sos project. This subcommand performs further data obfuscation on reports, such as cleaning IP addresses, domain names, and user-provided keywords. Note: When the --clean option is used with the sos report or sos collect command, sos clean is applied on a report being generated. Thus, it is not necessary to generate a report and only after then apply the cleaner function on it. Key enhancements for sos clean include: Support for IPv4 address obfuscation. Note that this will attempt to preserve topological relationships between discovered addresses. Support for host name and domain name obfuscation. Support for user-provided keyword obfuscations. The --clean or --mask flag used with the sos report command obfuscates a report being generated. 
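For example, generating and obfuscating a report in a single step might look like the following sketch (the host prompt is illustrative):
    [user@server1 ~]$ sudo sos report --clean
Running sos report --mask behaves the same way; in both cases the cleaner is applied to the archive as it is written.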
Alternatively, the following command obfuscates an already existing report: Using the former results in a single obfuscated report archive, while the latter results in two; an obfuscated archive and the un-obfuscated original. For full information on the changes contained in this release, see sos-4.0 . (BZ#1966838) 4.21. Containers Podman now supports volume plugins written for Docker Podman now has support for Docker volume plugins. These volume plugins or drivers, written by vendors and community members, can be used by Podman to create and manage container volumes. The podman volume create command now supports creation of the volume using a volume plugin with the given name. The volume plugins must be defined in the [engine.volume_plugins] section of the container.conf configuration file. Example: where testvol is the name of the plugin and /run/docker/plugins/testvol.sock is the path to the plugin socket. You can use the podman volume create --driver testvol to create a volume using a testvol plugin. (BZ#1734854) The ubi-micro container image is now available The registry.redhat.io/ubi8/ubi-micro container image is the smallest base image that uses the package manager on the underlying host to install packages, typically using Buildah or multi-stage builds with Podman. Excluding package managers and all of its dependencies increases the level of security of the image. (JIRA:RHELPLAN-56664) Support to auto-update container images is available With this enhancement, users can use the podman auto-update command to auto-update containers according to their auto-update policy. The containers have to be labeled with a specified "io.containers.autoupdate=image" label to check if the image has been updated. If it has, Podman pulls the new image and restarts the systemd unit executing the container. The podman auto-update command relies on systemd and requires a fully-specified image name to create a container. (JIRA:RHELPLAN-56661) Podman now supports secure short names Short-name aliases for images can now be configured in the registries.conf file in the [aliases] table. The short-names modes are: Enforcing: If no matching alias is found during the image pull, Podman prompts the user to choose one of the unqualified-search registries. If the selected image is pulled successfully, Podman automatically records a new short-name alias in the users USDHOME/.config/containers/short-name-aliases.conf file. If the user cannot be prompted (for example, stdin or stdout are not a TTY), Podman fails. Note that the short-name-aliases.conf file has precedence over registries.conf file if both specify the same alias. Permissive: Similar to enforcing mode but it does not fail if the user cannot be prompted. Instead, Podman searches in all unqualified-search registries in the given order. Note that no alias is recorded. Example: (JIRA:RHELPLAN-39843) container-tools:3.0 stable stream is now available The container-tools:3.0 stable module stream, which contains the Podman, Buildah, Skopeo, and runc tools is now available. This update provides bug fixes and enhancements over the version. For instructions how to upgrade from an earlier stream, see Switching to a later stream . (JIRA:RHELPLAN-56782) | [
"Time stamping parameters for <network_controller> : Capabilities: hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE) software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)",
"Time stamping parameters for <network_controller> : Capabilities: hardware-transmit software-transmit",
"perf record -k CLOCK_MONOTONIC sleep 1",
"perf script -F+tod",
"perf record --overwrite -e _events_to_be_collected_ --switch-output-event _snapshot_trigger_event_",
"xfs_io -c \"chattr +x\" filename",
"yum install python39 yum install python39-pip",
"python3.9 python3.9 -m pip --help",
"yum module install swig:4.0",
"yum module install subversion:1.14",
"yum module install redis:6",
"yum module install postgresql:13",
"yum module install mariadb:10.5",
"yum install gcc-toolset-10",
"scl enable gcc-toolset-10 tool",
"scl enable gcc-toolset-10 bash",
"podman pull registry.redhat.io/<image_name>",
"allow_loading_unsigned_plugins = pcp-redis-datasource",
"systemctl restart grafana-server",
"podman pull registry.redhat.io/rhel8/grafana",
"podman pull registry.redhat.io/rhel8/pcp",
"yum module install jmc:rhel8/common",
"yum install rhel-system-roles",
"--- - hosts: all roles: rhel-system-roles.network",
"--- - hosts: all roles: redhat.rhel_system_roles.network",
"ansible-galaxy collection install redhat.rhel_system_roles",
"[user@server1 ~]USD sudo sos (clean|mask) USDarchive",
"[engine.volume_plugins] testvol = \"/run/docker/plugins/testvol.sock\"",
"unqualified-search-registries=[\"registry.fedoraproject.org\", \"quay.io\"] [aliases] \"fedora\"=\"registry.fedoraproject.org/fedora\""
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.4_release_notes/New-features |
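As a sketch of the SSSD session recording options described in the release notes above, the exclude lists belong in the [session_recording] section of /etc/sssd/sssd.conf; the user and group names here are illustrative assumptions:
    [session_recording]
    scope = all
    exclude_users = backupadmin, svc-monitor
    exclude_groups = service-accounts
After editing the file, restart SSSD (for example, systemctl restart sssd) and consult the sssd-session-recording man page for the authoritative option descriptions.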
Chapter 1. Using Ansible plug-ins for Red Hat Developer Hub | Chapter 1. Using Ansible plug-ins for Red Hat Developer Hub Ansible plug-ins for Red Hat Developer Hub deliver an Ansible-specific portal experience with curated learning paths, push-button content creation, integrated development tools, and other opinionated resources. Important The Ansible plug-ins are a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page. To use the Ansible plugins, see Using Ansible plug-ins for Red Hat Developer Hub . | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/using_dynamic_plugins/using-ansible-plug-ins-for-red-hat-developer-hub |
Chapter 22. console | Chapter 22. console This chapter describes the commands under the console command. 22.1. console log show Show server's console output Usage: Table 22.1. Positional Arguments Value Summary <server> Server to show console log (name or id) Table 22.2. Optional Arguments Value Summary -h, --help Show this help message and exit --lines <num-lines> Number of lines to display from the end of the log (default=all) 22.2. console url show Show server's remote console URL Usage: Table 22.3. Positional Arguments Value Summary <server> Server to show url (name or id) Table 22.4. Optional Arguments Value Summary -h, --help Show this help message and exit --novnc Show novnc console url (default) --xvpvnc Show xvpvnc console url --spice Show spice console url --rdp Show rdp console url --serial Show serial console url --mks Show webmks console url Table 22.5. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 22.6. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 22.7. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 22.8. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack console log show [-h] [--lines <num-lines>] <server>",
"openstack console url show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--novnc | --xvpvnc | --spice | --rdp | --serial | --mks] <server>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/console |
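A usage sketch for the commands above, with an assumed server name of my-server:
    $ openstack console log show --lines 20 my-server
    $ openstack console url show --novnc my-server
The first command prints the last 20 lines of the console log; the second returns a noVNC URL that can be opened in a browser.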
4.26. Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later) | 4.26. Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later) Table 4.27, "Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later)" lists the fence device parameters used by fence_mpath , the fence agent for multipath persistent reservation fencing. Table 4.27. Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later) luci Field cluster.conf Attribute Description Name name A name for the fence_mpath device. Devices (Comma delimited list) devices Comma-separated list of devices to use for the current operation. Each device must support SCSI-3 persistent reservations. Use sudo when calling third-party software sudo Use sudo (without password) when calling 3rd party software. Path to sudo binary (optional) sudo_path Path to sudo binary (default value is /usr/bin/sudo . Path to mpathpersist binary (optional) mpathpersist_path Path to mpathpersist binary (default value is /sbin/mpathpersist . Path to a directory where the fence agent can store information (optional) store_path Path to directory where fence agent can store information (default value is /var/run/cluster . Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Unfencing unfence section of the cluster configuration file When enabled, this ensures that a fenced node is not re-enabled until the node has been rebooted. This is necessary for non-power fence methods. When you configure a device that requires unfencing, the cluster must first be stopped and the full configuration including devices and unfencing must be added before the cluster is started. For more information about unfencing a node, see the fence_node (8) man page. Key for current action key Key to use for the current operation. This key should be unique to a node and written in /etc/multipath.conf . For the "on" action, the key specifies the key use to register the local node. For the "off" action, this key specifies the key to be removed from the device(s). This parameter is always required. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-multipath-ca |
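A minimal cluster.conf fragment that uses these attributes might look like the following sketch; the device path, key value, and node name are illustrative assumptions, and the exact element layout should be verified against the fence_mpath(8) man page:
    <fencedevice agent="fence_mpath" name="mpathfence" devices="/dev/mapper/mpatha"/>
    <clusternode name="node1.example.com" nodeid="1">
      <fence>
        <method name="mpath">
          <device name="mpathfence" key="1"/>
        </method>
      </fence>
      <unfence>
        <device name="mpathfence" key="1" action="on"/>
      </unfence>
    </clusternode>
The key value must match a reservation key configured for that node in /etc/multipath.conf, as noted in the key parameter description above.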
Chapter 1. Introduction to the Identity Service (keystone) | Chapter 1. Introduction to the Identity Service (keystone) As a cloud administrator, you can manage projects, users, and roles. Projects are organizational units containing a collection of resources. You can assign users to roles within projects. Roles define the actions that those users can perform on the resources within a given project. Users can be assigned roles in multiple projects. Each Red Hat OpenStack (RHOSP) deployment must include at least one user assigned to a role within a project. As a cloud administrator, you can: Add, update, and delete projects and users. Assign users to one or more roles, and change or remove these assignments. Manage projects and users independently from each other. You can also configure user authentication with the Identity service (keystone)to control access to services and endpoints. The Identity service provides token-based authentication and can integrate with LDAP and Active Directory, so you can manage users and identities externally and synchronize the user data with the Identity service. 1.1. Resource credential files When you install Red Hat OpenStack Platform director, a resource credentials (RC) file is automatically generated: Source the stackrc file to export authentication details into your shell environment. This allows you to run commands against the local Red Hat OpenStack Platform director API. The name of the RC file generated during the installation of the overcloud is the name of the deployed stack suffixed with 'rc'. If you do not provide a custom name for your stack, then the stack is labeled overcloud . An RC file is created called overcloudrc : The overcloud RC file is referred to as overcloudrc in the documentation, regardless of the actual name of your stack. Source the overcloudrc file to export authentication details into your shell environment. This allows you to run commands against the control plane API of your overcloud cluster. The automatically generated overcloudrc file will authenticate you as the admin user to the admin project. This authentication is valuable for domain administrative tasks, such as creating provider networks or projects. 1.2. OpenStack regions A region is a division of an OpenStack deployment. Each region has its own full OpenStack deployment, including its own API endpoints, networks and compute resources. Different regions share one set of Identity service (keystone) and Dashboard service (horizon) services to provide access control and a web interface. Red Hat OpenStack Platform is deployed with a single region. By default, your overcloud region is named regionOne . You can change the default region name in Red Hat OpenStack Platform. Procedure Under parameter_defaults , define the KeystoneRegion parameter: Replace <sample_region> with a region name of your choice. Note You cannot modify the region name after you deploy the overcloud. | [
"Clear any old environment that may conflict. for key in USD( set | awk -F= '/^OS_/ {print USD1}' ); do unset \"USD{key}\" ; done export OS_CLOUD=undercloud Add OS_CLOUDNAME to PS1 if [ -z \"USD{CLOUDPROMPT_ENABLED:-}\" ]; then export PS1=USD{PS1:-\"\"} export PS1=\\USD{OS_CLOUD:+\"(\\USDOS_CLOUD)\"}\\ USDPS1 export CLOUDPROMPT_ENABLED=1 fi export PYTHONWARNINGS=\"ignore:Certificate has no, ignore:A true SSLContext object is not available\"",
"Clear any old environment that may conflict. for key in USD( set | awk '{FS=\"=\"} /^OS_/ {print USD1}' ); do unset USDkey ; done export OS_USERNAME=admin export OS_PROJECT_NAME=admin export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_DOMAIN_NAME=Default export OS_NO_CACHE=True export OS_CLOUDNAME=overcloud export no_proxy=10.0.0.145,192.168.24.27 export PYTHONWARNINGS='ignore:Certificate has no, ignore:A true SSLContext object is not available' export OS_AUTH_TYPE=password export OS_PASSWORD=mpWt4y0Qhc9oTdACisp4wgo7F export OS_AUTH_URL=http://10.0.0.145:5000 export OS_IDENTITY_API_VERSION=3 export OS_COMPUTE_API_VERSION=2.latest export OS_IMAGE_API_VERSION=2 export OS_VOLUME_API_VERSION=3 export OS_REGION_NAME=regionOne Add OS_CLOUDNAME to PS1 if [ -z \"USD{CLOUDPROMPT_ENABLED:-}\" ]; then export PS1=USD{PS1:-\"\"} export PS1=\\USD{OS_CLOUDNAME:+\"(\\USDOS_CLOUDNAME)\"}\\ USDPS1 export CLOUDPROMPT_ENABLED=1 fi",
"parameter_defaults: KeystoneRegion: '<sample_region>'"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/users_and_identity_management_guide/assembly_introduction-to-the-identity-service |
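As a sketch of how the KeystoneRegion parameter is applied and verified, assuming the setting is saved in a file named keystone_region.yaml:
    (undercloud)$ openstack overcloud deploy --templates -e /home/stack/templates/keystone_region.yaml
    (undercloud)$ source ~/overcloudrc
    (overcloud)$ openstack region list
Remember that the region name cannot be changed after the overcloud has been deployed.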
8.68. ghostscript-fonts | 8.68. ghostscript-fonts 8.68.1. RHBA-2014:0260 - ghostscript-fonts bug fix update An updated ghostscript-fonts package that fixes one bug is now available for Red Hat Enterprise Linux 6. The ghostscript-fonts package contains a set of fonts that Ghostscript, a PostScript interpreter, uses to render text. These fonts are in addition to the fonts shared by Ghostscript and the X Window System. Bug Fix BZ# 1067294 Previously, the ghostscript-fonts package contained fonts with a restrictive license. With this update, the fonts with restricted rights causing a licensing problem are removed from the package. Users of ghostscript-fonts are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/ghostscript-fonts |
Project APIs | Project APIs OpenShift Container Platform 4.16 Reference guide for project APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/project_apis/index |
3.8. Aggressive Link Power Management | 3.8. Aggressive Link Power Management Aggressive Link Power Management (ALPM) is a power-saving technique that helps the disk save power by setting a SATA link to the disk to a low-power setting during idle time (that is when there is no I/O). ALPM automatically sets the SATA link back to an active power state once I/O requests are queued to that link. Power savings introduced by ALPM come at the expense of disk latency. As such, you should only use ALPM if you expect the system to experience long periods of idle I/O time. ALPM is only available on SATA controllers that use the Advanced Host Controller Interface (AHCI). For more information about AHCI, refer to http://www.intel.com/technology/serialata/ahci.htm . When available, ALPM is enabled by default. ALPM has three modes: min_power This mode sets the link to its lowest power state (SLUMBER) when there is no I/O on the disk. This mode is useful for times when an extended period of idle time is expected. medium_power This mode sets the link to the second lowest power state (PARTIAL) when there is no I/O on the disk. This mode is designed to allow transitions in link power states (for example during times of intermittent heavy I/O and idle I/O) with as small impact on performance as possible. medium_power mode allows the link to transition between PARTIAL and fully-powered (that is "ACTIVE") states, depending on the load. Note that it is not possible to transition a link directly from PARTIAL to SLUMBER and back; in this case, either power state cannot transition to the other without transitioning through the ACTIVE state first. max_performance ALPM is disabled; the link does not enter any low-power state when there is no I/O on the disk. To check whether your SATA host adapters actually support ALPM you can check if the file /sys/class/scsi_host/host*/link_power_management_policy exists. To change the settings simply write the values described in this section to these files or display the files to check for the current setting. Important Setting ALPM to min_power or medium_power will automatically disable the "Hot Plug" feature. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/power_management_guide/alpm |
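For example, the current policy can be read and changed through the sysfs file mentioned above; host0 is an illustrative host adapter number:
    # cat /sys/class/scsi_host/host0/link_power_management_policy
    # echo min_power > /sys/class/scsi_host/host0/link_power_management_policy
Writing medium_power or max_performance to the same file selects the other modes. The setting is not persistent across reboots unless it is reapplied at boot time.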
5.7. Viewing Virtual Machines Pinned to a Host | 5.7. Viewing Virtual Machines Pinned to a Host You can view virtual machines pinned to a host even while the virtual machines are offline. Use the Pinned to Host list to see which virtual machines will be affected and which virtual machines will require a manual restart after the host becomes active again. Viewing Virtual Machines Pinned to a Host Click Compute Hosts . Click a host name to go to the details view. Click the Virtual Machines tab. Click Pinned to Host . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/viewing_virtual_machines_pinned_to_a_host |
19.4. Display Options | 19.4. Display Options This section provides information about display options. Disable Graphics -nographic VGA Card Emulation -vga <type> Supported types: cirrus - Cirrus Logic GD5446 Video card. std - Standard VGA card with Bochs VBE extensions. qxl - Spice paravirtual card. none - Disable VGA card. VNC Display -vnc <display>[,<option>[,<option>[,...]]] Supported display value: [<host>]:<port> unix:<path> share [allow-exclusive|force-shared|ignore] none - Supported with no other options specified. Supported options are: to =<port> reverse password tls x509 =</path/to/certificate/dir> - Supported when tls specified. x509verify =</path/to/certificate/dir> - Supported when tls specified. sasl acl Spice Desktop -spice option[,option[,...]] Supported options are: port =<number> addr =<addr> ipv4 ipv6 password =<secret> disable-ticketing disable-copy-paste tls-port =<number> x509-dir =</path/to/certificate/dir> x509-key-file =<file> x509-key-password =<file> x509-cert-file =<file> x509-cacert-file =<file> x509-dh-key-file =<file> tls-cipher =<list> tls-channel [main|display|cursor|inputs|record|playback] plaintext-channel [main|display|cursor|inputs|record|playback] image-compression =<compress> jpeg-wan-compression =<value> zlib-glz-wan-compression =<value> streaming-video =[off|all|filter] agent-mouse =[on|off] playback-compression =[on|off] seamless-migratio =[on|off] | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sec-qemu_kvm_whitelist_display_options |
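An illustrative sketch that combines some of these options; the binary path, ports, and disk image are assumptions rather than a recommended configuration:
    /usr/libexec/qemu-kvm -vga qxl -spice port=5930,addr=127.0.0.1,disable-ticketing -m 1024 guest.img
    /usr/libexec/qemu-kvm -vga std -vnc 127.0.0.1:1,password -m 1024 guest.img
The first line exposes a SPICE display on port 5930 using the paravirtual qxl card; the second exposes VNC display :1 (TCP port 5901) with password authentication, where the password itself is set through the QEMU monitor.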
GitOps | GitOps OpenShift Container Platform 4.12 A declarative way to implement continuous deployment for cloud native applications. Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/gitops/index |
Chapter 3. Configuring Compute nodes for performance | Chapter 3. Configuring Compute nodes for performance As a cloud administrator, you can configure the scheduling and placement of instances for optimal performance by creating customized flavors to target specialized workloads, including NFV and High Performance Computing (HPC). Use the following features to tune your instances for optimal performance: CPU pinning : Pin virtual CPUs to physical CPUs. Emulator threads : Pin emulator threads associated with the instance to physical CPUs. Huge pages : Tune instance memory allocation policies both for normal memory (4k pages) and huge pages (2 MB or 1 GB pages). Note Configuring any of these features creates an implicit NUMA topology on the instance if there is no NUMA topology already present. 3.1. Configuring CPU pinning on Compute nodes You can configure each instance CPU process to run on a dedicated host CPU by enabling CPU pinning on the Compute nodes. When an instance uses CPU pinning, each instance vCPU process is allocated its own host pCPU that no other instance vCPU process can use. Instances that run on Compute nodes with CPU pinning enabled have a NUMA topology. Each NUMA node of the instance NUMA topology maps to a NUMA node on the host Compute node. You can configure the Compute scheduler to schedule instances with dedicated (pinned) CPUs and instances with shared (floating) CPUs on the same Compute node. To configure CPU pinning on Compute nodes that have a NUMA topology, you must complete the following: Designate Compute nodes for CPU pinning. Configure the Compute nodes to reserve host cores for pinned instance vCPU processes, floating instance vCPU processes, and host processes. Deploy the overcloud. Create a flavor for launching instances that require CPU pinning. Create a flavor for launching instances that use shared, or floating, CPUs. 3.1.1. Prerequisites You know the NUMA topology of your Compute node. 3.1.2. Designating Compute nodes for CPU pinning To designate Compute nodes for instances with pinned CPUs, you must create a new role file to configure the CPU pinning role, and configure a new overcloud flavor and CPU pinning resource class to use to tag the Compute nodes for CPU pinning. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate a new roles data file named roles_data_cpu_pinning.yaml that includes the Controller , Compute , and ComputeCPUPinning roles: Open roles_data_cpu_pinning.yaml and edit or add the following parameters and sections: Section/Parameter Current value New value Role comment Role: Compute Role: ComputeCPUPinning Role name name: Compute name: ComputeCPUPinning description Basic Compute Node role CPU Pinning Compute Node role HostnameFormatDefault %stackname%-novacompute-%index% %stackname%-novacomputepinning-%index% deprecated_nic_config_name compute.yaml compute-cpu-pinning.yaml Register the CPU pinning Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml . For more information, see Registering nodes for the overcloud in the Director Installation and Usage guide. Inspect the node hardware: For more information, see Creating an inventory of the bare-metal node hardware in the Director Installation and Usage guide. Create the compute-cpu-pinning overcloud flavor for CPU pinning Compute nodes: Replace <ram_size_mb> with the RAM of the bare metal node, in MB. Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB. 
Replace <no_vcpus> with the number of CPUs on the bare metal node. Note These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size. Tag each bare metal node that you want to designate for CPU pinning with a custom CPU pinning resource class: Replace <node> with the ID of the bare metal node. Associate the compute-cpu-pinning flavor with the custom CPU pinning resource class: To determine the name of a custom resource class that corresponds to a resource class of a Bare Metal service node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix with CUSTOM_ . Note A flavor can request only one instance of a bare metal resource class. Set the following flavor properties to prevent the Compute scheduler from using the bare metal flavor properties to schedule instances: Optional: If the network topology of the ComputeCPUPinning role is different from the network topology of your Compute role, then create a custom network interface template. For more information, see Custom network interface templates in the Advanced Overcloud Customization guide. If the network topology of the ComputeCPUPinning role is the same as the Compute role, then you can use the default network topology defined in compute.yaml . Register the Net::SoftwareConfig of the ComputeCPUPinning role in your network-environment.yaml file: Replace <cpu_pinning_net_top> with the name of the file that contains the network topology of the ComputeCPUPinning role, for example, compute.yaml to use the default network topology. Add the following parameters to the node-info.yaml file to specify the number of CPU pinning Compute nodes, and the flavor to use for the CPU pinning designated Compute nodes: To verify that the role was created, enter the following command: Example output: 3.1.3. Configuring Compute nodes for CPU pinning Configure CPU pinning on your Compute nodes based on the NUMA topology of the nodes. Reserve some CPU cores across all the NUMA nodes for the host processes for efficiency. Assign the remaining CPU cores to managing your instances. This procedure uses the following NUMA topology, with eight CPU cores spread across two NUMA nodes, to illustrate how to configure CPU pinning: Table 3.1. Example of NUMA Topology NUMA Node 0 NUMA Node 1 Core 0 Core 1 Core 2 Core 3 Core 4 Core 5 Core 6 Core 7 The procedure reserves cores 0 and 4 for host processes, cores 1, 3, 5 and 7 for instances that require CPU pinning, and cores 2 and 6 for floating instances that do not require CPU pinning. Procedure Create an environment file to configure Compute nodes to reserve cores for pinned instances, floating instances, and host processes, for example, cpu_pinning.yaml . To schedule instances with a NUMA topology on NUMA-capable Compute nodes, add NUMATopologyFilter to the NovaSchedulerDefaultFilters parameter in your Compute environment file, if not already present: For more information on NUMATopologyFilter , see Compute scheduler filters . To reserve physical CPU cores for the dedicated instances, add the following configuration to cpu_pinning.yaml : To reserve physical CPU cores for the shared instances, add the following configuration to cpu_pinning.yaml : To specify the amount of RAM to reserve for host processes, add the following configuration to cpu_pinning.yaml : Replace <ram> with the amount of RAM to reserve in MB. 
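Taken together, the reservations described in the previous steps might look like the following consolidated sketch of cpu_pinning.yaml for the example topology; the 4096 MB reservation is an illustrative assumption:
    parameter_defaults:
      ComputeCPUPinningParameters:
        NovaComputeCpuDedicatedSet: 1,3,5,7
        NovaComputeCpuSharedSet: 2,6
        NovaReservedHostMemory: 4096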
To ensure that host processes do not run on the CPU cores reserved for instances, set the parameter IsolCpusList to the CPU cores you have reserved for instances: Specify the value of the IsolCpusList parameter using a list, or ranges, of CPU indices separated by a comma. Add your new role and environment files to the stack with your other environment files and deploy the overcloud: 3.1.4. Creating a dedicated CPU flavor for instances To enable your cloud users to create instances that have dedicated CPUs, you can create a flavor with a dedicated CPU policy for launching instances. Prerequisites Simultaneous multithreading (SMT) is enabled on the host. The Compute node is configured to allow CPU pinning. For more information, see Configuring CPU pinning on the Compute nodes . Procedure Source the overcloudrc file: Create a flavor for instances that require CPU pinning: To request pinned CPUs, set the hw:cpu_policy property of the flavor to dedicated : To place each vCPU on thread siblings, set the hw:cpu_thread_policy property of the flavor to require : Note If the host does not have an SMT architecture or enough CPU cores with available thread siblings, scheduling fails. To prevent this, set hw:cpu_thread_policy to prefer instead of require . The prefer policy is the default policy that ensures that thread siblings are used when available. If you use hw:cpu_thread_policy=isolate , you must have SMT disabled or use a platform that does not support SMT. Verification To verify the flavor creates an instance with dedicated CPUs, use your new flavor to launch an instance: To verify correct placement of the new instance, enter the following command and check for OS-EXT-SRV-ATTR:hypervisor_hostname in the output: 3.1.5. Creating a shared CPU flavor for instances To enable your cloud users to create instances that use shared, or floating, CPUs, you can create a flavor with a shared CPU policy for launching instances. Prerequisites The Compute node is configured to reserve physical CPU cores for the shared CPUs. For more information, see Configuring CPU pinning on the Compute nodes . Procedure Source the overcloudrc file: Create a flavor for instances that do not require CPU pinning: To request floating CPUs, set the hw:cpu_policy property of the flavor to shared : Verification To verify the flavor creates an instance that uses the shared CPUs, use your new flavor to launch an instance: To verify correct placement of the new instance, enter the following command and check for OS-EXT-SRV-ATTR:hypervisor_hostname in the output: 3.1.6. Configuring CPU pinning on Compute nodes with simultaneous multithreading (SMT) If a Compute node supports simultaneous multithreading (SMT), group thread siblings together in either the dedicated or the shared set. Thread siblings share some common hardware which means it is possible for a process running on one thread sibling to impact the performance of the other thread sibling. For example, the host identifies four logical CPU cores in a dual core CPU with SMT: 0, 1, 2, and 3. Of these four, there are two pairs of thread siblings: Thread sibling 1: logical CPU cores 0 and 2 Thread sibling 2: logical CPU cores 1 and 3 In this scenario, do not assign logical CPU cores 0 and 1 as dedicated and 2 and 3 as shared. Instead, assign 0 and 2 as dedicated and 1 and 3 as shared. The files /sys/devices/system/cpu/cpuN/topology/thread_siblings_list , where N is the logical CPU number, contain the thread pairs. 
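For example, reading one of these files directly shows the siblings of a single core; with the hypothetical layout above, core 0 reports core 2 as its sibling:
    # cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
    0,2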
You can use the following command to identify which logical CPU cores are thread siblings: The following output indicates that logical CPU core 0 and logical CPU core 2 are threads on the same core: 3.1.7. Additional resources Discovering your NUMA node topology in the Network Functions Virtualization Planning and Configuration Guide . CPUs and NUMA nodes in the Network Functions Virtualization Product Guide . 3.2. Configuring emulator threads Compute nodes have overhead tasks associated with the hypervisor for each instance, known as emulator threads. By default, emulator threads run on the same CPUs as the instance, which impacts the performance of the instance. You can configure the emulator thread policy to run emulator threads on separate CPUs to those the instance uses. Note To avoid packet loss, you must never preempt the vCPUs in an NFV deployment. Procedure Log in to the undercloud as the stack user. Open your Compute environment file. To reserve physical CPU cores for instances that require CPU pinning, configure the NovaComputeCpuDedicatedSet parameter in the Compute environment file. For example, the following configuration sets the dedicated CPUs on a Compute node with a 32-core CPU: For more information, see Configuring CPU pinning on the Compute nodes . To reserve physical CPU cores for the emulator threads, configure the NovaComputeCpuSharedSet parameter in the Compute environment file. For example, the following configuration sets the shared CPUs on a Compute node with a 32-core CPU: Note The Compute scheduler also uses the CPUs in the shared set for instances that run on shared, or floating, CPUs. For more information, see Configuring CPU pinning on Compute nodes Add the Compute scheduler filter NUMATopologyFilter to the NovaSchedulerDefaultFilters parameter, if not already present. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: Configure a flavor that runs emulator threads for the instance on a dedicated CPU, which is selected from the shared CPUs configured using NovaComputeCpuSharedSet : For more information about configuration options for hw:emulator_threads_policy , see Emulator threads policy in Flavor metadata . 3.3. Configuring huge pages on Compute nodes As a cloud administrator, you can configure Compute nodes to enable instances to request huge pages. Procedure Open your Compute environment file. Configure the amount of huge page memory to reserve on each NUMA node for processes that are not instances: Replace the size value for each node with the size of the allocated huge page. Set to one of the following valid values: 2048 (for 2MB) 1GB Replace the count value for each node with the number of huge pages used by OVS per NUMA node. For example, for 4096 of socket memory used by Open vSwitch, set this to 2. Configure huge pages on the Compute nodes: Note If you configure multiple huge page sizes, you must also mount the huge page folders during first boot. For more information, see Mounting multiple huge page folders during first boot . Optional: To allow instances to allocate 1GB huge pages, configure the CPU feature flags, NovaLibvirtCPUModelExtraFlags , to include pdpe1gb : Note CPU feature flags do not need to be configured to allow instances to only request 2 MB huge pages. You can only allocate 1G huge pages to an instance if the host supports 1G huge page allocation. You only need to set NovaLibvirtCPUModelExtraFlags to pdpe1gb when NovaLibvirtCPUMode is set to host-model or custom . 
If the host supports pdpe1gb , and host-passthrough is used as the NovaLibvirtCPUMode , then you do not need to set pdpe1gb as a NovaLibvirtCPUModelExtraFlags . The pdpe1gb flag is only included in Opteron_G4 and Opteron_G5 CPU models, it is not included in any of the Intel CPU models supported by QEMU. To mitigate for CPU hardware issues, such as Microarchitectural Data Sampling (MDS), you might need to configure other CPU flags. For more information, see RHOS Mitigation for MDS ("Microarchitectural Data Sampling") Security Flaws . To avoid loss of performance after applying Meltdown protection, configure the CPU feature flags, NovaLibvirtCPUModelExtraFlags , to include +pcid : Tip For more information, see Reducing the performance impact of Meltdown CVE fixes for OpenStack guests with "PCID" CPU feature flag . Add NUMATopologyFilter to the NovaSchedulerDefaultFilters parameter, if not already present. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 3.3.1. Creating a huge pages flavor for instances To enable your cloud users to create instances that use huge pages, you can create a flavor with the hw:mem_page_size extra spec key for launching instances. Prerequisites The Compute node is configured for huge pages. For more information, see Configuring huge pages on Compute nodes . Procedure Create a flavor for instances that require huge pages: To request huge pages, set the hw:mem_page_size property of the flavor to the required size: Set hw:mem_page_size to one of the following valid values: large - Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems. small - (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages). any - Selects the largest available huge page size, as determined by the libvirt driver. <pagesize>: (String) Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB. To verify the flavor creates an instance with huge pages, use your new flavor to launch an instance: The Compute scheduler identifies a host with enough free huge pages of the required size to back the memory of the instance. If the scheduler is unable to find a host and NUMA node with enough pages, then the request will fail with a NoValidHost error. 3.3.2. Mounting multiple huge page folders during first boot You can configure the Compute service (nova) to handle multiple page sizes as part of the first boot process. The first boot process adds the heat template configuration to all nodes the first time you boot the nodes. Subsequent inclusion of these templates, such as updating the overcloud stack, does not run these scripts. Procedure Create a first boot template file, hugepages.yaml , that runs a script to create the mounts for the huge page folders. You can use the OS::TripleO::MultipartMime resource type to send the configuration script: The config script in this template performs the following tasks: Filters the hosts to create the mounts for the huge page folders on, by specifying hostnames that match 'co?mp' . You can update the filter grep pattern for specific computes as required. Masks the default dev-hugepages.mount systemd unit file to enable new mounts to be created using the page size. Ensures that the folders are created first. Creates systemd mount units for each pagesize . 
Runs systemd daemon-reload after the first loop, to include the newly created unit files. Enables each mount for 2M and 1G pagesizes. You can update this loop to include additional pagesizes, as required. Optional: The /dev folder is automatically bind mounted to the nova_compute and nova_libvirt containers. If you have used a different destination for the huge page mounts, then you need to pass the mounts to the the nova_compute and nova_libvirt containers: Register your heat template as the OS::TripleO::NodeUserData resource type in your ~/templates/firstboot.yaml environment file: Important You can only register the NodeUserData resources to one heat template for each resource. Subsequent usage overrides the heat template to use. Add your first boot environment file to the stack with your other environment files and deploy the overcloud: 3.4. Configuring Compute nodes to use file-backed memory for instances You can use file-backed memory to expand your Compute node memory capacity, by allocating files within the libvirt memory backing directory as instance memory. You can configure the amount of host disk that is available for instance memory, and the location on the disk of the instance memory files. The Compute service reports the capacity configured for file-backed memory to the Placement service as the total system memory capacity. This allows the Compute node to host more instances than would normally fit within the system memory. To use file-backed memory for instances, you must enable file-backed memory on the Compute node. Limitations You cannot live migrate instances between Compute nodes that have file-backed memory enabled and Compute nodes that do not have file-backed memory enabled. File-backed memory is not compatible with huge pages. Instances that use huge pages cannot start on a Compute node with file-backed memory enabled. Use host aggregates to ensure that instances that use huge pages are not placed on Compute nodes with file-backed memory enabled. File-backed memory is not compatible with memory overcommit. You cannot reserve memory for host processes using NovaReservedHostMemory . When file-backed memory is in use, reserved memory corresponds to disk space not set aside for file-backed memory. File-backed memory is reported to the Placement service as the total system memory, with RAM used as cache memory. Prerequisites NovaRAMAllocationRatio must be set to "1.0" on the node and any host aggregate the node is added to. NovaReservedHostMemory must be set to "0". Procedure Open your Compute environment file. Configure the amount of host disk space, in MiB, to make available for instance RAM, by adding the following parameter to your Compute environment file: Optional: To configure the directory to store the memory backing files, set the QemuMemoryBackingDir parameter in your Compute environment file. If not set, the memory backing directory defaults to /var/lib/libvirt/qemu/ram/ . Note You must locate your backing store in a directory at or above the default directory location, /var/lib/libvirt/qemu/ram/ . You can also change the host disk for the backing store. For more information, see Changing the memory backing directory host disk . Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 3.4.1. Changing the memory backing directory host disk You can move the memory backing directory from the default primary disk location to an alternative disk. 
Procedure Create a file system on the alternative backing device. For example, enter the following command to create an ext4 filesystem on /dev/sdb : Mount the backing device. For example, enter the following command to mount /dev/sdb on the default libvirt memory backing directory: Note The mount point must match the value of the QemuMemoryBackingDir parameter. | [
"[stack@director ~]USD source ~/stackrc",
"(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_cpu_pinning.yaml Compute:ComputeCPUPinning Compute Controller",
"(undercloud)USD openstack overcloud node introspect --all-manageable --provide",
"(undercloud)USD openstack flavor create --id auto --ram <ram_size_mb> --disk <disk_size_gb> --vcpus <no_vcpus> compute-cpu-pinning",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.CPU-PINNING <node>",
"(undercloud)USD openstack flavor set --property resources:CUSTOM_BAREMETAL_CPU_PINNING=1 compute-cpu-pinning",
"(undercloud)USD openstack flavor set --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 compute-cpu-pinning",
"resource_registry: OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml OS::TripleO::ComputeCPUPinning::Net::SoftwareConfig: /home/stack/templates/nic-configs/<cpu_pinning_net_top>.yaml OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml",
"parameter_defaults: OvercloudComputeCPUPinningFlavor: compute-cpu-pinning ComputeCPUPinningCount: 3",
"(undercloud)USD openstack baremetal node list --long -c \"UUID\" -c \"Instance UUID\" -c \"Resource Class\" -c \"Provisioning State\" -c \"Power State\" -c \"Last Error\" -c \"Fault\" -c \"Name\" -f json",
"[ { \"Fault\": null, \"Instance UUID\": \"e8e60d37-d7c7-4210-acf7-f04b245582ea\", \"Last Error\": null, \"Name\": \"compute-0\", \"Power State\": \"power on\", \"Provisioning State\": \"active\", \"Resource Class\": \"baremetal.CPU-PINNING\", \"UUID\": \"b5a9ac58-63a7-49ba-b4ad-33d84000ccb4\" }, { \"Fault\": null, \"Instance UUID\": \"3ec34c0b-c4f5-4535-9bd3-8a1649d2e1bd\", \"Last Error\": null, \"Name\": \"compute-1\", \"Power State\": \"power on\", \"Provisioning State\": \"active\", \"Resource Class\": \"compute\", \"UUID\": \"432e7f86-8da2-44a6-9b14-dfacdf611366\" }, { \"Fault\": null, \"Instance UUID\": \"4992c2da-adde-41b3-bef1-3a5b8e356fc0\", \"Last Error\": null, \"Name\": \"controller-0\", \"Power State\": \"power on\", \"Provisioning State\": \"active\", \"Resource Class\": \"controller\", \"UUID\": \"474c2fc8-b884-4377-b6d7-781082a3a9c0\" } ]",
"parameter_defaults: NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter']",
"parameter_defaults: ComputeCPUPinningParameters: NovaComputeCpuDedicatedSet: 1,3,5,7",
"parameter_defaults: ComputeCPUPinningParameters: NovaComputeCpuSharedSet: 2,6",
"parameter_defaults: ComputeCPUPinningParameters: NovaReservedHostMemory: <ram>",
"parameter_defaults: ComputeCPUPinningParameters: IsolCpusList: 1-3,5-7",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -r /home/stack/templates/roles_data_cpu_pinning.yaml -e /home/stack/templates/network-environment.yaml -e /home/stack/templates/cpu_pinning.yaml -e /home/stack/templates/node-info.yaml",
"(undercloud)USD source ~/overcloudrc",
"(overcloud)USD openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> pinned_cpus",
"(overcloud)USD openstack flavor set --property hw:cpu_policy=dedicated pinned_cpus",
"(overcloud)USD openstack flavor set --property hw:cpu_thread_policy=require pinned_cpus",
"(overcloud)USD openstack server create --flavor pinned_cpus --image <image> pinned_cpu_instance",
"(overcloud)USD openstack server show pinned_cpu_instance",
"(undercloud)USD source ~/overcloudrc",
"(overcloud)USD openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> floating_cpus",
"(overcloud)USD openstack flavor set --property hw:cpu_policy=shared floating_cpus",
"(overcloud)USD openstack server create --flavor floating_cpus --image <image> floating_cpu_instance",
"(overcloud)USD openstack server show floating_cpu_instance",
"grep -H . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -n -t ':' -k 2 -u",
"/sys/devices/system/cpu/cpu0/topology/thread_siblings_list:0,2 /sys/devices/system/cpu/cpu2/topology/thread_siblings_list:1,3",
"parameter_defaults: NovaComputeCpuDedicatedSet: 2-15,18-31",
"parameter_defaults: NovaComputeCpuSharedSet: 0,1,16,17",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"(overcloud)USD openstack flavor set --property hw:cpu_policy=dedicated --property hw:emulator_threads_policy=share dedicated_emulator_threads",
"parameter_defaults: ComputeParameters: NovaReservedHugePages: [\"node:0,size:1GB,count:1\",\"node:1,size:1GB,count:1\"]",
"parameter_defaults: ComputeParameters: KernelArgs: \"default_hugepagesz=1GB hugepagesz=1G hugepages=32\"",
"parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: 'custom' NovaLibvirtCPUModels: 'Haswell-noTSX' NovaLibvirtCPUModelExtraFlags: 'vmx, pdpe1gb'",
"parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: 'custom' NovaLibvirtCPUModels: 'Haswell-noTSX' NovaLibvirtCPUModelExtraFlags: 'vmx, pdpe1gb, +pcid'",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> huge_pages",
"openstack flavor set huge_pages --property hw:mem_page_size=1GB",
"openstack server create --flavor huge_pages --image <image> huge_pages_instance",
"heat_template_version: <version> description: > Huge pages configuration resources: userdata: type: OS::Heat::MultipartMime properties: parts: - config: {get_resource: hugepages_config} hugepages_config: type: OS::Heat::SoftwareConfig properties: config: | #!/bin/bash hostname | grep -qiE 'co?mp' || exit 0 systemctl mask dev-hugepages.mount || true for pagesize in 2M 1G;do if ! [ -d \"/dev/hugepagesUSD{pagesize}\" ]; then mkdir -p \"/dev/hugepagesUSD{pagesize}\" cat << EOF > /etc/systemd/system/dev-hugepagesUSD{pagesize}.mount [Unit] Description=USD{pagesize} Huge Pages File System Documentation=https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems DefaultDependencies=no Before=sysinit.target ConditionPathExists=/sys/kernel/mm/hugepages ConditionCapability=CAP_SYS_ADMIN ConditionVirtualization=!private-users [Mount] What=hugetlbfs Where=/dev/hugepagesUSD{pagesize} Type=hugetlbfs Options=pagesize=USD{pagesize} [Install] WantedBy = sysinit.target EOF fi done systemctl daemon-reload for pagesize in 2M 1G;do systemctl enable --now dev-hugepagesUSD{pagesize}.mount done outputs: OS::stack_id: value: {get_resource: userdata}",
"parameter_defaults NovaComputeOptVolumes: - /opt/dev:/opt/dev NovaLibvirtOptVolumes: - /opt/dev:/opt/dev",
"resource_registry: OS::TripleO::NodeUserData: ./hugepages.yaml",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/firstboot.yaml",
"parameter_defaults: NovaLibvirtFileBackedMemory: 102400",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"mkfs.ext4 /dev/sdb",
"mount /dev/sdb /var/lib/libvirt/qemu/ram"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-compute-nodes-for-performance_compute-performance |
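As noted in the procedure for changing the memory backing directory host disk, the disk-preparation commands above can be combined into one short sketch. The device /dev/sdb and the default mount point come from the listed commands; the findmnt check is an added, optional verification step and not part of the documented procedure:

    # Prepare an alternative disk for the libvirt memory backing directory
    # (assumes /dev/sdb is an unused device; mkfs destroys any data on it)
    mkfs.ext4 /dev/sdb
    mount /dev/sdb /var/lib/libvirt/qemu/ram
    # Optional check: the mount point must match the QemuMemoryBackingDir value
    findmnt /var/lib/libvirt/qemu/ram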
Chapter 18. Remote Management of Guests | Chapter 18. Remote Management of Guests This section explains how to remotely manage your guests.
18.1. Transport Modes
For remote management, libvirt supports the following transport modes:
Transport Layer Security (TLS) Transport Layer Security TLS 1.0 (SSL 3.1) authenticated and encrypted TCP/IP socket, usually listening on a public port number. To use this, you will need to generate client and server certificates. The standard port is 16514. For detailed instructions, see Section 18.3, "Remote Management over TLS and SSL".
SSH Transported over a Secure Shell protocol (SSH) connection. The libvirt daemon (libvirtd) must be running on the remote machine. Port 22 must be open for SSH access. You should use some sort of SSH key management (for example, the ssh-agent utility) or you will be prompted for a password. For detailed instructions, see Section 18.2, "Remote Management with SSH".
UNIX Sockets UNIX domain sockets are only accessible on the local machine. Sockets are not encrypted, and use UNIX permissions or SELinux for authentication. The standard socket names are /var/run/libvirt/libvirt-sock and /var/run/libvirt/libvirt-sock-ro (for read-only connections).
ext The ext parameter is used for any external program which can make a connection to the remote machine by means outside the scope of libvirt. This parameter is unsupported.
TCP Unencrypted TCP/IP socket. Not recommended for production use, this is normally disabled, but an administrator can enable it for testing or use over a trusted network. The default port is 16509.
The default transport, if no other is specified, is TLS.
Remote URIs
A Uniform Resource Identifier (URI) is used by virsh and libvirt to connect to a remote host. URIs can also be used with the --connect parameter for the virsh command to execute single commands or migrations on remote hosts. Remote URIs are formed by taking ordinary local URIs and adding a host name or a transport name, or both. As a special case, using a URI scheme of 'remote' will tell the remote libvirtd server to probe for the optimal hypervisor driver. This is equivalent to passing a NULL URI for a local connection.
libvirt URIs take the general form (content in square brackets, "[]", represents optional functions):
Note that if the hypervisor (driver) is QEMU, the path is mandatory. The following are examples of valid remote URIs: qemu://hostname/
The transport method or the host name must be provided to target an external location. For more information, see the libvirt upstream documentation.
Examples of remote management parameters
Connect to a remote KVM host named host2, using SSH transport and the SSH user name virtuser. The connect command for each is connect [ URI ] [--readonly]. For more information about the virsh connect command, see Section 20.4, "Connecting to the Hypervisor with virsh Connect".
Connect to a remote KVM hypervisor on the host named host2 using TLS.
Testing examples
Connect to the local KVM hypervisor with a non-standard UNIX socket. The full path to the UNIX socket is supplied explicitly in this case.
Connect to the libvirt daemon with an unencrypted TCP/IP connection to the server with the IP address 10.1.1.10 on port 5000. This uses the test driver with default settings.
Extra URI Parameters
Extra parameters can be appended to remote URIs. The table below covers the recognized parameters. All other parameters are ignored. Note that parameter values must be URI-escaped (that is, a question mark (?) is appended before the parameter and special characters are converted into the URI format).
Table 18.1. Extra URI parameters
name (all modes) The name passed to the remote virConnectOpen function. The name is normally formed by removing transport, hostname, port number, username, and extra parameters from the remote URI, but in certain very complex cases it may be better to supply the name explicitly. Example usage: name=qemu:///system
command (ssh and ext) The external command. For ext transport this is required. For ssh the default is ssh. The PATH is searched for the command. Example usage: command=/opt/openssh/bin/ssh
socket (unix and ssh) The path to the UNIX domain socket, which overrides the default. For ssh transport, this is passed to the remote netcat command (see netcat). Example usage: socket=/opt/libvirt/run/libvirt/libvirt-sock
no_verify (tls) If set to a non-zero value, this disables client checks of the server's certificate. Note that to disable server checks of the client's certificate or IP address you must change the libvirtd configuration. Example usage: no_verify=1
no_tty (ssh) If set to a non-zero value, this stops ssh from asking for a password if it cannot log in to the remote machine automatically. Use this when you do not have access to a terminal. Example usage: no_tty=1
A short virsh usage sketch based on these URIs follows the command listing for this chapter. | [
"driver[+transport]://[username@][hostname][:port]/path[?extraparameters]",
"qemu+ssh://virtuser@host2/",
"qemu://host2/",
"qemu+unix:///system?socket=/opt/libvirt/run/libvirt/libvirt-sock",
"test+tcp://10.1.1.10:5000/default"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-Remote_management_of_guests |
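To make the URI examples above concrete, the following sketch shows how they might be passed to virsh with the --connect (-c) option. The host name host2, the user virtuser, and the no_tty parameter come from this chapter; the /system path is added because the path is mandatory for the QEMU driver, and the list and hostname operations are purely illustrative:

    # SSH transport: connect as virtuser and list all domains on host2
    virsh -c qemu+ssh://virtuser@host2/system list --all

    # TLS transport (assumes client and server certificates are already configured)
    virsh -c qemu://host2/system list --all

    # Extra URI parameter: stop ssh from prompting for a password when no terminal is available
    virsh -c 'qemu+ssh://virtuser@host2/system?no_tty=1' hostname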